Hypothesis Testing Using Factor Score Regression: A Comparison of Four Methods
ERIC Educational Resources Information Center
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2016-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
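The two scoring rules compared in the article can be written down compactly. Below is a minimal numpy sketch, with simulated single-factor data and the measurement parameters treated as known, of regression (Thomson) and Bartlett factor scores followed by the naive regression step; in this simple latent-predictor/observed-outcome setup the Bartlett-score slope is attenuated while the regression-score slope is not, which is the kind of method-dependent bias the article (and Croon's correction, not implemented here) addresses in more general models. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulate a single latent predictor xi and an observed outcome eta = 0.5*xi + error
xi = rng.normal(size=n)
eta = 0.5 * xi + rng.normal(scale=0.8, size=n)

# Three indicators of xi with loadings lam and unique variances psi (treated as known here)
lam = np.array([0.8, 0.7, 0.6])
psi = np.array([0.36, 0.51, 0.64])
X = xi[:, None] * lam + rng.normal(size=(n, 3)) * np.sqrt(psi)

Sigma = np.outer(lam, lam) + np.diag(psi)   # model-implied covariance of the indicators

# Regression (Thomson) factor scores: Phi * lam' * Sigma^{-1} * x, with Phi = Var(xi) = 1
w_reg = np.linalg.solve(Sigma, lam)
f_reg = X @ w_reg

# Bartlett factor scores: (lam' Psi^{-1} lam)^{-1} lam' Psi^{-1} x
w_bart = (lam / psi) / np.sum(lam ** 2 / psi)
f_bart = X @ w_bart

# Naive factor score regression: regress the outcome on the factor scores
for name, f in [("regression FSR", f_reg), ("Bartlett FSR", f_bart)]:
    slope = np.cov(f, eta)[0, 1] / np.var(f, ddof=1)
    print(name, "slope estimate:", round(slope, 3), "(true value 0.5)")
```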
Method for exploiting bias in factor analysis using constrained alternating least squares algorithms
Keenan, Michael R.
2008-12-30
Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
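For orientation, the baseline that the patented bias-offsetting algorithm modifies is ordinary constrained alternating least squares. The sketch below shows a crude non-negativity-constrained ALS in numpy (clipping after each unconstrained solve); it is not the patented method, and the data and component count are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic mixture data: X (pixels x channels) = C (pixels x k) @ S.T (k x channels) + noise
k = 3
C_true = rng.random((200, k))
S_true = rng.random((50, k))
X = C_true @ S_true.T + 0.01 * rng.normal(size=(200, 50))

# Standard non-negativity-constrained alternating least squares (the baseline, not the patent)
C = rng.random((200, k))
for _ in range(200):
    # Solve X ~= C S.T for S, then clip to impose the non-negativity constraint
    S = np.linalg.lstsq(C, X, rcond=None)[0].T.clip(min=0)
    # Solve X ~= C S.T for C, then clip
    C = np.linalg.lstsq(S, X.T, rcond=None)[0].T.clip(min=0)

print("relative residual:", np.linalg.norm(X - C @ S.T) / np.linalg.norm(X))
```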
Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita
2009-12-01
Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean=-0.0044, standard error=0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared to both MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. 2009 Wiley-Liss, Inc.
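A generic sketch of the propensity-score idea behind the GPS approach is shown below: the "exposure" model combines genetic structure (leading principal components of a genotype matrix) with a non-genetic covariate, and the fitted score is then included in the outcome model. The variable names and the carrier/non-carrier dichotomisation are hypothetical simplifications, not the authors' exact specification.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000

# Hypothetical inputs: genotype matrix (0/1/2 allele counts), a non-genetic covariate,
# a candidate SNP, and a binary disease outcome
genotypes = rng.integers(0, 3, size=(n, 500)).astype(float)
age = rng.normal(50, 10, size=n)
snp = rng.integers(0, 3, size=n).astype(float)
disease = rng.integers(0, 2, size=n)

# Genetic + non-genetic covariates summarising population structure
pcs = PCA(n_components=5).fit_transform(genotypes)
covars = np.column_stack([pcs, age])

# "Genomic propensity score": probability of carrying the variant given the covariates
# (the SNP is dichotomised into carrier / non-carrier purely for illustration)
carrier = (snp > 0).astype(int)
gps = LogisticRegression(max_iter=1000).fit(covars, carrier).predict_proba(covars)[:, 1]

# Outcome model for disease ~ SNP, adjusted for the fitted propensity score
design = np.column_stack([snp, gps])
fit = LogisticRegression(max_iter=1000).fit(design, disease)
print("adjusted log-odds ratio for the SNP:", round(fit.coef_[0][0], 3))
```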
NASA Astrophysics Data System (ADS)
Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.
2018-01-01
Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India for assessing the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall compared with using monthly factors. Methods such as local intensity scaling, modified power transformation and distribution mapping, which adjusted the wet day frequencies, performed better than the methods that did not consider adjustment of wet day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India. Hydrological simulations forced with the bias corrected rainfall (distribution mapping and modified power transformation methods that used the proposed daily correction factors) were similar to those forced with the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and consequently the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors can be preferred for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
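The core of the proposed scheme, a correction factor computed per calendar day from a sliding window of neighbouring days pooled across years, can be sketched as follows for the simplest (linear-scaling) variant. The gauge and RCM series, the window half-width and the wrap-around handling are all illustrative assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical daily series for a common historical period (gauge obs vs. RCM rainfall, mm/day)
dates = pd.date_range("1981-01-01", "2000-12-31", freq="D")
obs = pd.Series(rng.gamma(0.6, 6.0, len(dates)), index=dates)
rcm = pd.Series(rng.gamma(0.5, 9.0, len(dates)), index=dates)

window = 15  # half-width in days of the sliding window centred on each calendar day

def daily_scaling_factors(obs, rcm, window):
    """Linear-scaling correction factor per calendar day, pooled over a +/- window of
    calendar days across all years (a sliding-window analogue of monthly factors)."""
    doy_obs = obs.index.dayofyear.to_numpy()
    doy_rcm = rcm.index.dayofyear.to_numpy()
    factors = {}
    for d in range(1, 367):
        # circular distance in day-of-year, so the window wraps around the year end
        sel_o = np.minimum(np.abs(doy_obs - d), 366 - np.abs(doy_obs - d)) <= window
        sel_r = np.minimum(np.abs(doy_rcm - d), 366 - np.abs(doy_rcm - d)) <= window
        factors[d] = obs[sel_o].mean() / max(rcm[sel_r].mean(), 1e-9)
    return pd.Series(factors)

factors = daily_scaling_factors(obs, rcm, window)
rcm_corrected = rcm * factors.loc[rcm.index.dayofyear].to_numpy()
print("raw bias:", round(rcm.mean() - obs.mean(), 2),
      "corrected bias:", round(rcm_corrected.mean() - obs.mean(), 2))
```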
Correction factors for self-selection when evaluating screening programmes.
Spix, Claudia; Berthold, Frank; Hero, Barbara; Michaelis, Jörg; Schilling, Freimut H
2016-03-01
In screening programmes there is recognized bias introduced through participant self-selection (the healthy screenee bias). Methods used to evaluate screening programmes include Intention-to-screen, per-protocol, and the "post hoc" approach in which, after introducing screening for everyone, the only evaluation option is participants versus non-participants. All methods are prone to bias through self-selection. We present an overview of approaches to correct for this bias. We considered four methods to quantify and correct for self-selection bias. Simple calculations revealed that these corrections are actually all identical, and can be converted into each other. Based on this, correction factors for further situations and measures were derived. The application of these correction factors requires a number of assumptions. Using as an example the German Neuroblastoma Screening Study, no relevant reduction in mortality or stage 4 incidence due to screening was observed. The largest bias (in favour of screening) was observed when comparing participants with non-participants. Correcting for bias is particularly necessary when using the post hoc evaluation approach, however, in this situation not all required data are available. External data or further assumptions may be required for estimation. © The Author(s) 2015.
Lee, Wen-Chung
2014-02-05
The randomized controlled study is the gold-standard research method in biomedicine. In contrast, the validity of a (nonrandomized) observational study is often questioned because of unknown/unmeasured factors, which may have confounding and/or effect-modifying potential. In this paper, the author proposes a perturbation test to detect the bias of unmeasured factors and a perturbation adjustment to correct for such bias. The proposed method circumvents the problem of measuring unknowns by collecting the perturbations of unmeasured factors instead. Specifically, a perturbation is a variable that is readily available (or can be measured easily) and is potentially associated, though perhaps only very weakly, with unmeasured factors. The author conducted extensive computer simulations to provide a proof of concept. The simulations show that, as the number of perturbation variables obtained from data mining increased, the power of the perturbation test increased progressively, up to nearly 100%. In addition, after the perturbation adjustment, the bias decreased progressively, down to nearly 0%. The data-mining perturbation analysis described here is recommended for use in detecting and correcting the bias of unmeasured factors in observational studies.
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in the measurement of a risk factor introduce downward bias into the estimated association with a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies, with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
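The slope correction described here reduces, in the simplest case, to dividing the naive slope by the reliability ratio estimated from a reliability study with repeated measurements. A small numpy sketch with simulated data (all parameters hypothetical) follows.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 800, 200     # main-study size and reliability-study size (hypothetical)

# True risk factor, error-prone measurements, and a continuous outcome
x_true = rng.normal(size=n)
sigma_u = 0.7                                  # measurement-error SD
x_obs = x_true + rng.normal(scale=sigma_u, size=n)
y = 1.0 + 2.0 * x_true + rng.normal(size=n)    # true slope = 2.0

# Naive slope from the main study is attenuated towards zero
naive_slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Reliability study: repeated measurements of the risk factor on m further individuals
x_rel_true = rng.normal(size=m)
rep1 = x_rel_true + rng.normal(scale=sigma_u, size=m)
rep2 = x_rel_true + rng.normal(scale=sigma_u, size=m)

# Reliability ratio = var(true) / var(observed), estimated from the replicate pairs
within_var = np.var(rep1 - rep2, ddof=1) / 2.0        # estimates sigma_u^2
total_var = np.var(np.concatenate([rep1, rep2]), ddof=1)
reliability = 1.0 - within_var / total_var

corrected_slope = naive_slope / reliability
print("naive:", round(naive_slope, 2), "corrected:", round(corrected_slope, 2), "true: 2.0")
```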
Detecting Social Desirability Bias Using Factor Mixture Models
ERIC Educational Resources Information Center
Leite, Walter L.; Cooper, Lou Ann
2010-01-01
Based on the conceptualization that social desirability bias (SDB) is a discrete event resulting from an interaction between a scale's items, the testing situation, and the respondent's latent trait on a social desirability factor, we present a method that makes use of factor mixture models to identify which examinees are most likely to provide…
Cancer Survival Estimates Due to Non-Uniform Loss to Follow-Up and Non-Proportional Hazards
K M, Jagathnath Krishna; Mathew, Aleyamma; Sara George, Preethi
2017-06-25
Background: Cancer survival estimates depend on loss to follow-up (LFU) and non-proportional hazards (non-PH). If LFU is high, survival will be over-estimated. If a hazard is non-PH, rank tests will provide biased inference and the Cox model will provide a biased hazard ratio. We assessed the bias due to LFU and a non-PH factor in cancer survival and provide alternate methods for unbiased inference and hazard ratios. Materials and Methods: Kaplan-Meier survival curves were plotted using a realistic breast cancer (BC) data-set with >40% 5-year LFU and compared with another BC data-set with <15% 5-year LFU to assess the bias in survival due to high LFU. Age at diagnosis in the latter data-set was used to illustrate the bias due to a non-PH factor. The log-rank test was employed to assess the bias in the p-value and the Cox model was used to assess the bias in the hazard ratio for the non-PH factor. The Schoenfeld statistic was used to test the non-proportionality of age. For the non-PH factor, we employed the Renyi statistic for inference and a time-dependent Cox model for the hazard ratio. Results: Five-year BC survival was 69% (SE: 1.1%) vs. 90% (SE: 0.7%) for data with low vs. high LFU, respectively. Age (<45, 46-54 and >54 years) was a non-PH factor (p-value: 0.036). Survival by age was significant with the log-rank test (p-value: 0.026) but not with the Renyi statistic (p=0.067). The hazard ratio (HR) for age using the Cox model was 1.012 (95% CI: 1.004-1.019), while the time-dependent Cox model gave an estimate in the other direction (HR: 0.997; 95% CI: 0.997-0.998). Conclusion: Over-estimated survival was observed for cancer data with high LFU. The log-rank statistic and Cox model provided biased results for the non-PH factor. For data with non-PH factors, the Renyi statistic and a time-dependent Cox model can be used as alternate methods to obtain unbiased inference and estimates.
Verdam, Mathilde G. E.; Oort, Frans J.
2014-01-01
Highlights Application of Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data. A method for the investigation of measurement bias with Kronecker product restricted models. Application of these methods to health-related quality of life data from bone metastasis patients, collected at 13 consecutive measurement occasions. The use of curves to facilitate substantive interpretation of apparent measurement bias. Assessment of change in common factor means, after accounting for apparent measurement bias. Longitudinal measurement invariance is usually investigated with a longitudinal factor model (LFM). However, with multiple measurement occasions, the number of parameters to be estimated increases with a multiple of the number of measurement occasions. To guard against too low ratios of numbers of subjects and numbers of parameters, we can use Kronecker product restrictions to model the multivariate longitudinal structure of the data. These restrictions can be imposed on all parameter matrices, including measurement invariance restrictions on factor loadings and intercepts. The resulting models are parsimonious and have attractive interpretation, but require different methods for the investigation of measurement bias. Specifically, additional parameter matrices are introduced to accommodate possible violations of measurement invariance. These additional matrices consist of measurement bias parameters that are either fixed at zero or free to be estimated. In cases of measurement bias, it is also possible to model the bias over time, e.g., with linear or non-linear curves. Measurement bias detection with Kronecker product restricted models will be illustrated with multivariate longitudinal data from 682 bone metastasis patients whose health-related quality of life (HRQL) was measured at 13 consecutive weeks. PMID:25295016
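The parsimony argument behind the Kronecker restriction is easy to see numerically: the full covariance matrix of k variables at t occasions is modelled as a Kronecker product of a t x t occasion matrix and a k x k measurement matrix. The numpy sketch below only illustrates this structure and the reduction in free parameters; the actual models in the paper additionally impose invariance restrictions and measurement bias parameters, and the building blocks used here are hypothetical.

```python
import numpy as np

# Separable (Kronecker) structure for multivariate longitudinal data:
# k variables measured at t occasions, so the full covariance matrix is (k*t) x (k*t).
k, t = 5, 13

# Hypothetical building blocks: an occasion-level correlation matrix (AR(1)-like)
# and a variable-level covariance matrix implied by a one-factor measurement model.
rho = 0.8
Sigma_time = rho ** np.abs(np.subtract.outer(np.arange(t), np.arange(t)))
loadings = np.array([0.8, 0.7, 0.75, 0.6, 0.65])
uniques = 1.0 - loadings ** 2
Sigma_meas = np.outer(loadings, loadings) + np.diag(uniques)

# Kronecker product restriction: Sigma = Sigma_time (x) Sigma_meas
Sigma = np.kron(Sigma_time, Sigma_meas)

free_unrestricted = (k * t) * (k * t + 1) // 2
free_kronecker = t * (t + 1) // 2 + k * (k + 1) // 2
print("matrix size:", Sigma.shape)
print("free covariance elements, unrestricted vs. Kronecker:",
      free_unrestricted, "vs.", free_kronecker)
```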
Adjusting for partial verification or workup bias in meta-analyses of diagnostic accuracy studies.
de Groot, Joris A H; Dendukuri, Nandini; Janssen, Kristel J M; Reitsma, Johannes B; Brophy, James; Joseph, Lawrence; Bossuyt, Patrick M M; Moons, Karel G M
2012-04-15
A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.
Accounting for Selection Bias in Studies of Acute Cardiac Events.
Banack, Hailey R; Harper, Sam; Kaufman, Jay S
2018-06-01
In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability of censoring weights are a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse probability of censoring weighted model. In unweighted multivariable analyses the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the inverse probability of censoring weighted analyses, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Estimates from the inverse probability of censoring weighted model were substantially greater than unweighted, adjusted estimates across all risk factor categories. This illustrates the magnitude of selection bias due to pre-hospital mortality and its effect on estimates of the association between CVD risk factors and mortality. Moreover, the results highlight the utility of using this method to address a common form of bias in cardiovascular research. Copyright © 2018 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
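A schematic version of the inverse-probability-of-censoring-weighting analysis is sketched below with simulated data: selection into the study (survival to hospital) depends on measured covariates, the naive analysis is restricted to the selected, and the weighted analysis reweights the selected back towards the full cohort. The data-generating values and the use of a simple logistic outcome model (rather than the log-link GLM of the article) are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 20000

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical cohort: CVD risk factor count, age, pre-hospital survival ("selected"), death
risk = rng.integers(0, 6, size=n).astype(float)
age = rng.normal(60, 10, size=n)
selected = rng.random(n) < expit(6.0 - 0.3 * risk - 0.06 * age)   # pre-hospital deaths dropped
death = (rng.random(n) < expit(-8.0 + 0.25 * risk + 0.10 * age)).astype(int)

X = risk.reshape(-1, 1)

# Target: marginal association of the risk-factor count with death in the *full* cohort
full = LogisticRegression(max_iter=1000).fit(X, death)

# Naive analysis restricted to those who survived to hospital
naive = LogisticRegression(max_iter=1000).fit(X[selected], death[selected])

# IPCW: model P(selected | risk, age), weight the selected subjects by 1 / fitted probability
Z = np.column_stack([risk, age])
p_sel = LogisticRegression(max_iter=1000).fit(Z, selected.astype(int)).predict_proba(Z)[selected, 1]
ipcw = LogisticRegression(max_iter=1000).fit(X[selected], death[selected],
                                             sample_weight=1.0 / p_sel)

for label, model in [("full cohort", full), ("selected only", naive), ("IPCW weighted", ipcw)]:
    print(label, "log-odds per risk factor:", round(model.coef_[0][0], 3))
```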
Liberal bias and the five-factor model.
Charney, Evan
2015-01-01
Duarte et al. draw attention to the "embedding of liberal values and methods" in social psychological research. They note how these biases are often invisible to the researchers themselves. The authors themselves fall prey to these "invisible biases" by utilizing the five-factor model of personality and the trait of openness to experience as one possible explanation for the under-representation of political conservatives in social psychology. I show that the manner in which the trait of openness to experience is conceptualized and measured is a particularly blatant example of the very liberal bias the authors decry.
[On-orbit radiometric calibration accuracy of FY-3A MERSI thermal infrared channel].
Xu, Na; Hu, Xiu-qing; Chen, Lin; Zhang, Yong; Hu, Ju-yang; Sun, Ling
2014-12-01
Accurate satellite radiance measurements are important for data assimilation and quantitative retrieval applications. In the present paper, the radiometric calibration accuracy of the FengYun-3A (FY-3A) Medium Resolution Spectral Imager (MERSI) thermal infrared (TIR) channel was evaluated using the simultaneous nadir observation (SNO) intercalibration method. Hyperspectral, high-quality measurements from METOP-A/IASI were used as the reference. The assessment uncertainty of the intercalibration method was also investigated by examining the relation between the brightness temperature (BT) bias and four main collocation factors, i.e., observation time difference, viewing geometry differences related to zenith and azimuth angles, and scene spatial homogeneity. The BT bias is evenly distributed across the collocation variables, with no significant linear relationship in the MERSI TIR channel. Among the four collocation factors, scene spatial homogeneity may be the most important, contributing an uncertainty of less than 2% of the BT bias. Statistical analysis of the monitored biases over one and a half years indicates that the brightness temperature measured by MERSI is much warmer than that of IASI. The annual mean bias (MERSI-IASI) in 2012 is (3.18±0.34) K. Monthly averaged BT biases show a slight seasonal variation, with a fluctuation range of less than 0.8 K. To further verify the reliability, our evaluation result was also compared with synchronous experiment results at the Dunhuang and Qinghai Lake sites, which showed excellent agreement. Preliminary analysis indicates two reasons for the warm bias: one is the overestimation of blackbody emissivity, and the other is probably an incorrect spectral response function (SRF) that has shifted towards the window spectral region. Considering the variation of the BT biases, the SRF error seems to be the dominant factor.
ERIC Educational Resources Information Center
Davidow, Joseph; Levinson, Edward M.
1993-01-01
Describes factors that may bias psychoeducational decision making and discusses three heuristic principles that affect decision making. Discusses means by which school psychologists can be made aware of these heuristic principles and encouraged to consider them when making psychoeducational decisions. Also discusses methods by which bias in…
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
1983-01-01
Laser systems deployed in satellite tracking were upgraded to accuracy levels where biases from systematic unmodelled effects constitute the basic factor that prohibits extraction of the full amount of information contained in the observations. Taking into consideration that the quality of the instrument advances at a faster pace compared to the understanding and modeling of the physical processes involved, one can foresee that in the near future when all lasers are replaced with third generation ones the limiting factor for the estimated accuracies will be the aforementioned biases. Therefore, for the reduction of the observations, methods should be deployed in such a way that the effect of the biases will be kept well below the noise level. Such a method was proposed and studied. This method consists of using the observed part of the satellite pass and converting the laser ranges into range differences in hopes that they will be less affected by biases in the orbital models, the reference system, and the observations themselves.
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
Comparison of estimators of standard deviation for hydrologic time series
Tasker, Gary D.; Gilroy, Edward J.
1982-01-01
Unbiasing factors as a function of serial correlation, ρ, and sample size, n, for the sample standard deviation of a lag one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation σ of a lag one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. Three methods provided estimates of σ which were much less biased but had greater mean square errors than the usual estimate of σ: $s = \left(\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2\right)^{1/2}$. The three methods may be briefly characterized as (1) a method using a maximum likelihood estimate of the unbiasing factor, (2) a method using an empirical Bayes estimate of the unbiasing factor, and (3) a robust nonparametric estimate of σ suggested by Quenouille. Because s tends to underestimate σ, its use as an estimate of a model parameter results in a tendency to underdesign. If underdesign losses are considered more serious than overdesign losses, then the choice of one of the less biased methods may be wise.
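The simulation idea, estimating how much the usual sample standard deviation under-measures σ for an AR(1) series and turning that into an unbiasing factor, can be reproduced in a few lines; the sample sizes, serial correlations and replication counts below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

def ar1_series(n, rho, sigma=1.0):
    """Generate a stationary lag-one autoregressive series with marginal SD sigma."""
    x = np.empty(n)
    x[0] = rng.normal(scale=sigma)
    innov_sd = sigma * np.sqrt(1.0 - rho ** 2)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal(scale=innov_sd)
    return x

def unbiasing_factor(n, rho, reps=5000):
    """Monte Carlo estimate of sigma / E[s] for the usual sample SD s (ddof=1)."""
    s = np.array([np.std(ar1_series(n, rho), ddof=1) for _ in range(reps)])
    return 1.0 / s.mean()

for rho in (0.0, 0.3, 0.6):
    print(f"n=20, rho={rho}: multiply s by ~{unbiasing_factor(20, rho):.3f}")
```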
A new method to measure galaxy bias by combining the density and weak lensing fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pujol, Arnau; Chang, Chihway; Gaztañaga, Enrique
We present a new method to measure redshift-dependent galaxy bias by combining information from the galaxy density field and the weak lensing field. This method is based on the work of Amara et al., who use the galaxy density field to construct a bias-weighted convergence field κg. The main difference between Amara et al.'s work and our new implementation is that here we present another way to measure galaxy bias, using tomography instead of bias parametrizations. The correlation between κg and the true lensing field κ allows us to measure galaxy bias using different zero-lag correlations between the two fields. Our method measures the linear bias factor on linear scales, under the assumption of no stochasticity between galaxies and matter. We use the Marenostrum Institut de Ciències de l'Espai (MICE) simulation to measure the linear galaxy bias for a flux-limited sample (i < 22.5) in tomographic redshift bins using this method. This is the first article to study the accuracy and systematic uncertainties associated with the implementation of the method, and the regime in which it is consistent with the linear galaxy bias defined by projected two-point correlation functions (2PCF). We find that our method is consistent with a linear bias at the per cent level for scales larger than 30 arcmin, while non-linearities appear at smaller scales. This measurement is a good complement to other measurements of bias, since it does not depend strongly on σ8 as the 2PCF measurements do. We will apply this method to the Dark Energy Survey Science Verification data in a follow-up article.
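Under the stated assumptions (linear bias, no stochasticity), a zero-lag estimator of the bias is just a ratio of map averages. The toy numpy sketch below uses κg = b·κ plus noise to show why a cross-correlation ratio is preferable to an auto-correlation ratio; it is a schematic, not the tomographic implementation of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy maps: true convergence kappa and a bias-weighted convergence kappa_g
# built as kappa_g = b * kappa + noise (linear bias, no additional stochasticity).
npix = 512 * 512
true_bias = 1.4
kappa = rng.normal(scale=0.02, size=npix)
kappa_g = true_bias * kappa + rng.normal(scale=0.005, size=npix)   # shape-noise-like term

# Zero-lag correlation estimators of the linear bias
b_from_cross = np.mean(kappa_g * kappa) / np.mean(kappa * kappa)             # <kg k> / <k k>
b_from_auto = np.sqrt(np.mean(kappa_g * kappa_g) / np.mean(kappa * kappa))   # noise-sensitive

print("bias from <kg k>/<k k>      :", round(b_from_cross, 3))
print("bias from sqrt(<kg kg>/<k k>):", round(b_from_auto, 3), "(inflated by the noise term)")
```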
Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.
Ripple, Dean C; Hu, Zhishang
2016-03-01
Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.
Behura, Susanta K; Severson, David W
2013-02-01
Codon usage bias refers to the phenomenon where specific codons are used more often than other synonymous codons during translation of genes, the extent of which varies within and among species. Molecular evolutionary investigations suggest that codon bias is manifested as a result of balance between mutational and translational selection of such genes and that this phenomenon is widespread across species and may contribute to genome evolution in a significant manner. With the advent of whole-genome sequencing of numerous species, both prokaryotes and eukaryotes, genome-wide patterns of codon bias are emerging in different organisms. Various factors such as expression level, GC content, recombination rates, RNA stability, codon position, gene length and others (including environmental stress and population size) can influence codon usage bias within and among species. Moreover, there has been a continuous quest towards developing new concepts and tools to measure the extent of codon usage bias of genes. In this review, we outline the fundamental concepts of evolution of the genetic code, discuss various factors that may influence biased usage of synonymous codons and then outline different principles and methods of measurement of codon usage bias. Finally, we discuss selected studies performed using whole-genome sequences of different insect species to show how codon bias patterns vary within and among genomes. We conclude with generalized remarks on specific emerging aspects of codon bias studies and highlight the recent explosion of genome-sequencing efforts on arthropods (such as twelve Drosophila species, species of ants, honeybee, Nasonia and Anopheles mosquitoes as well as the recent launch of a genome-sequencing project involving 5000 insects and other arthropods) that may help us to understand better the evolution of codon bias and its biological significance. © 2012 The Authors. Biological Reviews © 2012 Cambridge Philosophical Society.
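One widely used measure of the kind discussed here is the relative synonymous codon usage (RSCU): the observed count of a codon divided by the count expected if all synonymous codons were used equally. A minimal Python sketch, using only a small illustrative subset of the genetic code and a made-up coding sequence, is shown below.

```python
from collections import Counter

# Relative synonymous codon usage (RSCU): observed count of a codon divided by the
# mean count over its synonymous family; RSCU = 1 means no preference, > 1 means preferred.
SYNONYMOUS_FAMILIES = {
    "Phe": ["TTT", "TTC"],
    "Lys": ["AAA", "AAG"],
    "Gly": ["GGT", "GGC", "GGA", "GGG"],
}  # small illustrative subset of the standard genetic code

def rscu(coding_sequence):
    codons = [coding_sequence[i:i + 3] for i in range(0, len(coding_sequence) - 2, 3)]
    counts = Counter(codons)
    values = {}
    for family in SYNONYMOUS_FAMILIES.values():
        total = sum(counts[c] for c in family)
        if total == 0:
            continue
        expected = total / len(family)   # equal use of all synonymous codons
        for c in family:
            values[c] = counts[c] / expected
    return values

cds = "ATGTTTTTCTTTAAAAAAAAGGGTGGCGGTGGT"  # hypothetical in-frame coding sequence
for codon, value in sorted(rscu(cds).items()):
    print(codon, round(value, 2))
```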
Latkin, Carl A; Edwards, Catie; Davey-Rothwell, Melissa A; Tobin, Karin E
2017-10-01
Social desirability response bias may lead to inaccurate self-reports and erroneous study conclusions. The present study examined the relationship between social desirability response bias and self-reports of mental health, substance use, and social network factors among a community sample of inner-city substance users. The study was conducted in a sample of 591 opiate and cocaine users in Baltimore, Maryland from 2009 to 2013. Modified items from the Marlowe-Crowne Social Desirability Scale were included in the survey, which was conducted face-to-face and using Audio Computer Self Administering Interview (ACASI) methods. There were highly statistically significant differences in levels of social desirability response bias by levels of depressive symptoms, drug use stigma, physical health status, recent opiate and cocaine use, Alcohol Use Disorders Identification Test (AUDIT) scores, and size of social networks. There were no associations between health service utilization measures and social desirability bias. In multiple logistic regression models, even after including the Center for Epidemiologic Studies Depression Scale (CES-D) as a measure of depressive symptomology, social desirability bias was associated with recent drug use and drug user stigma. Social desirability bias was not associated with enrollment in prior research studies. These findings suggest that social desirability bias is associated with key health measures and that the associations are not primarily due to depressive symptoms. Methods are needed to reduce social desirability bias. Such methods may include the wording and prefacing of questions, clearly defining the role of "study participant," and assessing and addressing motivations for socially desirable responses. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hyperdynamics boost factor achievable with an ideal bias potential
Huang, Chen; Perez, Danny; Voter, Arthur F.
2015-08-20
Hyperdynamics is a powerful method to significantly extend the time scales amenable to molecular dynamics simulation of infrequent events. One outstanding challenge, however, is the development of the so-called bias potential required by the method. In this work, we design a bias potential using information about all minimum energy pathways (MEPs) out of the current state. While this approach is not suitable for use in an actual hyperdynamics simulation, because the pathways are generally not known in advance, it allows us to show that it is possible to come very close to the theoretical boost limit of hyperdynamics while maintaining high accuracy. We demonstrate this by applying this MEP-based hyperdynamics (MEP-HD) to metallic surface diffusion systems. In most cases, MEP-HD gives boost factors that are orders of magnitude larger than the best existing bias potential, indicating that further development of hyperdynamics bias potentials could have a significant payoff. Lastly, we discuss potential practical uses of MEP-HD, including the possibility of developing MEP-HD into a true hyperdynamics.
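In Voter's hyperdynamics formalism, the boost factor is the trajectory average of exp(ΔV/kT), where ΔV is the bias potential evaluated along the biased trajectory. The sketch below only evaluates that average for hypothetical bias values; it is not the MEP-based construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hyperdynamics boost factor: the accelerated time equals the molecular-dynamics time
# multiplied by the trajectory average of exp(dV / kT), with dV the bias potential
# evaluated at each visited configuration.
kT = 0.025  # eV, roughly room temperature

# Hypothetical bias-potential values sampled along a biased MD trajectory (eV).
# The bias must vanish at the dividing surfaces to preserve the state-to-state dynamics,
# so dV is taken as non-negative and largest deep inside the basin.
dV = np.clip(rng.normal(loc=0.15, scale=0.05, size=100000), 0.0, None)

boost = np.mean(np.exp(dV / kT))
print(f"estimated boost factor: {boost:.1f}")
```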
Adjusted Analyses in Studies Addressing Therapy and Harm: Users' Guides to the Medical Literature.
Agoritsas, Thomas; Merglen, Arnaud; Shah, Nilay D; O'Donnell, Martin; Guyatt, Gordon H
2017-02-21
Observational studies almost always have bias because prognostic factors are unequally distributed between patients exposed or not exposed to an intervention. The standard approach to dealing with this problem is adjusted or stratified analysis. Its principle is to use measurement of risk factors to create prognostically homogeneous groups and to combine effect estimates across groups. The purpose of this Users' Guide is to introduce readers to fundamental concepts underlying adjustment as a way of dealing with prognostic imbalance and to the basic principles and relative trustworthiness of various adjustment strategies. One alternative to the standard approach is propensity analysis, in which groups are matched according to the likelihood of membership in exposed or unexposed groups. Propensity methods can deal with multiple prognostic factors, even if there are relatively few patients having outcome events. However, propensity methods do not address other limitations of traditional adjustment: investigators may not have measured all relevant prognostic factors (or not accurately), and unknown factors may bias the results. A second approach, instrumental variable analysis, relies on identifying a variable associated with the likelihood of receiving the intervention but not associated with any prognostic factor or with the outcome (other than through the intervention); this could mimic randomization. However, as with assumptions of other adjustment approaches, it is never certain if an instrumental variable analysis eliminates bias. Although all these approaches can reduce the risk of bias in observational studies, none replace the balance of both known and unknown prognostic factors offered by randomization.
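The instrumental-variable idea can be illustrated with a two-stage least squares sketch on simulated data, where an unmeasured prognostic factor confounds the naive estimate but the instrument recovers the treatment effect. Everything below is a toy construction, not a recipe for real observational data, where the instrument assumptions can rarely be verified.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 50000

# Hypothetical setup: an unmeasured prognostic factor u confounds treatment and outcome,
# while the instrument z affects treatment only (mimicking randomization).
u = rng.normal(size=n)                               # unmeasured confounder
z = rng.integers(0, 2, size=n).astype(float)         # instrument (e.g., regional preference)
treat = (rng.random(n) < 1 / (1 + np.exp(-(-0.5 + 1.5 * z + 1.0 * u)))).astype(float)
y = 2.0 * treat + 1.5 * u + rng.normal(size=n)       # true treatment effect = 2.0

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = ols_slope(treat, y)                # biased by the unmeasured factor u

# Two-stage least squares: stage 1 predicts treatment from the instrument,
# stage 2 regresses the outcome on the predicted treatment.
stage1 = np.column_stack([np.ones_like(z), z])
treat_hat = stage1 @ np.linalg.lstsq(stage1, treat, rcond=None)[0]
iv = ols_slope(treat_hat, y)

print("naive estimate:", round(naive, 2))
print("instrumental variable (2SLS) estimate:", round(iv, 2), "(true effect 2.0)")
```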
Dechartres, Agnes; Trinquart, Ludovic; Atal, Ignacio; Moher, David; Dickersin, Kay; Boutron, Isabelle; Perrodeau, Elodie; Altman, Douglas G; Ravaud, Philippe
2017-06-08
Objective To examine how poor reporting and inadequate methods for key methodological features in randomised controlled trials (RCTs) have changed over the past three decades. Design Mapping of trials included in Cochrane reviews. Data sources Data from RCTs included in all Cochrane reviews published between March 2011 and September 2014 reporting an evaluation of the Cochrane risk of bias items: sequence generation, allocation concealment, blinding, and incomplete outcome data. Data extraction For each RCT, we extracted the consensus risk of bias assessment made by the review authors and identified the primary reference to extract publication year and journal. We matched journal names with Journal Citation Reports to obtain 2014 impact factors. Main outcome measures We considered the proportions of trials rated by review authors at unclear and high risk of bias as surrogates for poor reporting and inadequate methods, respectively. Results We analysed 20 920 RCTs (from 2001 reviews) published in 3136 journals. The proportion of trials with unclear risk of bias was 48.7% for sequence generation and 57.5% for allocation concealment; the proportion of those with high risk of bias was 4.0% and 7.2%, respectively. For blinding and incomplete outcome data, 30.6% and 24.7% of trials were at unclear risk and 33.1% and 17.1% were at high risk, respectively. Higher journal impact factor was associated with a lower proportion of trials at unclear or high risk of bias. The proportion of trials at unclear risk of bias decreased over time, especially for sequence generation, which fell from 69.1% in 1986-1990 to 31.2% in 2011-14, and for allocation concealment (70.1% to 44.6%). After excluding trials at unclear risk of bias, use of inadequate methods also decreased over time: from 14.8% to 4.6% for sequence generation and from 32.7% to 11.6% for allocation concealment. Conclusions Poor reporting and inadequate methods have decreased over time, especially for sequence generation and allocation concealment. But more could be done, especially in lower impact factor journals. Published by the BMJ Publishing Group Limited.
Identifying items to assess methodological quality in physical therapy trials: a factor analysis.
Armijo-Olivo, Susan; Cummings, Greta G; Fuentes, Jorge; Saltaji, Humam; Ha, Christine; Chisholm, Annabritt; Pasichnyk, Dion; Rogers, Todd
2014-09-01
Numerous tools and individual items have been proposed to assess the methodological quality of randomized controlled trials (RCTs). The frequency of use of these items varies according to health area, which suggests a lack of agreement regarding their relevance to trial quality or risk of bias. The objectives of this study were: (1) to identify the underlying component structure of items and (2) to determine relevant items to evaluate the quality and risk of bias of trials in physical therapy by using an exploratory factor analysis (EFA). A methodological research design was used, and an EFA was performed. Randomized controlled trials used for this study were randomly selected from searches of the Cochrane Database of Systematic Reviews. Two reviewers used 45 items gathered from 7 different quality tools to assess the methodological quality of the RCTs. An exploratory factor analysis was conducted using the principal axis factoring (PAF) method followed by varimax rotation. Principal axis factoring identified 34 items loaded on 9 common factors: (1) selection bias; (2) performance and detection bias; (3) eligibility, intervention details, and description of outcome measures; (4) psychometric properties of the main outcome; (5) contamination and adherence to treatment; (6) attrition bias; (7) data analysis; (8) sample size; and (9) control and placebo adequacy. Because of the exploratory nature of the results, a confirmatory factor analysis is needed to validate this model. To the authors' knowledge, this is the first factor analysis to explore the underlying component items used to evaluate the methodological quality or risk of bias of RCTs in physical therapy. The items and factors represent a starting point for evaluating the methodological quality and risk of bias in physical therapy trials. Empirical evidence of the association among these items with treatment effects and a confirmatory factor analysis of these results are needed to validate these items. © 2014 American Physical Therapy Association.
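A sketch of the analysis pipeline described in the abstract, principal axis factoring followed by varimax rotation, is shown below on simulated item scores. It assumes the third-party factor_analyzer package is available; the number of items per factor, the loading values and the retention threshold mentioned in the comment are illustrative, not the study's data.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumes the third-party factor_analyzer package

rng = np.random.default_rng(10)
n_trials, n_items, n_factors = 300, 45, 9

# Hypothetical quality-item ratings: each block of five items loads on one common factor
factors = rng.normal(size=(n_trials, n_factors))
loadings = np.zeros((n_items, n_factors))
for j in range(n_factors):
    loadings[j * 5:(j + 1) * 5, j] = rng.uniform(0.5, 0.9, size=5)
items = factors @ loadings.T + rng.normal(scale=0.6, size=(n_trials, n_items))
data = pd.DataFrame(items, columns=[f"item_{i+1}" for i in range(n_items)])

# Principal axis factoring followed by varimax rotation, as described in the abstract
efa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="varimax")
efa.fit(data)

rotated = pd.DataFrame(efa.loadings_, index=data.columns)
print(rotated.round(2).head(10))   # items with loadings above ~0.4 would typically be retained
```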
Calibration of a rotating accelerometer gravity gradiometer using centrifugal gradients
NASA Astrophysics Data System (ADS)
Yu, Mingbiao; Cai, Tijing
2018-05-01
The purpose of this study is to calibrate scale factors and equivalent zero biases of a rotating accelerometer gravity gradiometer (RAGG). We calibrate scale factors by determining the relationship between the centrifugal gradient excitation and RAGG response. Compared with calibration by changing the gravitational gradient excitation, this method does not need test masses and is easier to implement. The equivalent zero biases are superpositions of self-gradients and the intrinsic zero biases of the RAGG. A self-gradient is the gravitational gradient produced by surrounding masses, and it correlates well with the RAGG attitude angle. We propose a self-gradient model that includes self-gradients and the intrinsic zero biases of the RAGG. The self-gradient model is a function of the RAGG attitude, and it includes parameters related to surrounding masses. The calibration of equivalent zero biases determines the parameters of the self-gradient model. We provide detailed procedures and mathematical formulations for calibrating scale factors and parameters in the self-gradient model. A RAGG physical simulation system substitutes for the actual RAGG in the calibration and validation experiments. Four point masses simulate four types of surrounding masses producing self-gradients. Validation experiments show that the self-gradients predicted by the self-gradient model are consistent with those from the outputs of the RAGG physical simulation system, suggesting that the presented calibration method is valid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aggarwal, R.K.; Litton, R.W.; Cornell, C.A.
1996-12-31
The performance of more than 3,000 offshore platforms in the Gulf of Mexico was observed during the passage of Hurricane Andrew in August 1992. This event provided an opportunity to test the procedures used for platform analysis and design. A global bias was inferred for overall platform capacity and loads in the Andrew Joint Industry Project (JIP) Phase 1. The pile foundations of several platforms were predicted to fail, but did not. These results indicated that the biases specific to foundation failure modes may be higher than those of jacket failure modes. The biases in predictions of foundation failure modes were therefore investigated further in this study. The work included capacity analysis and calibration of predictions with the observed behavior for 3 jacket platforms and 3 caissons using Bayesian updating. Bias factors for two foundation failure modes, lateral shear and overturning, were determined for each structure. Foundation capacity estimates using conventional methods were found to be conservatively biased overall.
Whiplash and the compensation hypothesis.
Spearing, Natalie M; Connelly, Luke B
2011-12-01
Review article. To explain why the evidence that compensation-related factors lead to worse health outcomes is not compelling, either in general, or in the specific case of whiplash. There is a common view that compensation-related factors lead to worse health outcomes ("the compensation hypothesis"), despite the presence of important, and unresolved sources of bias. The empirical evidence on this question has ramifications for the design of compensation schemes. Using studies on whiplash, this article outlines the methodological problems that impede attempts to confirm or refute the compensation hypothesis. Compensation studies are prone to measurement bias, reverse causation bias, and selection bias. Errors in measurement are largely due to the latent nature of whiplash injuries and health itself, a lack of clarity over the unit of measurement (specific factors, or "compensation"), and a lack of appreciation for the heterogeneous qualities of compensation-related factors and schemes. There has been a failure to acknowledge and empirically address reverse causation bias, or the likelihood that poor health influences the decision to pursue compensation: it is unclear if compensation is a cause or a consequence of poor health, or both. Finally, unresolved selection bias (and hence, confounding) is evident in longitudinal studies and natural experiments. In both cases, between-group differences have not been addressed convincingly. The nature of the relationship between compensation-related factors and health is unclear. Current approaches to testing the compensation hypothesis are prone to several important sources of bias, which compromise the validity of their results. Methods that explicitly test the hypothesis and establish whether or not a causal relationship exists between compensation factors and prolonged whiplash symptoms are needed in future studies.
Is social class standardisation appropriate in occupational studies?
Brisson, C; Loomis, D; Pearce, N
1987-01-01
Social class standardisation has been proposed as a method for separating the effects of occupation and "social" or "lifestyle" factors in epidemiological studies, by comparing workers in a particular occupation with other workers in the same social class. The validity of this method rests upon two assumptions: (1) that social factors have the same effect in all occupational groups in the same social class, and (2) that other workers in the same social class as the workers being studied are free of occupational risk factors for the disease of interest. These assumptions will not always be satisfied. In particular, the effect of occupation will be underestimated when the comparison group also has job-related exposures which cause the disease under study. Thus, although adjustment for social class may minimise bias due to social factors, it may introduce bias due to unmeasured occupational factors. This difficulty may be magnified when occupational category is used as the measure of social class. Because of this potential bias, adjustment for social class should be done only after careful consideration of the exposures and disease involved and should be based on an appropriate definition of social class. Both crude and standardised results should be presented when such adjustments are made. PMID:3455422
Shape measurement biases from underfitting and ellipticity gradients
Bernstein, Gary M.
2010-08-21
Precision weak gravitational lensing experiments require measurements of galaxy shapes accurate to <1 part in 1000. We investigate measurement biases, noted by Voigt and Bridle (2009) and Melchior et al. (2009), that are common to shape measurement methodologies that rely upon fitting elliptical-isophote galaxy models to observed data. The first bias arises when the true galaxy shapes do not match the models being fit. We show that this "underfitting bias" is due, at root, to these methods' attempts to use information at high spatial frequencies that has been destroyed by the convolution with the point-spread function (PSF) and/or by sampling. We propose a new shape-measurement technique that is explicitly confined to observable regions of k-space. A second bias arises for galaxies whose ellipticity varies with radius. For most shape-measurement methods, such galaxies are subject to "ellipticity gradient bias". We show how to reduce such biases by factors of 20–100 within the new shape-measurement method. The resulting shear estimator has multiplicative errors <1 part in 10³ for high-S/N images, even for highly asymmetric galaxies. Without any training or recalibration, the new method obtains Q = 3000 in the GREAT08 Challenge of blind shear reconstruction on low-noise galaxies, several times better than any previous method.
Evaluation of Bias Correction Method for Satellite-Based Rainfall Data
Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter
2016-01-01
With the advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily totals to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify the optimum window size for application of bias correction, and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of the interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to obtain bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach. PMID:27314363
Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.
Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter
2016-06-15
With the advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily totals to match in-situ observations for the period 2003-2010. Study objectives are to assess bias of the satellite estimates, to identify the optimum window size for application of bias correction, and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of the interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to obtain bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.
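The window-based multiplicative correction described in the two records above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the variable names, the 7-day default window, and the dry-window guard are assumptions.

```python
import numpy as np

def sequential_window_bias_factors(gauge, cmorph, window=7):
    """Multiplicative bias factors from non-overlapping (sequential) windows.

    gauge, cmorph : 1-D arrays of daily rainfall at one station (mm/day).
    A factor of 1 means no bias; >1 means the satellite underestimates.
    """
    gauge = np.asarray(gauge, dtype=float)
    cmorph = np.asarray(cmorph, dtype=float)
    factors = []
    for start in range(0, len(gauge) - window + 1, window):
        g = gauge[start:start + window].sum()
        c = cmorph[start:start + window].sum()
        factors.append(g / c if c > 0 else 1.0)  # guard against dry windows
    return np.array(factors)

def apply_bias_factors(cmorph, factors, window=7):
    """Multiply each day's satellite estimate by its window's bias factor."""
    corrected = np.asarray(cmorph, dtype=float).copy()
    for i, f in enumerate(factors):
        corrected[i * window:(i + 1) * window] *= f
    return corrected
```

In the study itself the station-level factors are then spatially interpolated to a bias map and applied image-wise; the sketch shows only the per-station step.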
An improved level set method for brain MR images segmentation and bias correction.
Chen, Yunjie; Zhang, Jianwei; Macione, Jim
2009-10-01
Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term within the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.
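The "localized K-means-type clustering objective with a multiplicative bias factor" can be written compactly. The energy below is a hedged sketch of one common form of such localized objectives; the symbols (window kernel K, bias field b, cluster centers c_i, membership functions M_i over the image domain Ω) are my notation and not necessarily the authors':

$$E(b, \{c_i\}, \{M_i\}) \;=\; \int_{\Omega} \sum_{i=1}^{N} \left( \int_{\Omega} K(y-x)\,\bigl| I(x) - b(y)\,c_i \bigr|^{2}\, M_i(x)\, dx \right) dy$$

Alternately minimizing over b, the c_i, and the memberships recovers the bias field and the segmentation together; the outer integral over y is the "integration over the entire domain" referred to in the abstract.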
Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng
2018-03-05
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements, as well as the extendibility of the method, are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
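As an illustration of the joint-estimation idea (alternating between the release rate and per-measurement bias coefficients), here is a deliberately simplified sketch. It is not the authors' formulation: the scalar release rate, the quadratic pull of the coefficients toward 1, and all names are assumptions.

```python
import numpy as np

def alternating_release_estimate(y, a, lam=1.0, iters=50):
    """Toy alternating-minimization sketch for joint bias/release-rate estimation.

    y : measured concentrations (n,)
    a : model-predicted concentrations per unit release rate (n,)
    Model: y_i ~ c_i * a_i * q, with per-measurement bias coefficients c_i
    pulled toward 1 by a quadratic penalty of strength lam.
    """
    c = np.ones(len(y))
    q = 1.0
    for _ in range(iters):
        # Update release rate q with the bias coefficients fixed (least squares).
        denom = np.sum((c * a) ** 2)
        q = np.sum(y * c * a) / denom if denom > 0 else q
        # Update each bias coefficient with q fixed (ridge toward 1, closed form).
        c = (y * a * q + lam) / ((a * q) ** 2 + lam)
    return q, c
```

Both subproblems are least squares, so each update has a closed form; the actual method additionally separates deterministic and stochastic deviations within the coefficient matrix.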
Method for removing atomic-model bias in macromolecular crystallography
Terwilliger, Thomas C [Santa Fe, NM
2006-08-01
Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.
NASA Astrophysics Data System (ADS)
Shirley, Rachel Elizabeth
Nuclear power plant (NPP) simulators are proliferating in academic research institutions and national laboratories in response to the availability of affordable, digital simulator platforms. Accompanying the new research facilities is a renewed interest in using data collected in NPP simulators for Human Reliability Analysis (HRA) research. An experiment conducted in The Ohio State University (OSU) NPP Simulator Facility develops data collection methods and analytical tools to improve use of simulator data in HRA. In the pilot experiment, student operators respond to design basis accidents in the OSU NPP Simulator Facility. Thirty-three undergraduate and graduate engineering students participated in the research. Following each accident scenario, student operators completed a survey about perceived simulator biases and watched a video of the scenario. During the video, they periodically recorded their perceived strength of significant Performance Shaping Factors (PSFs) such as Stress. This dissertation reviews three aspects of simulator-based research using the data collected in the OSU NPP Simulator Facility: First, a qualitative comparison of student operator performance to computer simulations of expected operator performance generated by the Information Decision Action Crew (IDAC) HRA method. Areas of comparison include procedure steps, timing of operator actions, and PSFs. Second, development of a quantitative model of the simulator bias introduced by the simulator environment. Two types of bias are defined: Environmental Bias and Motivational Bias. This research examines Motivational Bias--that is, the effect of the simulator environment on an operator's motivations, goals, and priorities. A bias causal map is introduced to model motivational bias interactions in the OSU experiment. Data collected in the OSU NPP Simulator Facility are analyzed using Structural Equation Modeling (SEM). Data include crew characteristics, operator surveys, and time to recognize and diagnose the accident in the scenario. These models estimate how the effects of the scenario conditions are mediated by simulator bias, and demonstrate how to quantify the strength of the simulator bias. Third, development of a quantitative model of subjective PSFs based on objective data (plant parameters, alarms, etc.) and PSF values reported by student operators. The objective PSF model is based on the PSF network in the IDAC HRA method. The final model is a mixed effects Bayesian hierarchical linear regression model. The subjective PSF model includes three factors: The Environmental PSF, the simulator Bias, and the Context. The Environmental Bias is mediated by an operator sensitivity coefficient that captures the variation in operator reactions to plant conditions. The data collected in the pilot experiments are not expected to reflect professional NPP operator performance, because the students are still novice operators. However, the models used in this research and the methods developed to analyze them demonstrate how to consider simulator bias in experiment design and how to use simulator data to enhance the technical basis of a complex HRA method. The contributions of the research include a framework for discussing simulator bias, a quantitative method for estimating simulator bias, a method for obtaining operator-reported PSF values, and a quantitative method for incorporating the variability in operator perception into PSF models. 
The research demonstrates applications of Structural Equation Modeling and hierarchical Bayesian linear regression models in HRA. Finally, the research demonstrates the benefits of using student operators as a test platform for HRA research.
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and estimation method; maximum likelihood estimation with robust chi-squares and standard errors and Bayesian estimation, on the convergence rate are investigated. The other variables of interest were rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between level collinearity and the estimation method was maximum likelihood. In the within level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-06-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and estimation method; maximum likelihood estimation with robust chi-squares and standard errors and Bayesian estimation, on the convergence rate are investigated. The other variables of interest were rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between level collinearity and the estimation method was maximum likelihood. In the within level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.
Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Mather, Janice L.; Taylor, Shawn C.
2015-01-01
In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
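The root-sum-square combination of the listed contributors is straightforward; the sketch below assumes all components have already been expressed in consistent (e.g., relative) terms, and the component names and values are illustrative only.

```python
import math

def total_leak_rate_uncertainty(components):
    """Root-sum-square combination of independent uncertainty components.

    components : dict of contributions in consistent units (or as relative
    uncertainties), e.g. resolution, repeatability, hysteresis, drift, and
    the calibration-standard uncertainty.
    """
    return math.sqrt(sum(u ** 2 for u in components.values()))

# Example with hypothetical relative uncertainties:
u_total = total_leak_rate_uncertainty({
    "resolution": 0.02,
    "repeatability": 0.05,
    "hysteresis": 0.01,
    "drift": 0.03,
    "calibration_standard": 0.04,
})
print(f"combined relative uncertainty: {u_total:.3f}")
```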
Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L
2013-08-01
Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
Modeling bias and variation in the stochastic processes of small RNA sequencing
Etheridge, Alton; Sakhanenko, Nikita; Galas, David
2017-01-01
The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
Oestrous phase cyclicity influences judgment biasing in rats.
Barker, Timothy Hugh; Kind, Karen Lee; Groves, Peta Danielle; Howarth, Gordon Stanley; Whittaker, Alexandra Louise
2018-04-10
The identification of cognitive bias has become an important measure of animal welfare. Negative cognitive biases develop from a tendency for animals to process novel information pessimistically. Judgment-bias testing is the commonplace methodology to detect cognitive biases. However, concerns with these methods have been frequently reported, one of which is the discrepancy between male and female cognitive expression. The current study assessed the factors of social status and oestrus to investigate whether oestrous cycle rotation or subordination stress encouraged an increase in pessimistic responses. Female Sprague-Dawley rats (n = 24) were trained on an active-choice judgment bias paradigm. Responses to the ambiguous probe were recorded as optimistic or pessimistic. Oestrous phase was determined by assessing vaginal cytology in stained vaginal cell smears. Rats in the dioestrous phase and those rats considered to be subordinate demonstrated an increased percentage of pessimistic responses. However, no interaction between these factors was observed. This suggests that oestrous cyclicity can influence the judgment biases of female animals, a previously unreported finding. On this basis, researchers should be encouraged to account for both oestrous phase cyclicity and social status as additional fixed effects in study design. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Kim, Soyoung; Olejnik, Stephen
2005-01-01
The sampling distributions of five popular measures of association with and without two bias adjusting methods were examined for the single factor fixed-effects multivariate analysis of variance model. The number of groups, sample sizes, number of outcomes, and the strength of association were manipulated. The results indicate that all five…
Bias magnification in ecologic studies: a methodological investigation
Webster, Thomas F
2007-01-01
Background As ecologic studies are often inexpensive to conduct, consideration of the magnitude and direction of ecologic biases may be useful in both study design and sensitivity analysis of results. This paper examines three types of ecologic bias: confounding by group, effect measure modification by group, and non-differential exposure misclassification. Methods Bias of the risk difference on the individual and ecologic levels are compared using two-by-two tables, simple equations, and risk diagrams. Risk diagrams provide a convenient way to simultaneously display information from both levels. Results Confounding by group and effect measure modification by group act in the same direction on the individual and group levels, but have larger impact on the latter. The reduction in exposure variance caused by aggregation magnifies the individual level bias due to ignoring groups. For some studies, the magnification factor can be calculated from the ecologic data alone. Small magnification factors indicate little bias beyond that occurring at the individual level. Aggregation is also responsible for the different impacts of non-differential exposure misclassification on individual and ecologic studies. Conclusion The analytical tools developed here are useful in analyzing ecologic bias. The concept of bias magnification may be helpful in designing ecologic studies and performing sensitivity analysis of their results. PMID:17615079
Good, Nicholas; Mölter, Anna; Peel, Jennifer L; Volckens, John
2017-07-01
The AE51 micro-Aethalometer (microAeth) is a popular and useful tool for assessing personal exposure to particulate black carbon (BC). However, few users of the AE51 are aware that its measurements are biased low (by up to 70%) due to the accumulation of BC on the filter substrate over time; previous studies of personal black carbon exposure are likely to have suffered from this bias. Although methods to correct for bias in micro-Aethalometer measurements of particulate black carbon have been proposed, these methods have not been verified in the context of personal exposure assessment. Here, five Aethalometer loading correction equations based on published methods were evaluated. Laboratory-generated aerosols of varying black carbon content (ammonium sulfate, Aquadag and NIST diesel particulate matter) were used to assess the performance of these methods. Filters from a personal exposure assessment study were also analyzed to determine how the correction methods performed for real-world samples. Standard correction equations produced correction factors with root mean square errors of 0.10 to 0.13 and mean bias within ±0.10. An optimized correction equation is also presented, along with sampling recommendations for minimizing bias when assessing personal exposure to BC using the AE51 micro-Aethalometer.
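For readers unfamiliar with loading corrections, one widely cited form (Virkkula et al., 2007) scales the raw reading by the filter attenuation. It is shown here only to illustrate how such correction equations are applied, not as the optimized equation developed in this study; the constant k is a placeholder that is normally fitted per filter and aerosol type.

```python
def loading_corrected_bc(bc_raw, atn, k=0.004):
    """Virkkula-style loading correction (illustrative): BC = (1 + k * ATN) * BC_raw.

    bc_raw : uncorrected black carbon concentration reported by the instrument
    atn    : filter attenuation at the time of the measurement
    k      : empirical constant (placeholder value shown)
    """
    return (1.0 + k * atn) * bc_raw
```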
Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael
2014-01-01
Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
Vos, Janet R; Hsu, Li; Brohet, Richard M; Mourits, Marian J E; de Vries, Jakob; Malone, Kathleen E; Oosterwijk, Jan C; de Bock, Geertruida H
2015-08-10
Recommendations for treating patients who carry a BRCA1/2 mutation are mainly based on cumulative lifetime risks (CLTRs) of breast cancer determined from retrospective cohorts. These risks vary widely (27% to 88%), and it is important to understand why. We analyzed the effects of methods of risk estimation and bias correction and of population factors on CLTRs in this retrospective clinical cohort of BRCA1/2 carriers. The following methods to estimate the breast cancer risk of BRCA1/2 carriers were identified from the literature: Kaplan-Meier, frailty, and modified segregation analyses with bias correction consisting of including or excluding index patients combined with including or excluding first-degree relatives (FDRs) or different conditional likelihoods. These were applied to clinical data of BRCA1/2 families derived from our family cancer clinic, for which a simulation was also performed to evaluate the methods. CLTRs and 95% CIs were estimated and compared with the reference CLTRs. CLTRs ranged from 35% to 83% for BRCA1 and 41% to 86% for BRCA2 carriers at age 70 years (width of 95% CIs: 10% to 35% and 13% to 46%, respectively). Relative bias varied from -38% to +16%. Bias correction with inclusion of index patients and untested FDRs gave the smallest bias: +2% (SD, 2%) in BRCA1 and +0.9% (SD, 3.6%) in BRCA2. Much of the variation in breast cancer CLTRs in retrospective clinical BRCA1/2 cohorts is due to the bias-correction method, whereas a smaller part is due to population differences. Kaplan-Meier analyses with bias correction that includes index patients and a proportion of untested FDRs provide suitable CLTRs for carriers counseled in the clinic. © 2015 by American Society of Clinical Oncology.
Free energy calculations: an efficient adaptive biasing potential method.
Dickson, Bradley M; Legoll, Frédéric; Lelièvre, Tony; Stoltz, Gabriel; Fleurat-Lessard, Paul
2010-05-06
We develop an efficient sampling and free energy calculation technique within the adaptive biasing potential (ABP) framework. By mollifying the density of states we obtain an approximate free energy and an adaptive bias potential that is computed directly from the population along the coordinates of the free energy. Because of the mollifier, the bias potential is "nonlocal", and its gradient admits a simple analytic expression. A single observation of the reaction coordinate can thus be used to update the approximate free energy at every point within a neighborhood of the observation. This greatly reduces the equilibration time of the adaptive bias potential. This approximation introduces two parameters: strength of mollification and the zero of energy of the bias potential. While we observe that the approximate free energy is a very good estimate of the actual free energy for a large range of mollification strength, we demonstrate that the errors associated with the mollification may be removed via deconvolution. The zero of energy of the bias potential, which is easy to choose, influences the speed of convergence but not the limiting accuracy. This method is simple to apply to free energy or mean force computation in multiple dimensions and does not involve second derivatives of the reaction coordinates, matrix manipulations nor on-the-fly adaptation of parameters. For the alanine dipeptide test case, the new method is found to gain as much as a factor of 10 in efficiency as compared to two basic implementations of the adaptive biasing force methods, and it is shown to be as efficient as well-tempered metadynamics with the postprocess deconvolution giving a clear advantage to the mollified density of states method.
Shen, Jiangshan J; Wang, Ting-You; Yang, Wanling
2017-11-02
Sex is an important but understudied factor in the genetics of human diseases. Analyses using a combination of gene expression data, ENCODE data, and evolutionary data of sex-biased gene expression in human tissues can give insight into the regulatory and evolutionary forces acting on sex-biased genes. In this study, we analyzed the differentially expressed genes between males and females. On the X chromosome, we used a novel method and investigated the status of genes that escape X-chromosome inactivation (escape genes), taking into account the clonality of lymphoblastoid cell lines (LCLs). To investigate the regulation of sex-biased differentially expressed genes (sDEG), we conducted pathway and transcription factor enrichment analyses on the sDEGs, as well as analyses on the genomic distribution of sDEGs. Evolutionary analyses were also conducted on both sDEGs and escape genes. Genome-wide, we characterized differential gene expression between sexes in 462 RNA-seq samples and identified 587 sex-biased genes, or 3.2% of the genes surveyed. On the X chromosome, sDEGs were distributed in evolutionary strata in a similar pattern as escape genes. We found a trend of negative correlation between the gene expression breadth and nonsynonymous over synonymous mutation (dN/dS) ratios, showing a possible pleiotropic constraint on evolution of genes. Genome-wide, nine transcription factors were found enriched in binding to the regions surrounding the transcription start sites of female-biased genes. Many pathways and protein domains were enriched in sex-biased genes, some of which hint at sex-biased physiological processes. These findings lend insight into the regulatory and evolutionary forces shaping sex-biased gene expression and their involvement in the physiological and pathological processes in human health and diseases.
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene mixture gas spectra, as measured using FT-IR spectrometry, with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of the SWLS has been presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases but not the lowest bias and RSS. The modified SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
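One way to realize the selection criterion described above in a single fit is to apply noise-based weights only at wavenumbers whose absorbance exceeds the threshold and unit (CLS) weights elsewhere. This is a hedged sketch of that idea, not the paper's implementation; all names and the single-fit formulation are assumptions.

```python
import numpy as np

def swls_fit(A, y, wls_weights, absorbance, threshold):
    """Selective weighted least squares: WLS weights above the absorbance
    threshold, unit (CLS) weights below it, combined in one weighted fit.

    A           : (n_wavenumbers, n_components) reference absorptivity matrix
    y           : (n_wavenumbers,) measured absorbance spectrum
    wls_weights : (n_wavenumbers,) WLS weights, e.g. inverse noise variance
    absorbance  : (n_wavenumbers,) absorbance used by the selection criterion
    """
    w = np.where(absorbance > threshold, wls_weights, 1.0)
    AtW = A.T * w                                # A^T W via broadcasting
    return np.linalg.solve(AtW @ A, AtW @ y)     # (A^T W A) x = A^T W y
```

The threshold itself would then be tuned, e.g. by simulation, since the paper reports that it depends mainly on the ratio between the bias error and the noise standard deviation.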
Vrijheid, Martine; Richardson, Lesley; Armstrong, Bruce K; Auvinen, Anssi; Berg, Gabriele; Carroll, Matthew; Chetrit, Angela; Deltour, Isabelle; Feychting, Maria; Giles, Graham G; Hours, Martine; Iavarone, Ivano; Lagorio, Susanna; Lönn, Stefan; McBride, Mary; Parent, Marie-Elise; Sadetzki, Siegal; Salminen, Tina; Sanchez, Marie; Schlehofer, Birgitte; Schüz, Joachim; Siemiatycki, Jack; Tynes, Tore; Woodward, Alistair; Yamaguchi, Naohito; Cardis, Elisabeth
2009-01-01
To quantitatively assess the impact of selection bias caused by nonparticipation in a multinational case-control study of mobile phone use and brain tumor. Non-response questionnaires (NRQ) were completed by a sub-set of nonparticipants. Selection bias factors were calculated based on the prevalence of mobile phone use reported by nonparticipants with NRQ data, and on scenarios of hypothetical exposure prevalence for other nonparticipants. Regular mobile phone use was reported less frequently by controls and cases who completed the NRQ (controls, 56%; cases, 50%) than by those who completed the full interview (controls, 69%; cases, 66%). This relationship was consistent across study centers, sex, and age groups. Lower education and more recent start of mobile phone use were associated with refusal to participate. Bias factors varied between 0.87 and 0.92 in the most plausible scenarios. Refusal to participate in brain tumor case-control studies seems to be related to less prevalent use of mobile phones, and this could result in a downward bias of around 10% in odds ratios for regular mobile phone use. The use of simple selection bias estimation methods in case-control studies can give important insights into the extent of any bias, even when nonparticipant information is incomplete.
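The bias factors quoted (0.87 to 0.92) come from comparing exposure prevalence among participants and nonparticipants. A minimal sketch of that calculation follows; the helper names are illustrative, the prevalences in the example echo those reported above, and the participation rates are hypothetical.

```python
def selection_probabilities(participation_rate, p_exp_participants, p_exp_nonparticipants):
    """Selection (participation) probabilities by exposure status, derived from
    the overall participation rate and the exposure prevalence among
    participants vs. nonparticipants."""
    r, pp, pn = participation_rate, p_exp_participants, p_exp_nonparticipants
    p_all = r * pp + (1 - r) * pn      # exposure prevalence in the full source group
    return r * pp / p_all, r * (1 - pp) / (1 - p_all)   # (s_exposed, s_unexposed)

def selection_bias_factor(cases, controls):
    """Multiplicative bias of the odds ratio: OR_observed = factor * OR_true."""
    se_ca, su_ca = selection_probabilities(*cases)
    se_co, su_co = selection_probabilities(*controls)
    return (se_ca / su_ca) / (se_co / su_co)

# Prevalences echo the abstract; the participation rates (0.8, 0.6) are hypothetical.
print(round(selection_bias_factor(cases=(0.8, 0.66, 0.50), controls=(0.6, 0.69, 0.56)), 2))
```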
Zhou, Anli Yue; Baker, Paul
2014-01-01
Upward feedback is becoming more widely used in medical training as a means of quality control. Multiple biases exist; thus, the accuracy of upward feedback is debatable. This study aims to identify factors that could influence upward feedback, especially in medical training. A systematic review using a structured search strategy was performed. Thirty-five databases were searched. Results were reviewed and relevant abstracts were shortlisted. All studies in English, in both the medical and non-medical literature, were included. A simple pro-forma was used initially to identify the pertinent areas of upward feedback, so that a focused pro-forma could be designed for data extraction. A total of 204 articles were reviewed. Most studies on upward feedback bias were evaluative studies and only covered Kirkpatrick level 1 (reaction). Most studies evaluated trainers or training, were used for formative purposes, and presented quantitative data. Accountability and confidentiality were the most common overt biases, whereas method of feedback was the most commonly implied bias within articles. Although different types of bias do exist, upward feedback does have a role in evaluating medical training. Accountability and confidentiality were the most common biases. Further research is required to evaluate which types of bias are associated with specific survey characteristics and which are potentially modifiable.
Jansen, Rick J; Alexander, Bruce H; Hayes, Richard B; Miller, Anthony B; Wacholder, Sholom; Church, Timothy R
2018-01-01
When some individuals are screen-detected before the beginning of the study, but otherwise would have been diagnosed symptomatically during the study, this results in different case-ascertainment probabilities among screened and unscreened participants, referred to here as lead-time-biased case-ascertainment (LTBCA). In fact, this issue can arise even in risk-factor studies nested within a randomized screening trial; even though the screening intervention is randomly allocated to trial arms, there is no randomization to potential risk factors, and uptake of screening can differ by risk-factor strata. Under the assumptions that neither screening nor the risk factor affects underlying incidence and that no other forms of bias operate, we simulate and compare the underlying cumulative incidence and that observed in the study due to LTBCA. The example used is constructed from the randomized Prostate, Lung, Colorectal, and Ovarian cancer screening trial. The derived mathematical model is applied to simulating two nested studies to evaluate the potential for screening bias in observational lung cancer studies. Because of differential screening under plausible assumptions about preclinical incidence and duration, the simulations presented here show that LTBCA due to chest x-ray screening can significantly increase the estimated risk of lung cancer due to smoking, by between 1% and 50%. Traditional adjustment methods cannot account for this bias, as the influence screening has on observational study estimates involves events outside of the study observation window (enrollment and follow-up) that change eligibility for potential participants, thus biasing case ascertainment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Presa, S.; Maaskant, P. P.
We present a comprehensive study of the emission spectra and electrical characteristics of InGaN/GaN multi-quantum well light-emitting diode (LED) structures under resonant optical pumping and varying electrical bias. A 5 quantum well LED with a thin well (1.5 nm) and a relatively thick barrier (6.6 nm) shows strong bias-dependent properties in the emission spectra, poor photovoltaic carrier escape under forward bias and an increase in effective resistance when compared with a 10 quantum well LED with a thin (4 nm) barrier. These properties are due to a strong piezoelectric field in the well and associated reduced field in the thicker barrier. We compare the voltage ideality factors for the LEDs under electrical injection, light emission with current, photovoltaic mode (PV) and photoluminescence (PL) emission. The PV and PL methods provide similar values for the ideality which are lower than for the resistance-limited electrical method. Under optical pumping the presence of an n-type InGaN underlayer in a commercial LED sample is shown to act as a second photovoltaic source reducing the photovoltage and the extracted ideality factor to less than 1. The use of photovoltaic measurements together with bias-dependent spectrally resolved luminescence is a powerful method to provide valuable insights into the dynamics of GaN LEDs.
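For reference, the electrical ideality factor mentioned here is conventionally extracted from the slope of the forward I-V curve on a semi-log scale. The sketch below shows that standard extraction under the simplifying assumption of negligible series resistance; it is not the bias-dependent photovoltaic or photoluminescence analysis of the paper, and the names are illustrative.

```python
import numpy as np

def ideality_factor(voltage, current, temperature=300.0):
    """Local ideality factor n(V) from a forward I-V sweep:
    n = (q / kT) * dV / d(ln I), assuming series resistance is negligible
    over the fitted range (illustrative only)."""
    v_t = 8.617e-5 * temperature              # thermal voltage kT/q in volts
    dlnI_dV = np.gradient(np.log(current), voltage)
    return 1.0 / (v_t * dlnI_dV)
```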
Quantifying the predictive consequences of model error with linear subspace analysis
White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.
2014-01-01
All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
Phillips, Andrew W; Friedman, Benjamin T; Utrankar, Amol; Ta, Andrew Q; Reddy, Shalini T; Durning, Steven J
2017-02-01
To establish a baseline overall response rate for surveys of health professions trainees, determine strategies associated with improved response rates, and evaluate for the presence of nonresponse bias. The authors performed a comprehensive analysis of all articles published in Academic Medicine, Medical Education, and Advances in Health Sciences Education in 2013, recording response rates. Additionally, they reviewed nonresponse bias analyses and factors suggested in other fields to affect response rate including survey delivery method, prenotification, and incentives. The search yielded 732 total articles; of these, 356 were research articles, and of these, 185 (52.0%) used at least one survey. Of these, 66 articles (35.6%) met inclusion criteria and yielded 73 unique surveys. Of the 73 surveys used, investigators reported a response rate for 63.0% of them; response rates ranged from 26.6% to 100%, mean (standard deviation) 71.3% (19.5%). Investigators reported using incentives for only 16.4% of the 73 surveys. The only survey methodology factor significantly associated with response rate was single- vs. multi-institutional surveys (respectively, 74.6% [21.2%] vs. 62.0% [12.8%], P = .022). Notably, statistical power for all analyses was limited. No articles evaluated for nonresponse bias. Approximately half of the articles evaluated used a survey as part of their methods. Limited data are available to establish a baseline response rate among health professions trainees and inform researchers which strategies are associated with higher response rates. Journals publishing survey-based health professions education research should improve reporting of response rate, nonresponse bias, and other survey factors.
Study Protocol, Sample Characteristics, and Loss to Follow-Up: The OPPERA Prospective Cohort Study
Bair, Eric; Brownstein, Naomi C.; Ohrbach, Richard; Greenspan, Joel D.; Dubner, Ron; Fillingim, Roger B.; Maixner, William; Smith, Shad; Diatchenko, Luda; Gonzalez, Yoly; Gordon, Sharon; Lim, Pei-Feng; Ribeiro-Dasilva, Margarete; Dampier, Dawn; Knott, Charles; Slade, Gary D.
2013-01-01
When studying incidence of pain conditions such as temporomandibular disorders (TMDs), repeated monitoring is needed in prospective cohort studies. However, monitoring methods usually have limitations and, over a period of years, some loss to follow-up is inevitable. The OPPERA prospective cohort study of first-onset TMD screened for symptoms using quarterly questionnaires and examined symptomatic participants to definitively ascertain TMD incidence. During the median 2.8-year observation period, 16% of the 3,263 enrollees completed no follow-up questionnaires, others provided incomplete follow-up, and examinations were not conducted for one third of symptomatic episodes. Although screening methods and examinations were found to have excellent reliability and validity, they were not perfect. Loss to follow-up varied according to some putative TMD risk factors, although multiple imputation to correct the problem suggested that bias was minimal. A second method of multiple imputation that evaluated bias associated with omitted and dubious examinations revealed a slight underestimate of incidence and some small biases in hazard ratios used to quantify effects of risk factors. Although “bottom line” statistical conclusions were not affected, multiply-imputed estimates should be considered when evaluating the large number of risk factors under investigation in the OPPERA study. Perspective These findings support the validity of the OPPERA prospective cohort study for the purpose of investigating the etiology of first-onset TMD, providing the foundation for other papers investigating risk factors hypothesized in the OPPERA project. PMID:24275220
NASA Astrophysics Data System (ADS)
Peter, Emanuel K.
2017-12-01
In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method on SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find a good agreement with simulation data reported in the literature. Finally, we apply our methodologies on the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that especially the conformation entropy plays a major role in the formation of the fibril as a rate limiting factor.
Peter, Emanuel K
2017-12-07
In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method on SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find a good agreement with simulation data reported in the literature. Finally, we apply our methodologies on the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that especially the conformation entropy plays a major role in the formation of the fibril as a rate limiting factor.
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in Urban Resident Basic Medical Insurance (URBMI) in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing-data mechanism, this study proposes a two-stage method that deals with the two mechanisms simultaneously, combining multiple imputation with a sample selection model. A total of 1,190 questionnaires were returned by students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. Among the returned questionnaires, 2.52% had values of the dependent variable that were not missing at random (NMAR) and 7.14% had values that were missing at random (MAR). First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR within the multiple imputation, and a multi-factor analysis model was established. Based on 1,000 resamples, the best scheme for filling the randomly missing values at this missing proportion was the predictive mean matching (PMM) method. With this optimal scheme, the two-stage analysis was conducted. It was found that the factors influencing annual medical expenditure among students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication, and the acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively deal with non-response bias and selection bias in the dependent variable of survey data.
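The first stage (imputation of the MAR values by predictive mean matching) can be sketched as follows. This is a single illustrative draw with assumed names, not the authors' implementation, and the second-stage NMAR correction via the sample selection model is not shown.

```python
import numpy as np

def pmm_impute(y, X, missing_mask, n_donors=5, seed=None):
    """One predictive-mean-matching (PMM) draw for values missing at random.

    Fits a linear model on the observed rows, predicts for every row, and
    fills each missing value with the observed value of a donor chosen at
    random among the n_donors cases whose predictions are closest.
    """
    rng = np.random.default_rng(seed)
    y = np.array(y, dtype=float)
    obs = ~missing_mask
    beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    pred = X @ beta
    y_obs, pred_obs = y[obs], pred[obs]
    for i in np.where(missing_mask)[0]:
        donors = np.argsort(np.abs(pred_obs - pred[i]))[:n_donors]
        y[i] = y_obs[rng.choice(donors)]
    return y
```

Repeating the draw (for example with bootstrapped regression coefficients) and pooling the analyses across the completed data sets gives the usual multiple-imputation estimates.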
Systematic effects on dark energy from 3D weak shear
NASA Astrophysics Data System (ADS)
Kitching, T. D.; Taylor, A. N.; Heavens, A. F.
2008-09-01
We present an investigation into the potential effect of systematics inherent in multiband wide-field surveys on the dark energy equation-of-state determination for two 3D weak lensing methods. The weak lensing methods are a geometric shear-ratio method and 3D cosmic shear. The analysis here uses an extension of the Fisher matrix framework to include jointly photometric redshift systematics, shear distortion systematics and intrinsic alignments. Using analytic parametrizations of these three primary systematic effects allows an isolation of systematic parameters of particular importance. We show that assuming systematic parameters are fixed, but possibly biased, results in potentially large biases in dark energy parameters. We quantify any potential bias by defining a Bias Figure of Merit. By marginalizing over extra systematic parameters, such biases are negated at the expense of an increase in the cosmological parameter errors. We show the effect on the dark energy Figure of Merit of marginalizing over each systematic parameter individually. We also show the overall reduction in the Figure of Merit due to all three types of systematic effects. Based on some assumption of the likely level of systematic errors, we find that the largest effect on the Figure of Merit comes from uncertainty in the photometric redshift systematic parameters. These can reduce the Figure of Merit by up to a factor of 2 to 4 in both 3D weak lensing methods, if no informative prior on the systematic parameters is applied. Shear distortion systematics have a smaller overall effect. Intrinsic alignment effects can reduce the Figure of Merit by up to a further factor of 2. This, however, is a worst-case scenario, within the assumptions of the parametrizations used. By including prior information on systematic parameters, the Figure of Merit can be recovered to a large extent, and combined constraints from 3D cosmic shear and shear ratio are robust to systematics. We conclude that, as a rule of thumb, given a realistic current understanding of intrinsic alignments and photometric redshifts, then including all three primary systematic effects reduces the Figure of Merit by at most a factor of 2.
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Φβ*, where Φ is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Φβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Φβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Φβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
Competitive action video game players display rightward error bias during on-line video game play.
Roebuck, Andrew J; Dubnyk, Aurora J B; Cochran, David; Mandryk, Regan L; Howland, John G; Harms, Victoria
2017-09-12
Research in asymmetrical visuospatial attention has identified a leftward bias in the general population across a variety of measures including visual attention and line-bisection tasks. In addition, increases in rightward collisions, or bumping, during visuospatial navigation tasks have been demonstrated in real world and virtual environments. However, little research has investigated these biases beyond the laboratory. The present study uses a semi-naturalistic approach and the online video game streaming service Twitch to examine navigational errors and assaults as skilled action video game players (n = 60) compete in Counter Strike: Global Offensive. This study showed a significant rightward bias in both fatal assaults and navigational errors. Analysis using the in-game ranking system as a measure of skill failed to show a relationship between bias and skill. These results suggest that a leftward visuospatial bias may exist in skilled players during online video game play. However, the present study was unable to account for some factors such as environmental symmetry and player handedness. In conclusion, video game streaming is a promising method for behavioural research in the future, however further study is required before one can determine whether these results are an artefact of the method applied, or representative of a genuine rightward bias.
ERIC Educational Resources Information Center
Drennan, Jonathan; Hyde, Abbey
2008-01-01
Traditionally, the measure used to evaluate the impact of an educational programme on student outcomes and the extent to which students change has been a comparison of a student's pre-test scores with his/her post-test scores. However, this method of evaluating change may be problematic due to the confounding factor of response shift bias when student…
Analysis of case-only studies accounting for genotyping error.
Cheng, K F
2007-03-01
The case-only design provides one approach to assess possible interactions between genetic and environmental factors. It has been shown that if these factors are conditionally independent, then a case-only analysis is not only valid but also very efficient. However, a drawback of the case-only approach is that its conclusions may be biased by genotyping errors. In this paper, our main aim is to propose a method for analysis of case-only studies when these errors occur. We show that the bias can be adjusted through the use of internal validation data, which are obtained by genotyping some sampled individuals twice. Our analysis is based on a simple and yet highly efficient conditional likelihood approach. Simulation studies considered in this paper confirm that the new method has acceptable performance under genotyping errors.
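As an illustration of the basic estimator underlying this design, the following minimal Python sketch computes the case-only interaction odds ratio from a hypothetical 2x2 genotype-by-exposure table among cases, before any adjustment for genotyping error; the counts and the large-sample confidence interval are illustrative only and do not reproduce the paper's conditional likelihood method.

    import numpy as np

    # Hypothetical 2x2 genotype-by-exposure counts among cases only:
    #                exposed  unexposed
    # carrier            a        b
    # non-carrier        c        d
    a, b, c, d = 46, 104, 152, 698

    # Under gene-environment independence in the source population, the
    # case-only odds ratio estimates the multiplicative interaction effect.
    or_interaction = (a * d) / (b * c)

    # Large-sample standard error of log(OR) and a 95% confidence interval.
    se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
    ci = np.exp(np.log(or_interaction) + np.array([-1.96, 1.96]) * se_log_or)

    print(f"case-only interaction OR = {or_interaction:.2f}, 95% CI = {ci.round(2)}")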
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared using simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed best in most scenarios. The hierarchy of the performances of the three methods was affected only by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
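A minimal sketch of the two simplest approaches compared above, assuming simulated data: the parametric reference interval (mean ± 1.96 SD, valid for Gaussian data) and the non-parametric interval (2.5th and 97.5th percentiles). The robust method and the authors' full simulation design are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)

    def parametric_ri(x):
        """Gaussian reference interval: mean +/- 1.96 SD."""
        return x.mean() - 1.96 * x.std(ddof=1), x.mean() + 1.96 * x.std(ddof=1)

    def nonparametric_ri(x):
        """Rank-based reference interval: 2.5th and 97.5th percentiles."""
        return tuple(np.percentile(x, [2.5, 97.5]))

    # Gaussian analyte: parametric limits should be least biased / most precise.
    gaussian = rng.normal(loc=5.0, scale=1.0, size=120)
    # Skewed analyte: the Gaussian assumption is violated.
    skewed = rng.lognormal(mean=1.0, sigma=0.5, size=120)

    for name, data in [("Gaussian", gaussian), ("skewed", skewed)]:
        print(name, "parametric:", np.round(parametric_ri(data), 2),
              "non-parametric:", np.round(nonparametric_ri(data), 2))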
Evaluation of normalization methods for cDNA microarray data by k-NN classification
Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J
2005-01-01
Background Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Results Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences, were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either the spatial-dependent dye bias (referred to hereafter as the spatial effect) or the intensity-dependent dye bias (referred to hereafter as the intensity effect) moderately reduce LOOCV classification errors, whereas double-bias-removal techniques which remove both the spatial and intensity effects reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed the intensity effect globally and the spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Conclusion Using the LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing the spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics. PMID:16045803
Evaluation of normalization methods for cDNA microarray data by k-NN classification.
Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J
2005-07-26
Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences, were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either the spatial-dependent dye bias (referred to hereafter as the spatial effect) or the intensity-dependent dye bias (referred to hereafter as the intensity effect) moderately reduce LOOCV classification errors, whereas double-bias-removal techniques which remove both the spatial and intensity effects reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed the intensity effect globally and the spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using the LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing the spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics.
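The evaluation criterion itself is straightforward to reproduce. The sketch below, assuming an already-normalized expression matrix with illustrative dimensions, computes the LOOCV classification error of a k-NN classifier with scikit-learn; the normalization strategies named above (IGLOESS, SLLOESS, etc.) are not reimplemented.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)

    # Illustrative data: 40 arrays x 500 genes, two tumour classes.
    # In practice X would be the log-ratio matrix after a given normalization.
    X = rng.normal(size=(40, 500))
    y = np.repeat([0, 1], 20)

    knn = KNeighborsClassifier(n_neighbors=3)
    accuracy = cross_val_score(knn, X, y, cv=LeaveOneOut())
    loocv_error = 1.0 - accuracy.mean()
    print(f"LOOCV classification error = {loocv_error:.3f}")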
NASA Astrophysics Data System (ADS)
Ghorbani, A.; Farahani, M. Mahmoodi; Rabbani, M.; Aflaki, F.; Waqifhosain, Syed
2008-01-01
In this paper we propose uncertainty estimation for the analytical results we obtained from the determination of Ni, Pb and Al by solid-phase extraction and inductively coupled plasma optical emission spectrometry (SPE-ICP-OES). The procedure is based on the retention of analytes in the form of 8-hydroxyquinoline (8-HQ) complexes on a mini column of XAD-4 resin and subsequent elution with nitric acid. The influence of various analytical parameters, including the amount of solid phase, pH, elution factors (concentration and volume of the eluting solution), volume of sample solution, and amount of ligand, on the extraction efficiency of the analytes was investigated. To estimate the uncertainty of the analytical results obtained, we propose assessing trueness by employing spiked samples. Two types of bias are calculated in the assessment of trueness: a proportional bias and a constant bias. We applied a nested design to calculate the proportional bias and the Youden method to calculate the constant bias. The proportional bias is calculated from spiked samples: the concentration found is plotted against the concentration added, and the slope of the standard-addition curve is an estimate of the method recovery. The estimated average recovery of the method in Karaj river water is (1.004±0.0085) for Ni, (0.999±0.010) for Pb and (0.987±0.008) for Al.
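A minimal sketch of the proportional-bias (recovery) estimate from spiked samples: regress the concentration found on the concentration added, and take the slope of the standard-addition curve as the method recovery. The numbers are hypothetical, and the nested-design and Youden calculations are not reproduced.

    import numpy as np
    from scipy import stats

    # Hypothetical spiked-sample data (micrograms per litre):
    added = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # spike concentrations
    found = np.array([0.9, 5.8, 10.9, 20.7, 40.4])   # concentrations recovered

    # Slope of the standard-addition curve estimates the method recovery
    # (proportional bias); the intercept reflects any constant bias.
    fit = stats.linregress(added, found)
    print(f"recovery = {fit.slope:.3f} +/- {fit.stderr:.3f}")
    print(f"constant bias (intercept) = {fit.intercept:.3f} +/- {fit.intercept_stderr:.3f}")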
O'Connor, Annette M; Totton, Sarah C; Cullen, Jonah N; Ramezani, Mahmood; Kalivarapu, Vijay; Yuan, Chaohui; Gilbert, Stephen B
2018-01-01
Systematic reviews are increasingly using data from preclinical animal experiments in evidence networks. Further, there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also inform prioritization of automation efforts that allow the identification of the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research in order to inform unique aspects of assessment of bias and error in preclinical research. Using 100 preclinical experiments each related to brain trauma and toxicology, we assessed design elements described by the investigators. We evaluated Methods and Materials sections of reports for descriptions of the following design elements: 1) use of comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurements of the outcome (94/200) and descriptions of potential for pseudo-replication (99/200). Use of complex factor arrangements was common, with 112 experiments using some form of factorial design (complete, incomplete or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means understanding bias and error in preclinical experimental design might require greater expertise than simple parallel designs. Similarly, use of complex factor arrangements creates novel challenges for accurate automation of data extraction and bias and error assessment in preclinical experiments.
Tang, Jian; Jiang, Xiaoliang
2017-01-01
Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey level distribution of the local image. The means in this objective function carry a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is then fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
Data analysis strategies for reducing the influence of the bias in cross-cultural research.
Sindik, Josko
2012-03-01
In cross-cultural research, researchers have to adjust the constructs and associated measurement instruments that have been developed in one culture and then imported for use in another culture. Importing concepts from other cultures is often simply reduced to language adjustment of the content of the items of the measurement instruments that define a certain (psychological) construct. In the context of cross-cultural research, test bias can be defined as a generic term for all nuisance factors that threaten the validity of cross-cultural comparisons. Bias can be an indicator that instrument scores based on the same items measure different traits and characteristics across different cultural groups. To reduce construct, method and item bias, the researcher can consider these strategies: (1) simply comparing average results on certain measuring instruments; (2) comparing only the reliability of certain dimensions of the measurement instruments, applied to the "target" and "source" samples of participants, i.e. from different cultures; (3) comparing the "framed" factor structure (fixed number of factors) of the measurement instruments, applied to the samples from the "target" and "source" cultures, using an exploratory factor analysis strategy on separate samples; (4) comparing the complete constructs ("unframed" factor analysis, i.e. an unlimited number of factors) in relation to their best psychometric properties and interpretability (best suited to certain cultures, applying an exploratory factor analysis strategy); or (5) checking the similarity of the constructs in the samples from different cultures (using a structural equation modeling approach). Each approach has its advantages and disadvantages, which are discussed.
The Behavioural Biogeosciences: Moving Beyond Evolutionary Adaptation and Innate Reasoning
NASA Astrophysics Data System (ADS)
Glynn, P. D.
2014-12-01
Human biases and heuristics reflect adaptation over our evolutionary past to frequently experienced situations that affected our survival and that provided sharp, distinguishable feedback at the level of the individual. Human behavior, however, is not well adapted to the more diffusely experienced (i.e. less immediately/locally acute) problems and issues that scientists and society often seek to address today. Several human biases are identified that affect how science is conducted and used. These biases include an innate discounting of less visible phenomena/systems and of long-term perspectives, as well as a general lack of consideration of the coupling between the resources that we use and the waste that we consequently produce. Other biases include strong beliefs in human exceptionalism and separateness from "nature". Francis Bacon (The New Organon, 1620) provided a classification of the factors, the "idols of the mind", that bias the pursuit of greater knowledge. How can we address these biases and the factors that affect behaviour and the pursuit of knowledge, and that ultimately impact the sustainability and resilience of human societies, resources and environments? A process for critical analysis is proposed that solicits explicit accounting and cognizance of these potential human biases and factors. Seeking a greater diversity of independent perspectives is essential, both in the conduct of science and in its application to the management of natural resources and environments. Accountability, traceability and structured processes are critical in this endeavor. The scientific methods designed during the industrial revolution are necessary, but insufficient, for addressing the issues of today. A new area of study in "the behavioural biogeosciences" is suggested that counters, or at least closely re-evaluates, our normal (i.e. adapted) human priorities of observation and study, as well as our judgements and decision-making.
Lean Keng, Soon; AlQudah, Hani Nawaf Ibrahim
2017-02-01
To raise awareness of critical care nurses' cognitive bias in decision-making, its relationship with leadership styles and its impact on care delivery. The relationship between critical care nurses' decision-making and leadership styles in hospitals has been widely studied, but the influence of cognitive bias on decision-making and leadership styles in critical care environments remains poorly understood, particularly in Jordan. Two-phase mixed-methods sequential explanatory design and grounded theory. Setting: the critical care unit, Prince Hamza Hospital, Jordan. Participant sampling: convenience sampling in Phase 1 (quantitative, n = 96) and purposive sampling in Phase 2 (qualitative, n = 20). A pilot-tested quantitative survey of 96 critical care nurses was conducted in 2012, followed by qualitative in-depth interviews, informed by the quantitative results, with 20 critical care nurses in 2013. Quantitative data were analysed with descriptive statistics and simple linear regression; qualitative data with thematic (constant comparative) analysis. Quantitative findings: correlations were found between rationality and cognitive bias, rationality and task-oriented leadership styles, cognitive bias and democratic communication styles, and cognitive bias and task-oriented leadership styles. Qualitative findings: 'being competent', 'organizational structures', 'feeling self-confident' and 'being supported' in the work environment were identified as key factors influencing critical care nurses' cognitive bias in decision-making and leadership styles. Cognitive bias in decision-making and leadership styles had a two-way impact (strengthening and weakening) on critical care nurses' practice performance. There is a need to heighten critical care nurses' consciousness of cognitive bias in decision-making and leadership styles and its impact, and to develop organization-level strategies to increase non-biased decision-making. © 2016 John Wiley & Sons Ltd.
Cook, Thomas D; Steiner, Peter M
2010-03-01
In this article, we note the many ontological, epistemological, and methodological similarities between how Campbell and Rubin conceptualize causation. We then explore 3 differences in their written emphases about individual case matching in observational studies. We contend that (a) Campbell places greater emphasis than Rubin on the special role of pretest measures of outcome among matching variables; (b) Campbell is more explicitly concerned with unreliability in the covariates; and (c) for analyzing the outcome, only Rubin emphasizes the advantages of using propensity score over regression methods. To explore how well these 3 factors reduce bias, we reanalyze and review within-study comparisons that contrast experimental and statistically adjusted nonexperimental causal estimates from studies with the same target population and treatment content. In this context, the choice of covariates counts most for reducing selection bias, and the pretest usually plays a special role relative to all the other covariates considered singly. Unreliability in the covariates also influences bias reduction but by less. Furthermore, propensity score and regression methods produce comparable degrees of bias reduction, though these within-study comparisons may not have met the theoretically specified conditions most likely to produce differences due to analytic method.
Sex-biased phoretic mite load on two seaweed flies: Coelopa frigida and Coelopa pilipes.
Gilburn, Andre S; Stewart, Katie M; Edward, Dominic A
2009-12-01
Two hypotheses explain male-biased parasitism. Physiological costs of male sexually selected characteristics can reduce immunocompetence. Alternatively, ecological differences could generate male-biased parasitism. One method of comparing the importance of the two theories is to investigate patterns of phoresy, which are likely to be generated only by ecological, rather than immunological, differences between the sexes. Here we studied the pattern of phoresy of the mite Thinoseius fucicola on two species of seaweed fly hosts, Coelopa frigida and Coelopa pilipes. We found a highly male-biased pattern of phoresy of T. fucicola on both species. These are the first reported instances of sex-biased phoresy in a solely phoretic parasite. We also report the first two cases of size-biased phoresy. We suggest that ecological factors, particularly male mate searching, generated the male-biased patterns of phoresy. We highlight the potential importance of studies of phoresy in determining the relative roles of the immunocompetence and ecological theories in generating male-biased parasitism, and we suggest that more studies of patterns of phoresy be carried out to allow detailed comparisons with patterns of parasitism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Chen; Perez, Danny; Voter, Arthur F.
Hyperdynamics is a powerful method to significantly extend the time scales amenable to molecular dynamics simulation of infrequent events. One outstanding challenge, however, is the development of the so-called bias potential required by the method. In this work, we design a bias potential using information about all minimum energy pathways (MEPs) out of the current state. While this approach is not suitable for use in an actual hyperdynamics simulation, because the pathways are generally not known in advance, it allows us to show that it is possible to come very close to the theoretical boost limit of hyperdynamics while maintaining high accuracy. We demonstrate this by applying this MEP-based hyperdynamics (MEP-HD) to metallic surface diffusion systems. In most cases, MEP-HD gives boost factors that are orders of magnitude larger than the best existing bias potential, indicating that further development of hyperdynamics bias potentials could have a significant payoff. Lastly, we discuss potential practical uses of MEP-HD, including the possibility of developing MEP-HD into a true hyperdynamics.
Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity
Manly, Bryan F.J.; Schmutz, Joel A.
2001-01-01
The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of 1 modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested 2 additional methods. One of these 2 new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and which bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or using Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield method than with our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated the use of all estimators with data from emperor goose goslings. We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous modifications.
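For reference, the classical Mayfield estimator that the new methods extend can be sketched in a few lines, assuming hypothetical nest-visit data; the Iterative Mayfield and maximum likelihood estimators developed in the paper are not reproduced here.

    import numpy as np

    # Hypothetical nest-visit data: exposure days observed per nest and
    # whether the nest failed during the interval.
    exposure_days = np.array([12, 9, 15, 7, 14, 11, 10, 13])
    failed = np.array([0, 1, 0, 1, 0, 0, 1, 0])

    # Mayfield estimator: daily mortality = failures / total exposure days,
    # assuming a constant daily survival rate over the study period.
    daily_survival = 1.0 - failed.sum() / exposure_days.sum()

    # Survival over a 28-day nesting period implied by constant daily survival.
    period_survival = daily_survival ** 28
    print(f"daily survival = {daily_survival:.3f}, 28-day survival = {period_survival:.3f}")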
Pooling across cells to normalize single-cell RNA sequencing data with many zero counts.
Lun, Aaron T L; Bach, Karsten; Marioni, John C
2016-04-27
Normalization of single-cell RNA sequencing data is necessary to eliminate cell-specific biases prior to downstream analyses. However, this is not straightforward for noisy single-cell data where many counts are zero. We present a novel approach where expression values are summed across pools of cells, and the summed values are used for normalization. Pool-based size factors are then deconvolved to yield cell-based factors. Our deconvolution approach outperforms existing methods for accurate normalization of cell-specific biases in simulated data. Similar behavior is observed in real data, where deconvolution improves the relevance of results of downstream analyses.
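A simplified sketch of the pool-and-deconvolve idea on toy count data follows: pool-based size factors are computed as median ratios to an average pseudo-cell and then deconvolved into cell-based factors by least squares. This is a stripped-down illustration, not the authors' published algorithm or implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy counts: 200 genes x 30 cells with known cell-specific size factors.
    true_sf = rng.uniform(0.5, 2.0, size=30)
    gene_means = rng.gamma(2.0, 5.0, size=200)
    counts = rng.poisson(np.outer(gene_means, true_sf))   # genes x cells

    ncells = counts.shape[1]
    ref = counts.mean(axis=1)                              # average "pseudo-cell"
    order = np.argsort(counts.sum(axis=0))                 # ring ordering by library size

    rows, pool_factors = [], []
    for pool_size in (5, 7):                               # several pool sizes keep the system well conditioned
        for start in range(ncells):
            members = order[(start + np.arange(pool_size)) % ncells]
            pooled = counts[:, members].sum(axis=1)
            keep = ref > 0
            pool_factors.append(np.median(pooled[keep] / ref[keep]))
            row = np.zeros(ncells)
            row[members] = 1.0                             # pool factor ~ sum of member cell factors
            rows.append(row)

    # Deconvolve pool-based factors into cell-based factors by least squares.
    cell_sf, *_ = np.linalg.lstsq(np.vstack(rows), np.array(pool_factors), rcond=None)
    cell_sf /= cell_sf.mean()
    print("correlation with truth:", np.corrcoef(cell_sf, true_sf / true_sf.mean())[0, 1])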
Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui
2014-09-01
Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to intensity inhomogeneity, which is also commonly known as bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them deal with the bias field by using a necessary pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method which can segment brain MR images while correcting the bias field when segmenting images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of the initialization of the algorithm, we reconstructed the energy function to be convex and minimized it using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate bias fields with quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities with promising results. Copyright © 2014 Elsevier Inc. All rights reserved.
Ward, Zachary J.; Long, Michael W.; Resch, Stephen C.; Gortmaker, Steven L.; Cradock, Angie L.; Giles, Catherine; Hsiao, Amber; Wang, Y. Claire
2016-01-01
Background State-level estimates from the Centers for Disease Control and Prevention (CDC) underestimate the obesity epidemic because they use self-reported height and weight. We describe a novel bias-correction method and produce corrected state-level estimates of obesity and severe obesity. Methods Using non-parametric statistical matching, we adjusted self-reported data from the Behavioral Risk Factor Surveillance System (BRFSS) 2013 (n = 386,795) using measured data from the National Health and Nutrition Examination Survey (NHANES) (n = 16,924). We validated our national estimates against NHANES and estimated bias-corrected state-specific prevalence of obesity (BMI≥30) and severe obesity (BMI≥35). We compared these results with previous adjustment methods. Results Compared to NHANES, self-reported BRFSS data underestimated the national prevalence of obesity by 16% (28.67% vs 34.01%) and severe obesity by 23% (11.03% vs 14.26%). Our method was not significantly different from NHANES for obesity or severe obesity, while previous methods underestimated both. Only four states had a corrected obesity prevalence below 30%, and four exceeded 40%; in contrast, most states were below 30% in CDC maps. Conclusions Twelve million adults with obesity (including 6.7 million with severe obesity) were misclassified by CDC state-level estimates. Previous bias-correction methods also resulted in underestimates. Accurate state-level estimates are necessary to plan for resources to address the obesity epidemic. PMID:26954566
Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.
Obuchowski, Nancy A; Bullen, Jennifer
2017-01-01
Introduction Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
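Given technical-performance estimates of the kind described above, the resulting confidence intervals are simple to construct. The sketch below assumes hypothetical values for the within-subject precision and a negligible fixed bias, and uses the standard normal-theory formulas for a single measurement and for change between two time points.

    import numpy as np

    # Hypothetical technical-performance estimates from prior studies.
    wSD = 4.0    # within-subject (test-retest) standard deviation, in biomarker units
    bias = 0.0   # fixed bias assumed negligible (<12% per the simulation results)

    def ci_single(y, z=1.96):
        """95% CI for a new patient's true biomarker value from one measurement."""
        centre = y - bias
        return centre - z * wSD, centre + z * wSD

    def ci_change(y1, y2, z=1.96):
        """95% CI for true change between two time points (independent errors)."""
        delta = y2 - y1
        half_width = z * np.sqrt(2.0) * wSD
        return delta - half_width, delta + half_width

    print("single measurement:", ci_single(52.0))
    print("change over time:  ", ci_change(52.0, 44.0))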
Speed Biases With Real-Life Video Clips
Rossi, Federica; Montanaro, Elisa; de’Sperati, Claudio
2018-01-01
We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate “natural” video compression techniques based on sub-threshold temporal squeezing. PMID:29615875
Speed Biases With Real-Life Video Clips.
Rossi, Federica; Montanaro, Elisa; de'Sperati, Claudio
2018-01-01
We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate "natural" video compression techniques based on sub-threshold temporal squeezing.
Assessing implicit gender bias in Medical Student Performance Evaluations.
Axelson, Rick D; Solow, Catherine M; Ferguson, Kristi J; Cohen, Michael B
2010-09-01
For medical schools, the increasing presence of women makes it especially important that potential sources of gender bias be identified and removed from student evaluation methods. Our study looked for patterns of gender bias in adjective data used to inform our Medical Student Performance Evaluations (MSPEs). Multigroup Confirmatory Factor Analysis (CFA) was used to model the latent structure of the adjectives attributed to students (n = 657) and to test for systematic scoring errors by gender. Gender bias was evident in two areas: (a) women were more likely than comparable men to be described as "compassionate," "sensitive," and "enthusiastic," and (b) men were more likely than comparable women to be seen as "quick learners." The gender gap in "quick learner" attribution grows with increasing student proficiency; men's rate of increase is over twice that of women's. Technical and nontechnical approaches for ameliorating the impact of gender bias on student recommendations are suggested.
Sahi, Malvinder Singh; Mahawar, Bablesh; Rajpurohit, Sajjan
2017-01-01
Introduction Pulse oximetry is a widely used tool; unfortunately, there is a paucity of data investigating its accuracy in Intensive Care Units (ICUs) and whether pulse oximeters meet the mandated FDA criteria, as claimed, in critically ill patients. Aim To assess the bias, precision and accuracy of pulse oximeters used in the ICU and the factors affecting them. Materials and Methods A prospective cohort study, including 129 patients admitted to the ICU of a tertiary referral centre. Pulse oximetry and blood gas measurements were done simultaneously. Pulse oximetry was done using two pulse oximeters: Nonin and Philips. Physiological variables such as haemoglobin, lactate, use of vasopressors and blood pressure were recorded. Bland-Altman plots were constructed to determine bias and limits of agreement. The effect of physiological variables on bias, and the difference in bias between devices, was determined using SPSS. Results Pulse oximetry overestimated arterial oxygen saturation (SaO2) by 1.44%. There was a negative correlation between bias and SaO2 (r=-0.32) and a positive correlation with lactate (r=0.16). The Philips pulse oximeter had significantly higher bias and variability than the Nonin pulse oximeter (2.49±2.99 versus 0.46±1.68, mean difference = 1.98, 95% C.I. = 1.53-2.43, p-value <0.001). Conclusion Pulse oximetry overestimates SaO2. Bias tends to increase with rising lactate and hypoxia. There is heterogeneity in the performance of various pulse oximetry devices in the ICU. PMID:28764215
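A minimal sketch of the Bland-Altman summary used here: bias as the mean SpO2-SaO2 difference, precision as the standard deviation of the differences, 95% limits of agreement, and the accuracy root-mean-square (ARMS) figure referenced by FDA criteria. The paired readings are simulated, not the study's data.

    import numpy as np

    rng = np.random.default_rng(42)

    # Simulated paired readings: arterial SaO2 (blood gas) and pulse-oximeter SpO2 (%).
    sao2 = rng.uniform(85, 100, size=120)
    spo2 = sao2 + rng.normal(loc=1.4, scale=2.0, size=sao2.size)   # oximeter overestimates

    diff = spo2 - sao2
    bias = diff.mean()                          # Bland-Altman bias
    sd = diff.std(ddof=1)                       # precision (SD of differences)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    arms = np.sqrt(np.mean(diff ** 2))          # accuracy root-mean-square (ARMS)

    print(f"bias = {bias:.2f}%, SD = {sd:.2f}%, "
          f"LoA = ({loa[0]:.2f}, {loa[1]:.2f})%, ARMS = {arms:.2f}%")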
DOE Office of Scientific and Technical Information (OSTI.GOV)
Disney, R.K.
1994-10-01
The methodology for handling bias and uncertainty when calculational methods are used in criticality safety evaluations (CSEs) is a rapidly evolving technology. The changes in the methodology are driven by a number of factors. One factor responsible for changes in the methodology for handling bias and uncertainty in CSEs within the overview of the US Department of Energy (DOE) is a shift in the overview function from a "site" perception to a more uniform or "national" perception. Other causes for change or improvement in the methodology for handling calculational bias and uncertainty are: (1) an increased demand for benchmark criticals data to expand the area (range) of applicability of existing data, (2) a demand for new data to supplement existing benchmark criticals data, (3) the increased reliance on (or need for) computational benchmarks which supplement (or replace) experimental measurements in critical assemblies, and (4) an increased demand for benchmark data applicable to the expanded range of conditions and configurations encountered in DOE site restoration and remediation.
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
NASA Astrophysics Data System (ADS)
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and the Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. In addition, because each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software is developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
Quantifying lead-time bias in risk factor studies of cancer through simulation.
Jansen, Rick J; Alexander, Bruce H; Anderson, Kristin E; Church, Timothy R
2013-11-01
Lead-time is inherent in early detection and creates bias in observational studies of screening efficacy, but its potential to bias effect estimates in risk factor studies is not always recognized. We describe a form of this bias that conventional analyses cannot address and develop a model to quantify it. Surveillance, Epidemiology, and End Results (SEER) data form the basis for estimates of age-specific preclinical incidence, and log-normal distributions describe the preclinical duration distribution. Simulations assume a joint null hypothesis of no effect of either the risk factor or screening on the preclinical incidence of cancer, and then quantify the bias as the risk-factor odds ratio (OR) estimated from this null study. This bias can be used as a factor to adjust the observed OR in the actual study. For this particular study design, as the average preclinical duration increased, the bias in the total physical activity OR monotonically increased from 1% to 22% above the null, whereas the smoking OR monotonically decreased from 1% above the null to 5% below the null. The finding of nontrivial bias in fixed risk-factor effect estimates demonstrates the importance of quantitatively evaluating it in susceptible studies. Copyright © 2013 Elsevier Inc. All rights reserved.
Forder, Julien; Malley, Juliette; Towers, Ann-Marie; Netten, Ann
2014-08-01
The aim is to describe and trial a pragmatic method to produce estimates of the incremental cost-effectiveness of care services from survey data. The main challenge is in estimating the counterfactual; that is, what the patient's quality of life would be if they did not receive that level of service. A production function method is presented, which seeks to distinguish the variation in care-related quality of life in the data that is due to service use as opposed to other factors. A problem is that relevant need factors also affect the amount of service used and therefore any missing factors could create endogeneity bias. Instrumental variable estimation can mitigate this problem. This method was applied to a survey of older people using home care as a proof of concept. In the analysis, we were able to estimate a quality-of-life production function using survey data with the expected form and robust estimation diagnostics. The practical advantages with this method are clear, but there are limitations. It is computationally complex, and there is a risk of misspecification and biased results, particularly with IV estimation. One strategy would be to use this method to produce preliminary estimates, with a full trial conducted thereafter, if indicated. Copyright © 2013 John Wiley & Sons, Ltd.
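A minimal sketch of the endogeneity problem and its instrumental-variable remedy on simulated survey-like data: naive least squares is biased by an unobserved need factor, while a manual two-stage least squares estimate using an instrument that shifts service use only recovers the true effect. The variable names and data-generating values are illustrative, not the study's.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 2000

    # Simulated survey: unobserved need raises service use but lowers quality of life,
    # making naive OLS biased. 'z' is an instrument that shifts service use only.
    need = rng.normal(size=n)                     # unobserved confounder
    z = rng.normal(size=n)                        # instrument (e.g. local supply variation)
    service = 1.0 + 0.8 * z + 0.9 * need + rng.normal(size=n)
    qol = 2.0 + 0.5 * service - 1.2 * need + rng.normal(size=n)   # true effect = 0.5

    def ols(y, x):
        X = np.column_stack([np.ones(len(y)), x])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    beta_ols = ols(qol, service)[1]

    # Two-stage least squares: first stage predicts service from the instrument,
    # second stage regresses quality of life on the prediction.
    service_hat = np.column_stack([np.ones(n), z]) @ ols(service, z)
    beta_iv = ols(qol, service_hat)[1]

    print(f"naive OLS effect = {beta_ols:.2f} (biased), "
          f"2SLS effect = {beta_iv:.2f} (close to 0.5)")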
ERIC Educational Resources Information Center
Zhang, Xijuan; Savalei, Victoria
2016-01-01
Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format,…
On the Performance of T2∗ Correction Methods for Quantification of Hepatic Fat Content
Reeder, Scott B.; Bice, Emily K.; Yu, Huanzhou; Hernando, Diego; Pineda, Angel R.
2014-01-01
Nonalcoholic fatty liver disease is the most prevalent chronic liver disease in Western societies. MRI can quantify liver fat, the hallmark feature of nonalcoholic fatty liver disease, so long as multiple confounding factors, including T2∗ decay, are addressed. Recently developed MRI methods that correct for T2∗ to improve the accuracy of fat quantification either assume a common T2∗ (single-T2∗) for better stability and noise performance or independently estimate the T2∗ for water and fat (dual-T2∗) for reduced bias, but with a noise performance penalty. In this study, the tradeoff between bias and variance for different T2∗ correction methods is analyzed using the Cramér-Rao bound analysis for biased estimators and is validated using Monte Carlo experiments. A noise performance metric for estimation of fat fraction is proposed. Cramér-Rao bound analysis for biased estimators was used to compute the metric at different echo combinations. Optimization was performed for six echoes and typical T2∗ values. This analysis showed that all methods have better noise performance with very short first echo times and an echo spacing of ∼π/2 for single-T2∗ correction, and ∼2π/3 for dual-T2∗ correction. Interestingly, when an echo spacing and first echo shift of ∼π/2 are used, methods without T2∗ correction have less than 5% bias in the estimates of fat fraction. PMID:21661045
[Childhood sexual behavior as an indicator of sexual abuse: professionals' criteria and biases].
González Ortega, Eva; Orgaz Baz, Begoña; López Sánchez, Félix
2012-01-01
Some sexual behaviors are related to child sexual abuse experiences, but none unequivocally. Therefore, professionals might use non-empirical-based criteria and be biased when detecting and reporting victims. To check this hypothesis, we presented 974 Spanish and Latin American professionals from different fields (Psychology, Education, Health, Social Services, Justice, and Police Force) with hypothetical situations of child sexual behavior (varying the sex, age and behavior) by using an experimental vignette method based on Factorial Survey. Participants were asked to indicate whether such behaviors are a sign of abuse and whether they would report them. We also measured demographic, academic, professional and attitude factors. According to the analysis, professionals' suspicion of abuse is more affected by personal factors, whereas their reporting intention depends more on situational factors. The main criterion adopted is the type of sexual behavior, with professionals being more likely to suspect and report in response to aggressive sexual behavior and precocious sexual knowledge. Professionals' attitudes to sexuality seem to generate biases, as those who are erotophobic are more likely to suspect abuse. None of the sexual behaviors was seen as evidence of abuse.
Variance analysis of forecasted streamflow maxima in a wet temperate climate
NASA Astrophysics Data System (ADS)
Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.
2018-05-01
Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, albeit we stress that researchers strictly adhere to rules from extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for the 2-, 20- and 100-year streamflow events in the wet temperate region studied. The variance of the maxima projections was dominated by climate model factors and extreme value analyses.
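A minimal sketch of the extreme value step, assuming synthetic annual maxima for a control and a forecast period: fit a GEV distribution to each series with scipy and express the change in the 2-, 20- and 100-year return levels as a relative change, the quantity used above for bias correction.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(3)

    # Synthetic annual maxima series (m^3/s) for a control and a forecast period.
    control = genextreme.rvs(c=-0.1, loc=300, scale=80, size=40, random_state=rng)
    forecast = genextreme.rvs(c=-0.1, loc=360, scale=95, size=40, random_state=rng)

    def return_level(maxima, T):
        """T-year return level from a GEV fit to annual maxima."""
        c, loc, scale = genextreme.fit(maxima)
        return genextreme.ppf(1.0 - 1.0 / T, c, loc, scale)

    for T in (2, 20, 100):
        q_ctrl, q_fcst = return_level(control, T), return_level(forecast, T)
        change = 100.0 * (q_fcst - q_ctrl) / q_ctrl   # relative change used for bias correction
        print(f"{T:>3}-yr event: control {q_ctrl:.0f}, forecast {q_fcst:.0f}, change {change:+.0f}%")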
Risk factors for chronic subdural haematoma formation do not account for the established male bias.
Marshman, Laurence A G; Manickam, Appukutty; Carter, Danielle
2015-04-01
The 'subdural space' is an artefact of inner dural border layer disruption: it is not anatomical but always pathological. A male bias has long been accepted for chronic subdural haematomas (CSDH), and increased male frequencies of trauma and/or alcohol abuse are often cited as likely explanations: however, no study has validated this. We investigated to see which risk factors accounted for the male bias with CSDH. Retrospective review of prospectively collected data. A male bias (M:F 97:58) for CSDH was confirmed in n=155 patients. The largest risk factor for CSDH was cerebral atrophy (M:F 94% vs. 91%): whilst a male bias prevailed in mild-moderate cases (M:F 58% vs. 41%), a female bias prevailed for severe atrophy (F:M 50% vs. 36%) (χ²=3.88, P=0.14). Risk factors for atrophy also demonstrated a female bias, some approaching statistical significance: atrial fibrillation (P=0.05), stroke/TIA (P=0.06) and diabetes mellitus (P=0.07). There was also a trend for older age in females (F:M 72±13 years vs. 68±15 years, P=0.09). The third largest risk factor, after atrophy and trauma, i.e. anti-coagulant and anti-platelet use, was statistically significantly biased towards females (F:M 50% vs. 33%, P=0.04). No risk factor accounted for the established male bias with CSDH. In particular, a history of trauma (head injury or fall [M:F 50% vs. 57%, P=0.37]) and alcohol abuse (M:F 17% vs. 16%, P=0.89) were remarkably similar between genders. No recognised risk factor for CSDH formation accounted for the established male bias: risk factor trends generally favoured females. In particular, and in contrast to popular belief, the male CSDH bias did not relate to increased male frequencies of trauma and/or alcohol abuse. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
Gupta, Manan; Joshi, Amitabh; Vidya, T N C
2017-01-01
Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. Therefore, the effect of social organization on bias in population estimation could be removed by using POPAN with specific parameter combinations, to obtain population size estimates in a social species.
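To illustrate why dependence among captures matters, the sketch below simulates a toy population in which whole family groups are caught together and applies a simple two-sample Chapman (Lincoln-Petersen) estimator; it is far simpler than the POPAN and Robust Design models examined in the paper, but it shows how clustered captures degrade estimator performance relative to independent captures.

    import numpy as np

    rng = np.random.default_rng(11)

    N, family_size, p = 600, 6, 0.3           # true population, group size, capture prob.
    families = N // family_size

    def chapman(caught1, caught2):
        n1, n2, m = caught1.sum(), caught2.sum(), (caught1 & caught2).sum()
        # Chapman's variant avoids division by zero when no recaptures occur.
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    def sample(clustered):
        if clustered:
            # Whole families are caught together (fission-fusion-like dependence).
            fam_caught = rng.random(families) < p
            return np.repeat(fam_caught, family_size)
        return rng.random(N) < p               # independent captures

    for clustered in (False, True):
        est = np.array([chapman(sample(clustered), sample(clustered)) for _ in range(2000)])
        print(f"clustered={clustered}: mean bias = {est.mean() - N:+.1f}, SD = {est.std():.1f}")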
A New Source Biasing Approach in ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevill, Aaron M; Mosher, Scott W
2012-01-01
The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source biasing method, including the application of w̄.
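As a heavily simplified, hypothetical illustration (not ADVANTG's algorithm), the sketch below estimates by Monte Carlo the fraction of a bounding region that falls inside an irregular source cell, together with its binomial uncertainty; this acceptance fraction is one ingredient that any mean-weight correction for cell-rejection sampling must account for.

    import numpy as np

    rng = np.random.default_rng(5)

    def inside_source_cell(x, y, z):
        """Hypothetical complex source cell: a cylinder with a conical void removed."""
        in_cylinder = (x**2 + y**2 <= 1.0) & (np.abs(z) <= 1.0)
        in_cone = (x**2 + y**2 <= (0.5 * (1.0 - z))**2) & (z >= 0.0)
        return in_cylinder & ~in_cone

    # Bounding box used for rejection sampling of source positions.
    n = 200_000
    x, y, z = (rng.uniform(-1, 1, n) for _ in range(3))
    accepted = inside_source_cell(x, y, z)

    frac = accepted.mean()                       # acceptance fraction
    stderr = np.sqrt(frac * (1 - frac) / n)      # binomial standard error
    print(f"acceptance fraction = {frac:.4f} +/- {stderr:.4f}")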
Joshi, Amitabh; Vidya, T. N. C.
2017-01-01
Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. Therefore, the effect of social organization on bias in population estimation could be removed by using POPAN with specific parameter combinations, to obtain population size estimates in a social species. PMID:28306735
Autocalibration method for non-stationary CT bias correction.
Vegas-Sánchez-Ferrero, Gonzalo; Ledesma-Carbayo, Maria J; Washko, George R; Estépar, Raúl San José
2018-02-01
Computed tomography (CT) is a widely used imaging modality for screening and diagnosis. However, the deleterious effects of radiation exposure inherent in CT imaging require the development of image reconstruction methods which can reduce exposure levels. The development of iterative reconstruction techniques is now enabling the acquisition of low-dose CT images whose quality is comparable to that of CT images acquired with much higher radiation dosages. However, the characterization and calibration of the CT signal due to changes in dosage and reconstruction approaches is crucial to provide clinically relevant data. Although CT scanners are calibrated as part of the imaging workflow, the calibration is limited to select global reference values and does not consider other inherent factors of the acquisition that depend on the subject scanned (e.g. photon starvation, partial volume effect, beam hardening) and result in a non-stationary noise response. In this work, we analyze the effect of reconstruction biases caused by non-stationary noise and propose an autocalibration methodology to compensate for it. Our contributions are: 1) the derivation of a functional relationship between observed bias and non-stationary noise, 2) a robust and accurate method to estimate the local variance, 3) an autocalibration methodology that does not necessarily rely on a calibration phantom, attenuates the bias caused by noise and removes the systematic bias observed in devices from different vendors. The validation of the proposed methodology was performed with a physical phantom and clinical CT scans acquired with different configurations (kernels, doses, algorithms including iterative reconstruction). The results confirmed the suitability of the proposed methods for removing the intra-device and inter-device reconstruction biases. Copyright © 2017 Elsevier B.V. All rights reserved.
Dependence of the Energy Resolution of a Hemispherical Semiconductor Detector on the Bias Voltage
NASA Astrophysics Data System (ADS)
Samedov, V. V.
2017-12-01
It is shown that the series expansion of the amplitude and variance of the hemispherical semiconductor detector signal in inverse bias voltage allows finding the Fano factor, the product of electron lifetime and mobility, the degree of inhomogeneity of the trap density in the semiconductor material, and the relative variance of the electronic channel gain. An important advantage of the proposed method is that it is independent of the electronic channel gain and noise.
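The abstract does not reproduce the expansion itself. Purely as a hedged illustration of the kind of relation involved (the notation, symbols, and truncation below are assumptions, not the paper's expressions), the mean signal amplitude and its relative variance might be written as power series in the inverse bias voltage:

```latex
% Illustrative notation only: \bar{A}(V) mean signal amplitude at bias V,
% F Fano factor, \bar{n} mean number of charge carriers created per event,
% v_{el} relative variance of the electronic channel gain; the coefficients
% a_1 and b_1 would carry the mobility-lifetime product and the degree of
% trap-density inhomogeneity referred to in the abstract.
\bar{A}(V) \approx A_{\infty}\left(1 - \frac{a_{1}}{V} + \mathcal{O}(V^{-2})\right),
\qquad
\frac{\sigma_{A}^{2}(V)}{\bar{A}^{2}(V)} \approx \frac{F}{\bar{n}} + v_{\mathrm{el}}
  + \frac{b_{1}}{V} + \mathcal{O}(V^{-2})
```

Fitting measured amplitudes and variances against 1/V would then separate the voltage-independent terms (the Fano and electronic-gain contributions) from the 1/V coefficients, which is consistent with the abstract's claim that the procedure does not require knowledge of the electronic channel gain.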
Healthy Worker Effect Phenomenon: Revisited with Emphasis on Statistical Methods – A Review
Chowdhury, Ritam; Shah, Divyang; Payal, Abhishek R.
2017-01-01
Known since 1885 but studied systematically only in the past four decades, the healthy worker effect (HWE) is a special form of selection bias common to occupational cohort studies. The phenomenon has been under debate for many years with respect to its impact, conceptual approach (confounding, selection bias, or both), and ways to resolve or account for its effect. The effect is not uniform across age groups, gender, race, and types of occupations, nor is it constant over time. Hence, assessing HWE and accounting for it in statistical analyses is complicated and requires sophisticated methods. Here, we review the HWE, factors affecting it, and methods developed so far to deal with it. PMID:29391741
Booth, Charlotte; Songco, Annabel; Parsons, Sam; Heathcote, Lauren; Vincent, John; Keers, Robert; Fox, Elaine
2017-12-29
Optimal psychological development is dependent upon a complex interplay between individual and situational factors. Investigating the development of these factors in adolescence will help to improve understanding of emotional vulnerability and resilience. The CogBIAS longitudinal study (CogBIAS-L-S) aims to combine cognitive and genetic approaches to investigate risk and protective factors associated with the development of mood and impulsivity-related outcomes in an adolescent sample. CogBIAS-L-S is a three-wave longitudinal study of typically developing adolescents conducted over 4 years, with data collection at age 12, 14 and 16. At each wave participants will undergo multiple assessments including a range of selective cognitive processing tasks (e.g. attention bias, interpretation bias, memory bias) and psychological self-report measures (e.g. anxiety, depression, resilience). Saliva samples will also be collected at the baseline assessment for genetic analyses. Multilevel statistical analyses will be performed to investigate the developmental trajectory of cognitive biases on psychological functioning, as well as the influence of genetic moderation on these relationships. CogBIAS-L-S represents the first longitudinal study to assess multiple cognitive biases across adolescent development and the largest study of its kind to collect genetic data. It therefore provides a unique opportunity to understand how genes and the environment influence the development and maintenance of cognitive biases and provide insight into risk and protective factors that may be key targets for intervention.
Assessment of bias in US waterfowl harvest estimates
Padding, Paul I.; Royle, J. Andrew
2012-01-01
Context. North American waterfowl managers have long suspected that waterfowl harvest estimates derived from national harvest surveys in the USA are biased high. Survey bias can be evaluated by comparing survey results with like estimates from independent sources. Aims. We used band-recovery data to assess the magnitude of apparent bias in duck and goose harvest estimates, using mallards (Anas platyrhynchos) and Canada geese (Branta canadensis) as representatives of ducks and geese, respectively. Methods. We compared the number of reported mallard and Canada goose band recoveries, adjusted for band reporting rates, with the estimated harvests of banded mallards and Canada geese from the national harvest surveys. We used the results of those comparisons to develop correction factors that can be applied to annual duck and goose harvest estimates of the national harvest survey. Key results. National harvest survey estimates of banded mallards harvested annually averaged 1.37 times greater than those calculated from band-recovery data, whereas Canada goose harvest estimates averaged 1.50 or 1.63 times greater than comparable band-recovery estimates, depending on the harvest survey methodology used. Conclusions. Duck harvest estimates produced by the national harvest survey from 1971 to 2010 should be reduced by a factor of 0.73 (95% CI = 0.71–0.75) to correct for apparent bias. Survey-specific correction factors of 0.67 (95% CI = 0.65–0.69) and 0.61 (95% CI = 0.59–0.64) should be applied to the goose harvest estimates for 1971–2001 (duck stamp-based survey) and 1999–2010 (HIP-based survey), respectively. Implications. Although this apparent bias likely has not influenced waterfowl harvest management policy in the USA, it does have negative impacts on some applications of harvest estimates, such as indirect estimation of population size. For those types of analyses, we recommend applying the appropriate correction factor to harvest estimates.
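The correction factors quoted above are consistent with simply inverting the average ratio of survey estimates to band-recovery estimates; a minimal arithmetic check (values taken from the abstract):

```python
# Correction-factor arithmetic implied by the abstract: a survey that
# overestimates harvest by an average factor r needs a correction of 1/r.
ratios = {
    "ducks (mallard-based), 1971-2010": 1.37,
    "geese, duck stamp-based survey, 1971-2001": 1.50,
    "geese, HIP-based survey, 1999-2010": 1.63,
}
for label, r in ratios.items():
    print(f"{label}: survey / band-recovery = {r:.2f} -> correction factor = {1 / r:.2f}")
# Prints 0.73, 0.67 and 0.61, matching the reported correction factors.
```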
Bias Factors in Mathematics Achievement Tests among Israeli Students from the Former Soviet Union
ERIC Educational Resources Information Center
Levi-Keren, Michal
2016-01-01
This study explains mathematical difficulties of students who immigrated from the Former Soviet Union (FSU) vis-à-vis Israeli students, by identifying the existing bias factors in achievement tests. These factors are irrelevant to the mathematical knowledge being measured, and therefore threaten the test results. The bias factors were identified…
Takeuchi, Yoshinori; Shinozaki, Tomohiro; Matsuyama, Yutaka
2018-01-08
Despite the frequent use of self-controlled methods in pharmacoepidemiological studies, the factors that may bias the estimates from these methods have not been adequately compared in real-world settings. Here, we comparatively examined the impact of a time-varying confounder and its interactions with time-invariant confounders, time trends in exposures and events, restrictions, and misspecification of risk period durations on the estimators from three self-controlled methods. This study analyzed self-controlled case series (SCCS), case-crossover (CCO) design, and sequence symmetry analysis (SSA) using simulated and actual electronic medical records datasets. We evaluated the performance of the three self-controlled methods in simulated cohorts for the following scenarios: 1) time-invariant confounding with interactions between the confounders, 2) time-invariant and time-varying confounding without interactions, 3) time-invariant and time-varying confounding with interactions among the confounders, 4) time trends in exposures and events, 5) restricted follow-up time based on event occurrence, and 6) patient restriction based on event history. The sensitivity of the estimators to misspecified risk period durations was also evaluated. As a case study, we applied these methods to evaluate the risk of macrolides on liver injury using electronic medical records. In the simulation analysis, time-varying confounding produced bias in the SCCS and CCO design estimates, which aggravated in the presence of interactions between the time-invariant and time-varying confounders. The SCCS estimates were biased by time trends in both exposures and events. Erroneously short risk periods introduced bias to the CCO design estimate, whereas erroneously long risk periods introduced bias to the estimates of all three methods. Restricting the follow-up time led to severe bias in the SSA estimates. The SCCS estimates were sensitive to patient restriction. The case study showed that although macrolide use was significantly associated with increased liver injury occurrence in all methods, the value of the estimates varied. The estimations of the three self-controlled methods depended on various underlying assumptions, and the violation of these assumptions may cause non-negligible bias in the resulting estimates. Pharmacoepidemiologists should select the appropriate self-controlled method based on how well the relevant key assumptions are satisfied with respect to the available data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, D; Badano, A; Sempau, J
Purpose: Variance reduction techniques (VRTs) are employed in Monte Carlo simulations to obtain estimates with reduced statistical uncertainty for a given simulation time. In this work, we study the bias and efficiency of a VRT for estimating the response of imaging detectors. Methods: We implemented Directed Sampling (DS), preferentially directing a fraction of emitted optical photons directly towards the detector by altering the isotropic model. The weight of each optical photon is appropriately modified to maintain simulation estimates unbiased. We use a Monte Carlo tool called fastDETECT2 (part of the hybridMANTIS open-source package) for optical transport, modified for VRT. The weight of each photon is calculated as the ratio of original probability (no VRT) and the new probability for a particular direction. For our analysis of bias and efficiency, we use pulse height spectra, point response functions, and Swank factors. We obtain results for a variety of cases including analog (no VRT, isotropic distribution) and DS with fractions of 0.2 and 0.8 of the optical photons directed towards the sensor plane. We used 10,000 25-keV primaries. Results: The Swank factor for all cases in our simplified model converged fast (within the first 100 primaries) to a stable value of 0.9. The root mean square error per pixel for DS VRT for the point response function between analog and VRT cases was approximately 5e-4. Conclusion: Our preliminary results suggest that DS VRT does not affect the estimate of the mean for the Swank factor. Our findings indicate that it may be possible to design VRTs for imaging detector simulations to increase computational efficiency without introducing bias.
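A hedged sketch of the weight bookkeeping described above (an illustration of the ratio-of-probabilities rule, not the fastDETECT2/hybridMANTIS code): a fraction f of optical photons is forced into the cone subtending the sensor, which covers a fraction q of the full solid angle, and each photon carries the analog-to-biased probability ratio as its weight.

```python
import numpy as np

# Sketch of a Directed Sampling weight calculation (illustrative only).
# A fraction f of optical photons is emitted inside the cone subtending the
# sensor (solid-angle fraction q); the rest are emitted uniformly over the
# remaining directions.  Weights keep the estimates unbiased.
rng = np.random.default_rng(1)

def sample_direction(f, q):
    """Return (cos_theta, weight) for one optical photon."""
    if rng.random() < f:                                  # directed toward the sensor
        cos_t = 1.0 - 2.0 * q * rng.random()              # uniform inside the cone
        p_biased = f / q                                  # per unit solid-angle fraction
    else:                                                 # emitted outside the cone
        cos_t = -1.0 + 2.0 * (1.0 - q) * rng.random()
        p_biased = (1.0 - f) / (1.0 - q)
    p_analog = 1.0                                        # isotropic emission
    return cos_t, p_analog / p_biased

# Example: 20% of photons directed at a sensor covering 5% of the full sphere.
ws = np.array([sample_direction(f=0.2, q=0.05)[1] for _ in range(200_000)])
print("mean weight:", ws.mean())   # ~1, i.e. the game stays fair
```

Because the expectation of the weight under the biased sampling equals one, tallies built from these weighted photons remain unbiased, which is the property the study verifies for the Swank factor and point response function.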
Centrality categorization for Rp (d)+A in high-energy collisions
NASA Astrophysics Data System (ADS)
Adare, A.; Aidala, C.; Ajitanand, N. N.; Akiba, Y.; Al-Bataineh, H.; Alexander, J.; Angerami, A.; Aoki, K.; Apadula, N.; Aramaki, Y.; Atomssa, E. T.; Averbeck, R.; Awes, T. C.; Azmoun, B.; Babintsev, V.; Bai, M.; Baksay, G.; Baksay, L.; Barish, K. N.; Bassalleck, B.; Basye, A. T.; Bathe, S.; Baublis, V.; Baumann, C.; Bazilevsky, A.; Belikov, S.; Belmont, R.; Bennett, R.; Bhom, J. H.; Blau, D. S.; Bok, J. S.; Boyle, K.; Brooks, M. L.; Buesching, H.; Bumazhnov, V.; Bunce, G.; Butsyk, S.; Campbell, S.; Caringi, A.; Chen, C.-H.; Chi, C. Y.; Chiu, M.; Choi, I. J.; Choi, J. B.; Choudhury, R. K.; Christiansen, P.; Chujo, T.; Chung, P.; Chvala, O.; Cianciolo, V.; Citron, Z.; Cole, B. A.; Conesa Del Valle, Z.; Connors, M.; Csanád, M.; Csörgő, T.; Dahms, T.; Dairaku, S.; Danchev, I.; Das, K.; Datta, A.; David, G.; Dayananda, M. K.; Denisov, A.; Deshpande, A.; Desmond, E. J.; Dharmawardane, K. V.; Dietzsch, O.; Dion, A.; Donadelli, M.; Drapier, O.; Drees, A.; Drees, K. A.; Durham, J. M.; Durum, A.; Dutta, D.; D'Orazio, L.; Edwards, S.; Efremenko, Y. V.; Ellinghaus, F.; Engelmore, T.; Enokizono, A.; En'yo, H.; Esumi, S.; Fadem, B.; Fields, D. E.; Finger, M.; Finger, M.; Fleuret, F.; Fokin, S. L.; Fraenkel, Z.; Frantz, J. E.; Franz, A.; Frawley, A. D.; Fujiwara, K.; Fukao, Y.; Fusayasu, T.; Garishvili, I.; Glenn, A.; Gong, H.; Gonin, M.; Goto, Y.; Granier de Cassagnac, R.; Grau, N.; Greene, S. V.; Grim, G.; Grosse Perdekamp, M.; Gunji, T.; Gustafsson, H.-Å.; Haggerty, J. S.; Hahn, K. I.; Hamagaki, H.; Hamblen, J.; Han, R.; Hanks, J.; Haslum, E.; Hayano, R.; He, X.; Heffner, M.; Hemmick, T. K.; Hester, T.; Hill, J. C.; Hohlmann, M.; Holzmann, W.; Homma, K.; Hong, B.; Horaguchi, T.; Hornback, D.; Huang, S.; Ichihara, T.; Ichimiya, R.; Ikeda, Y.; Imai, K.; Inaba, M.; Isenhower, D.; Ishihara, M.; Issah, M.; Ivanischev, D.; Iwanaga, Y.; Jacak, B. V.; Jia, J.; Jiang, X.; Jin, J.; Johnson, B. M.; Jones, T.; Joo, K. S.; Jouan, D.; Jumper, D. S.; Kajihara, F.; Kamin, J.; Kang, J. H.; Kapustinsky, J.; Karatsu, K.; Kasai, M.; Kawall, D.; Kawashima, M.; Kazantsev, A. V.; Kempel, T.; Khanzadeev, A.; Kijima, K. M.; Kikuchi, J.; Kim, A.; Kim, B. I.; Kim, D. J.; Kim, E.-J.; Kim, Y.-J.; Kinney, E.; Kiss, Á.; Kistenev, E.; Kleinjan, D.; Kochenda, L.; Komkov, B.; Konno, M.; Koster, J.; Král, A.; Kravitz, A.; Kunde, G. J.; Kurita, K.; Kurosawa, M.; Kwon, Y.; Kyle, G. S.; Lacey, R.; Lai, Y. S.; Lajoie, J. G.; Lebedev, A.; Lee, D. M.; Lee, J.; Lee, K. B.; Lee, K. S.; Leitch, M. J.; Leite, M. A. L.; Li, X.; Lichtenwalner, P.; Liebing, P.; Linden Levy, L. A.; Liška, T.; Liu, H.; Liu, M. X.; Love, B.; Lynch, D.; Maguire, C. F.; Makdisi, Y. I.; Malik, M. D.; Manko, V. I.; Mannel, E.; Mao, Y.; Masui, H.; Matathias, F.; McCumber, M.; McGaughey, P. L.; McGlinchey, D.; Means, N.; Meredith, B.; Miake, Y.; Mibe, T.; Mignerey, A. C.; Miki, K.; Milov, A.; Mitchell, J. T.; Mohanty, A. K.; Moon, H. J.; Morino, Y.; Morreale, A.; Morrison, D. P.; Moukhanova, T. V.; Murakami, T.; Murata, J.; Nagamiya, S.; Nagle, J. L.; Naglis, M.; Nagy, M. I.; Nakagawa, I.; Nakamiya, Y.; Nakamura, K. R.; Nakamura, T.; Nakano, K.; Nam, S.; Newby, J.; Nguyen, M.; Nihashi, M.; Nouicer, R.; Nyanin, A. S.; Oakley, C.; O'Brien, E.; Oda, S. X.; Ogilvie, C. A.; Oka, M.; Okada, K.; Onuki, Y.; Orjuela Koop, J. D.; Oskarsson, A.; Ouchida, M.; Ozawa, K.; Pak, R.; Pantuev, V.; Papavassiliou, V.; Park, I. H.; Park, S. K.; Park, W. J.; Pate, S. F.; Pei, H.; Peng, J.-C.; Pereira, H.; Perepelitsa, D.; Peressounko, D. Yu.; Petti, R.; Pinkenburg, C.; Pisani, R. 
P.; Proissl, M.; Purschke, M. L.; Qu, H.; Rak, J.; Ravinovich, I.; Read, K. F.; Rembeczki, S.; Reygers, K.; Riabov, V.; Riabov, Y.; Richardson, E.; Roach, D.; Roche, G.; Rolnick, S. D.; Rosati, M.; Rosen, C. A.; Rosendahl, S. S. E.; Ružička, P.; Sahlmueller, B.; Saito, N.; Sakaguchi, T.; Sakashita, K.; Samsonov, V.; Sano, S.; Sato, T.; Sawada, S.; Sedgwick, K.; Seele, J.; Seidl, R.; Seto, R.; Sharma, D.; Shein, I.; Shibata, T.-A.; Shigaki, K.; Shimomura, M.; Shoji, K.; Shukla, P.; Sickles, A.; Silva, C. L.; Silvermyr, D.; Silvestre, C.; Sim, K. S.; Singh, B. K.; Singh, C. P.; Singh, V.; Slunečka, M.; Soltz, R. A.; Sondheim, W. E.; Sorensen, S. P.; Sourikova, I. V.; Stankus, P. W.; Stenlund, E.; Stoll, S. P.; Sugitate, T.; Sukhanov, A.; Sziklai, J.; Takagui, E. M.; Taketani, A.; Tanabe, R.; Tanaka, Y.; Taneja, S.; Tanida, K.; Tannenbaum, M. J.; Tarafdar, S.; Taranenko, A.; Themann, H.; Thomas, D.; Thomas, T. L.; Togawa, M.; Toia, A.; Tomášek, L.; Torii, H.; Towell, R. S.; Tserruya, I.; Tsuchimoto, Y.; Vale, C.; Valle, H.; van Hecke, H. W.; Vazquez-Zambrano, E.; Veicht, A.; Velkovska, J.; Vértesi, R.; Virius, M.; Vrba, V.; Vznuzdaev, E.; Wang, X. R.; Watanabe, D.; Watanabe, K.; Watanabe, Y.; Wei, F.; Wei, R.; Wessels, J.; White, S. N.; Winter, D.; Woody, C. L.; Wright, R. M.; Wysocki, M.; Yamaguchi, Y. L.; Yamaura, K.; Yang, R.; Yanovich, A.; Ying, J.; Yokkaichi, S.; You, Z.; Young, G. R.; Younus, I.; Yushmanov, I. E.; Zajc, W. A.; Zhou, S.; Phenix Collaboration
2014-09-01
High-energy proton- and deuteron-nucleus collisions provide an excellent tool for studying a wide array of physics effects, including modifications of parton distribution functions in nuclei, gluon saturation, and color neutralization and hadronization in a nuclear environment, among others. All of these effects are expected to have a significant dependence on the size of the nuclear target and the impact parameter of the collision, also known as the collision centrality. In this article, we detail a method for determining centrality classes in p (d)+A collisions via cuts on the multiplicity at backward rapidity (i.e., the nucleus-going direction) and for determining systematic uncertainties in this procedure. For d +Au collisions at √sNN =200 GeV we find that the connection to geometry is confirmed by measuring the fraction of events in which a neutron from the deuteron does not interact with the nucleus. As an application, we consider the nuclear modification factors Rp (d)+A, for which there is a bias in the measured centrality-dependent yields owing to autocorrelations between the process of interest and the backward-rapidity multiplicity. We determine the bias-correction factors within this framework. This method is further tested using the hijing Monte Carlo generator. We find that for d +Au collisions at √sNN =200 GeV, these bias corrections are small and vary by less than 5% (10%) up to pT=10 (20) GeV/c. In contrast, for p +Pb collisions at √sNN =5.02 TeV we find that these bias factors are an order of magnitude larger and strongly pT dependent, likely attributable to the larger effect of multiparton interactions.
Using ecological propensity score to adjust for missing confounders in small area studies.
Wang, Yingbo; Pirani, Monica; Hansell, Anna L; Richardson, Sylvia; Blangiardo, Marta
2017-11-09
Small area ecological studies are commonly used in epidemiology to assess the impact of area level risk factors on health outcomes when data are only available in an aggregated form. However, the resulting estimates are often biased due to unmeasured confounders, which typically are not available from the standard administrative registries used for these studies. Extra information on confounders can be provided through external data sets such as surveys or cohorts, where the data are available at the individual level rather than at the area level; however, such data typically lack the geographical coverage of administrative registries. We develop a framework of analysis which combines ecological and individual level data from different sources to provide an adjusted estimate of area level risk factors which is less biased. Our method (i) summarizes all available individual level confounders into an area level scalar variable, which we call ecological propensity score (EPS), (ii) implements a hierarchical structured approach to impute the values of EPS whenever they are missing, and (iii) includes the estimated and imputed EPS into the ecological regression linking the risk factors to the health outcome. Through a simulation study, we show that integrating individual level data into small area analyses via EPS is a promising method to reduce the bias intrinsic in ecological studies due to unmeasured confounders; we also apply the method to a real case study to evaluate the effect of air pollution on coronary heart disease hospital admissions in Greater London. © The Author 2017. Published by Oxford University Press.
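A simplified sketch of steps (i) and (iii) of the EPS idea follows (the hierarchical imputation of missing EPS values in step (ii) is omitted, and all column names are hypothetical): an individual-level exposure model fitted in the survey data is averaged within areas to give the EPS, which is then added as a covariate to the area-level regression.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hedged sketch of the ecological propensity score (EPS) idea.
# Hypothetical columns: 'area', 'exposed', 'smoking', 'deprivation' in an
# individual-level survey; 'area', 'pollution', 'admission_rate' in the
# area-level registry data.

def ecological_propensity_score(survey: pd.DataFrame) -> pd.Series:
    """Summarize individual-level confounders into one area-level scalar."""
    X = survey[["smoking", "deprivation"]]
    model = LogisticRegression().fit(X, survey["exposed"])
    survey = survey.assign(ps=model.predict_proba(X)[:, 1])
    return survey.groupby("area")["ps"].mean().rename("eps")

def adjusted_ecological_fit(areas: pd.DataFrame, eps: pd.Series) -> np.ndarray:
    """Area-level regression of the outcome on the risk factor, adjusted for EPS."""
    df = areas.join(eps, on="area")
    X = np.column_stack([np.ones(len(df)), df["pollution"], df["eps"]])
    beta, *_ = np.linalg.lstsq(X, df["admission_rate"].to_numpy(), rcond=None)
    return beta   # [intercept, risk-factor effect, EPS coefficient]
```

Any reasonable individual-level model and ecological regression could be substituted; the essential point is that the EPS carries the confounder information from the individual-level source into the ecological model.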
Data Linkage: A powerful research tool with potential problems
2010-01-01
Background Policy makers, clinicians and researchers are demonstrating increasing interest in using data linked from multiple sources to support measurement of clinical performance and patient health outcomes. However, the utility of data linkage may be compromised by sub-optimal or incomplete linkage, leading to systematic bias. In this study, we synthesize the evidence identifying participant or population characteristics that can influence the validity and completeness of data linkage and may be associated with systematic bias in reported outcomes. Methods A narrative review, using structured search methods, was undertaken. Key words "data linkage" and MeSH term "medical record linkage" were applied to Medline, EMBASE and CINAHL databases between 1991 and 2007. Abstract inclusion criteria were: the article attempted an empirical evaluation of methodological issues relating to data linkage and reported on patient characteristics, the study design included analysis of matched versus unmatched records, and the report was in English. Included articles were grouped thematically according to patient characteristics that were compared between matched and unmatched records. Results The search identified 1810 articles of which 33 (1.8%) met inclusion criteria. There was marked heterogeneity in study methods and factors investigated. Characteristics that were unevenly distributed among matched and unmatched records were: age (72% of studies), sex (50% of studies), race (64% of studies), geographical/hospital site (93% of studies), socio-economic status (82% of studies) and health status (72% of studies). Conclusion A number of relevant patient or population factors may be associated with incomplete data linkage resulting in systematic bias in reported clinical outcomes. Readers should consider these factors in interpreting the reported results of data linkage studies. PMID:21176171
Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction
Jian, Y; Planeta, B; Carson, R E
2016-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254
Evaluation of bias and variance in low-count OSEM list mode reconstruction
NASA Astrophysics Data System (ADS)
Jian, Y.; Planeta, B.; Carson, R. E.
2015-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.
Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.
2012-01-01
Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets imputed under our model can be investigated in additional subsequent analyses, our method will be useful for preparing data for applications in diverse contexts in population genetics and molecular ecology. PMID:22851645
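A heavily simplified, single-locus illustration of the EM idea follows (assuming Hardy-Weinberg equilibrium, no inbreeding, and one dropout rate shared across samples and loci, unlike the sample- and locus-specific model of the study): apparent homozygotes are partly true homozygotes and partly heterozygotes that lost one copy, and the E- and M-steps alternate between imputing that split and re-estimating the allele frequency and dropout rate.

```python
import numpy as np

def em_dropout(nAA, nAa, naa, nmiss, iters=500):
    """EM estimates of (p, g): frequency of allele A and per-copy dropout rate,
    for one biallelic locus under HWE with a single shared dropout rate."""
    p, g = 0.5, 0.1                              # starting values
    n = nAA + nAa + naa + nmiss
    for _ in range(iters):
        q = 1.0 - p
        pr_AA = p * p * (1 - g * g) + 2 * p * q * g * (1 - g)   # P(observe "AA")
        pr_aa = q * q * (1 - g * g) + 2 * p * q * g * (1 - g)   # P(observe "aa")
        # E-step: expected hidden heterozygotes among apparent homozygotes.
        hid_AA = nAA * 2 * p * q * g * (1 - g) / pr_AA
        hid_aa = naa * 2 * p * q * g * (1 - g) / pr_aa
        true_AA = nAA - hid_AA
        true_aa = naa - hid_aa
        # Expected number of dropped copies (out of 2n copies in total).
        dropped = (hid_AA + hid_aa                              # one lost copy each
                   + (true_AA + true_aa) * 2 * g / (1 + g)      # homozygotes, given observed
                   + 2 * nmiss)                                 # both copies lost
        # Expected number of A copies among the true genotypes.
        a_copies = 2 * true_AA + hid_AA + nAa + hid_aa + 2 * p * nmiss
        g = dropped / (2.0 * n)                                 # M-step
        p = a_copies / (2.0 * n)
    return p, g

# Counts roughly consistent with p = 0.3 and g = 0.2 in a sample of 1000:
p_hat, g_hat = em_dropout(nAA=154, nAa=269, naa=537, nmiss=40)
print(p_hat, g_hat, "corrected heterozygosity:", 2 * p_hat * (1 - p_hat))
```

The corrected heterozygosity 2p̂(1−p̂) then replaces the naive proportion of apparent heterozygotes, which is biased downward whenever the dropout rate is greater than zero.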
[Biases in the study of prognostic factors].
Delgado-Rodríguez, M
1999-01-01
The main objective is to detail the main biases in the study of prognostic factors. Confounding bias is illustrated with social class, a prognostic factor that is still debated. Within selection bias, several cases are discussed: response bias, especially frequent when patients from a clinical trial are used; shortcomings in the formation of an inception cohort; Neyman's fallacy (bias due to the duration of disease) when the study begins with a cross-sectional design; survivor treatment selection bias, arising because patients who live longer have more opportunity to receive the treatment; bias due to the inclusion of heterogeneous diagnostic groups; and selection bias due to differential information losses and the use of multivariate statistical procedures. Among biases arising during follow-up, an empirical rule to gauge the impact of the number of losses is given. Regarding information bias, the Will Rogers phenomenon and the usefulness of clinical databases are discussed. Lastly, a recommendation is given against using cutoff points derived from bivariate analyses to select the variables to be included in multivariate analysis.
Lossy compression of weak lensing data
Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...
2011-07-12
Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10⁻⁴. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
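As a hedged sketch of the square-root principle (this illustrates the idea only and is not the exact Bernstein et al. 2010 algorithm; the gain, sky level, and step size are assumed parameters): quantizing in the square-root domain makes the quantization error a fixed, small fraction of the Poisson shot noise at every signal level.

```python
import numpy as np

# Sketch of square-root lossy compression: quantize each pixel in the sqrt
# domain so the quantization error stays a fixed fraction of the Poisson noise.
# Illustrative only; 'gain', 'sky' and 'step' are assumed parameters.

def sqrt_compress(img_adu, gain=2.0, sky=100.0, step=0.5):
    """Map pixel values to small integers; step=0.5 keeps the added variance
    at roughly step**2/12 of the photon shot-noise variance."""
    electrons = np.clip((img_adu - sky) * gain, 0.0, None)
    return np.rint(2.0 * np.sqrt(electrons) / step).astype(np.int32)

def sqrt_decompress(codes, gain=2.0, sky=100.0, step=0.5):
    electrons = (step * codes / 2.0) ** 2
    return electrons / gain + sky

img = np.array([[100.0, 150.0], [400.0, 2500.0]])
rec = sqrt_decompress(sqrt_compress(img))
print(rec)   # close to img; the per-pixel error is well below the shot noise
```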
Sources of method bias in social science research and recommendations on how to control it.
Podsakoff, Philip M; MacKenzie, Scott B; Podsakoff, Nathan P
2012-01-01
Despite the concern that has been expressed about potential method biases, and the pervasiveness of research settings with the potential to produce them, there is disagreement about whether they really are a problem for researchers in the behavioral sciences. Therefore, the purpose of this review is to explore the current state of knowledge about method biases. First, we explore the meaning of the terms "method" and "method bias" and then we examine whether method biases influence all measures equally. Next, we review the evidence of the effects that method biases have on individual measures and on the covariation between different constructs. Following this, we evaluate the procedural and statistical remedies that have been used to control method biases and provide recommendations for minimizing method bias.
A general reconstruction of the recent expansion history of the universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vitenti, S.D.P.; Penna-Lima, M., E-mail: dias@iap.fr, E-mail: pennal@apc.in2p3.fr
Distance measurements are currently the most powerful tool to study the expansion history of the universe without specifying its matter content nor any theory of gravitation. Assuming only an isotropic, homogeneous and flat universe, in this work we introduce a model-independent method to reconstruct directly the deceleration function via a piecewise function. Including a penalty factor, we are able to vary continuously the complexity of the deceleration function from a linear case to an arbitrary (n+1)-knots spline interpolation. We carry out a Monte Carlo (MC) analysis to determine the best penalty factor, evaluating the bias-variance trade-off, given the uncertainties of the SDSS-II and SNLS supernova combined sample (JLA), compilations of baryon acoustic oscillation (BAO) and H(z) data. The bias-variance analysis is done for three fiducial models with different features in the deceleration curve. We perform the MC analysis generating mock catalogs and computing their best-fit. For each fiducial model, we test different reconstructions using, in each case, more than 10⁴ catalogs in a total of about 5 × 10⁵. This investigation proved to be essential in determining the best reconstruction to study these data. We show that, evaluating a single fiducial model, the conclusions about the bias-variance ratio are misleading. We determine the reconstruction method in which the bias represents at most 10% of the total uncertainty. In all statistical analyses, we fit the coefficients of the deceleration function along with four nuisance parameters of the supernova astrophysical model. For the full sample, we also fit H₀ and the sound horizon r_s(z_d) at the drag redshift. The bias-variance trade-off analysis shows that, apart from the deceleration function, all other estimators are unbiased. Finally, we apply the Ensemble Sampler Markov Chain Monte Carlo (ESMCMC) method to explore the posterior of the deceleration function up to redshift 1.3 (using only JLA) and 2.3 (JLA+BAO+H(z)). We obtain that the standard cosmological model agrees within 3σ level with the reconstructed results in the whole studied redshift intervals. Since our method is calibrated to minimize the bias, the error bars of the reconstructed functions are a good approximation for the total uncertainty.
Selection bias in rheumatic disease research.
Choi, Hyon K; Nguyen, Uyen-Sa; Niu, Jingbo; Danaei, Goodarz; Zhang, Yuqing
2014-07-01
The identification of modifiable risk factors for the development of rheumatic conditions and their sequelae is crucial for reducing the substantial worldwide burden of these diseases. However, the validity of such research can be threatened by sources of bias, including confounding, measurement and selection biases. In this Review, we discuss potentially major issues of selection bias--a type of bias frequently overshadowed by other bias and feasibility issues, despite being equally or more problematic--in key areas of rheumatic disease research. We present index event bias (a type of selection bias) as one of the potentially unifying reasons behind some unexpected findings, such as the 'risk factor paradox'--a phenomenon exemplified by the discrepant effects of certain risk factors on the development versus the progression of osteoarthritis (OA) or rheumatoid arthritis (RA). We also discuss potential selection biases owing to differential loss to follow-up in RA and OA research, as well as those due to the depletion of susceptibles (prevalent user bias) and immortal time bias. The lesson remains that selection bias can be ubiquitous and, therefore, has the potential to lead the field astray. Thus, we conclude with suggestions to help investigators avoid such issues and limit the impact on future rheumatology research.
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
Enjeti, Anoop; Granter, Neil; Ashraf, Asma; Fletcher, Linda; Branford, Susan; Rowlings, Philip; Dooley, Susan
2015-10-01
An automated cartridge-based detection system (GeneXpert; Cepheid) is being widely adopted in low throughput laboratories for monitoring BCR-ABL1 transcript in chronic myelogenous leukaemia. This Australian study evaluated the longitudinal performance specific characteristics of the automated system.The automated cartridge-based system was compared prospectively with the manual qRT-PCR-based reference method at SA Pathology, Adelaide, over a period of 2.5 years. A conversion factor determination was followed by four re-validations. Peripheral blood samples (n = 129) with international scale (IS) values within detectable range were selected for assessment. The mean bias, proportion of results within specified fold difference (2-, 3- and 5-fold), the concordance rate of major molecular remission (MMR) and concordance across a range of IS values on paired samples were evaluated.The initial conversion factor for the automated system was determined as 0.43. Except for the second re-validation, where a negative bias of 1.9-fold was detected, all other biases fell within desirable limits. A cartridge-specific conversion factor and efficiency value was introduced and the conversion factor was confirmed to be stable in subsequent re-validation cycles. Concordance with the reference method/laboratory at >0.1-≤10 IS was 78.2% and at ≤0.001 was 80%, compared to 86.8% in the >0.01-≤0.1 IS range. The overall and MMR concordance were 85.7% and 94% respectively, for samples that fell within ± 5-fold of the reference laboratory value over the entire period of study.Conversion factor and performance specific characteristics for the automated system were longitudinally stable in the clinically relevant range, following introduction by the manufacturer of lot specific efficiency values.
A multi-source precipitation approach to fill gaps over a radar precipitation field
NASA Astrophysics Data System (ADS)
Tesfagiorgis, K. B.; Mahani, S. E.; Khanbilvardi, R.
2012-12-01
Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to spatial limitations of radar and gauge products. The present work develops an approach to seamlessly blend satellite, radar, climatological and gauge precipitation products to fill gaps over ground-based radar precipitation fields. To mix different precipitation products, the biases of the products relative to one another must be removed. For bias correction, the study used an ensemble-based method which aims to estimate spatially varying multiplicative biases in SPEs using a radar rainfall product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area. Spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. A weighted Successive Correction Method (SCM) is proposed to merge the error-corrected satellite and radar rainfall estimates. In addition to SCM, we use a Bayesian spatial method for merging the gap-free radar with rain gauges, climatological rainfall sources and SPEs. We demonstrate the method using SPE Hydro-Estimator (HE), radar-based Stage-II, a climatological product PRISM and rain gauge dataset for several rain events from 2006 to 2008 over three different geographical locations of the United States. Results show that the SCM method, in combination with the Bayesian spatial model, produced a precipitation product in good agreement with independent measurements. The study implies that using the available radar pixels surrounding the gap area, rain gauge, PRISM and satellite products, a radar-like product is achievable over radar gap areas that benefits the scientific community.
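A simplified sketch of the gap-filling idea follows (illustrative only: the study's ensemble sampling of bias factors, the weighted SCM, and the Bayesian merge with gauges and PRISM are reduced here to a smoothed multiplicative correction and a plain gap fill; the function and variable names are hypothetical).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Simplified sketch of filling radar gaps with a bias-corrected satellite field.

def fill_radar_gaps(radar, satellite, rain_thresh=0.1, sigma=3.0):
    radar = radar.astype(float)
    valid = np.isfinite(radar)
    rainy = valid & (radar > rain_thresh) & (satellite > rain_thresh)
    # Multiplicative bias of the satellite relative to radar at sampled rainy pixels.
    bias = np.ones_like(radar)
    bias[rainy] = radar[rainy] / satellite[rainy]
    bias_field = gaussian_filter(np.where(rainy, bias, 1.0), sigma=sigma)
    corrected = satellite * bias_field
    # Keep radar where available; use the corrected satellite field over the gaps.
    return np.where(valid, radar, corrected)
```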
Long-term biases in geomagnetic K and aa indices
Love, J.J.
2011-01-01
Analysis is made of the geomagnetic-activity aa index and its source K-index data from groups of ground-based observatories in Britain and Australia, 1868.0-2009.0, solar cycles 11-23. The K data show persistent biases, especially for high (low) K-activity levels at British (Australian) observatories. From examination of multiple subsets of the K data we infer that the biases are not predominantly the result of changes in observatory location, localized induced magnetotelluric currents, changes in magnetometer technology, or the modernization of K-value estimation methods. Instead, the biases appear to be artifacts of the latitude-dependent scaling used to assign K values to particular local levels of geomagnetic activity. The biases are not effectively removed by weighting factors used to estimate aa. We show that long-term averages of the aa index, such as annual averages, are dominated by medium-level geomagnetic activity levels having K values of 3 and 4. © 2011 Author(s).
Verification of sex from harvested sea otters using DNA testing
Scribner, Kim T.; Green, Ben A.; Gorbics, Carol; Bodkin, James L.
2005-01-01
We used molecular genetic methods to determine the sex of 138 sea otters (Enhydra lutris) harvested from 3 regions of Alaska from 1994 to 1997, to assess the accuracy of post‐harvest field‐sexing. We also tested each of a series of factors associated with errors in field‐sexing of sea otters, including male or female bias, age‐class bias, regional bias, and bias associated with hunt characteristics. Blind control results indicated that sex was determined with 100% accuracy using polymerase chain reaction (PCR) amplification using primers that co‐amplify the zinc finger‐Y‐X gene, located on both the mammalian Y‐ and X‐chromosomes, and Testes Determining Factor (TDF), located on the mammalian Y‐chromosome. DNA‐based sexing revealed that 12.3% of the harvested sea otters were incorrectly sexed in the field, with most errors (13 of 17) occurring as males incorrectly reported as females. Thus, female harvest was overestimated. Using logistic regression analysis, we detected no statistical association of incorrect determination of sex in the field with age class, hunt region, or hunt type. The error in field‐sexing appears to be random, at least with respect to the variables evaluated in this study.
Leimu, Roosa; Koricheva, Julia
2004-01-01
Temporal changes in the magnitude of research findings have recently been recognized as a general phenomenon in ecology, and have been attributed to the delayed publication of non-significant results and disconfirming evidence. Here we introduce a method of cumulative meta-analysis which allows detection of both temporal trends and publication bias in the ecological literature. To illustrate the application of the method, we used two datasets from recently conducted meta-analyses of studies testing two plant defence theories. Our results revealed three phases in the evolution of the treatment effects. Early studies strongly supported the hypothesis tested, but the magnitude of the effect decreased considerably in later studies. In the latest studies, a trend towards an increase in effect size was observed. In one of the datasets, a cumulative meta-analysis revealed publication bias against studies reporting disconfirming evidence; such studies were published in journals with a lower impact factor compared to studies with results supporting the hypothesis tested. Correlation analysis revealed neither temporal trends nor evidence of publication bias in the datasets analysed. We thus suggest that cumulative meta-analysis should be used as a visual aid to detect temporal trends and publication bias in research findings in ecology in addition to the correlative approach. PMID:15347521
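A minimal sketch of the cumulative meta-analysis computation follows (fixed-effect weighting is assumed; the effect sizes and variances below are hypothetical, chosen only to mimic the declining-then-recovering pattern described).

```python
import numpy as np

# Cumulative meta-analysis: studies are sorted by publication year and a
# fixed-effect pooled estimate is recomputed each time a study is added, so
# temporal drifts in the pooled effect size become visible.

def cumulative_meta(years, effects, variances):
    order = np.argsort(years)
    y = np.asarray(effects, float)[order]
    w = 1.0 / np.asarray(variances, float)[order]
    cum_effect = np.cumsum(w * y) / np.cumsum(w)
    cum_se = np.sqrt(1.0 / np.cumsum(w))
    return np.asarray(years)[order], cum_effect, cum_se

# Hypothetical effect sizes (e.g. Hedges' d) from studies of a plant-defence theory.
yrs, eff, se = cumulative_meta(
    years=[1985, 1988, 1992, 1997, 2001],
    effects=[0.9, 0.7, 0.4, 0.2, 0.3],
    variances=[0.10, 0.08, 0.05, 0.04, 0.06])
for t, e, s in zip(yrs, eff, se):
    print(f"{t}: pooled effect = {e:+.2f} +/- {1.96 * s:.2f}")
```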
Tsukahara, Keita; Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Nishimaki-Mogami, Tomoko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi
2016-01-01
A real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) soybean event, MON87701. First, a standard plasmid for MON87701 quantification was constructed. The conversion factor (Cf) required to calculate the amount of genetically modified organism (GMO) was experimentally determined for a real-time PCR instrument. The determined Cf for the real-time PCR instrument was 1.24. For the evaluation of the developed method, a blind test was carried out in an inter-laboratory trial. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSDr), respectively. The determined biases and the RSDr values were less than 30% and 13%, respectively, at all evaluated concentrations. The limit of quantitation of the method was 0.5%, and the developed method would thus be applicable for practical analyses for the detection and quantification of MON87701.
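The conversion-factor arithmetic commonly used in such event-specific assays looks roughly like the sketch below (a generic illustration, not necessarily the exact protocol of this study); Cf = 1.24 is the instrument-specific value reported in the abstract.

```python
# Common conversion-factor arithmetic for event-specific GMO quantification
# (a sketch of the usual approach, not necessarily this paper's exact protocol).

CF_MON87701 = 1.24   # event/endogenous copy ratio in pure GM material (from abstract)

def gmo_percent(event_copies: float, endogenous_copies: float,
                cf: float = CF_MON87701) -> float:
    """GMO content (%) from measured copy numbers of the event-specific and
    taxon-specific (endogenous) targets."""
    return (event_copies / endogenous_copies) / cf * 100.0

# Example: 62 event copies against 10,000 soybean reference copies
print(f"{gmo_percent(62, 10_000):.2f} % GMO")   # ~0.5 %, the stated limit of quantitation
```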
Selection bias in rheumatic disease research
Choi, Hyon K.; Nguyen, Uyen-Sa; Niu, Jingbo; Danaei, Goodarz; Zhang, Yuqing
2014-01-01
The identification of modifiable risk factors for the development of rheumatic conditions and their sequelae is crucial for reducing the substantial worldwide burden of these diseases. However, the validity of such research can be threatened by sources of bias, including confounding, measurement and selection biases. In this Review, we discuss potentially major issues of selection bias—a type of bias frequently overshadowed by other bias and feasibility issues, despite being equally or more problematic—in key areas of rheumatic disease research. We present index event bias (a type of selection bias) as one of the potentially unifying reasons behind some unexpected findings, such as the ‘risk factor paradox’—a phenomenon exemplified by the discrepant effects of certain risk factors on the development versus the progression of osteoarthritis (OA) or rheumatoid arthritis (RA). We also discuss potential selection biases owing to differential loss to follow-up in RA and OA research, as well as those due to the depletion of susceptibles (prevalent user bias) and immortal time bias. The lesson remains that selection bias can be ubiquitous and, therefore, has the potential to lead the field astray. Thus, we conclude with suggestions to help investigators avoid such issues and limit the impact on future rheumatology research. PMID:24686510
Podsakoff, Philip M; MacKenzie, Scott B; Lee, Jeong-Yeon; Podsakoff, Nathan P
2003-10-01
Interest in the problem of method biases has a long history in the behavioral sciences. Despite this, a comprehensive summary of the potential sources of method biases and how to control for them does not exist. Therefore, the purpose of this article is to examine the extent to which method biases influence behavioral research results, identify potential sources of method biases, discuss the cognitive processes through which method biases influence responses to measures, evaluate the many different procedural and statistical techniques that can be used to control method biases, and provide recommendations for how to select appropriate procedural and statistical remedies for different types of research settings.
Quality of reporting and risk of bias in therapeutic otolaryngology publications.
Kaper, N M; Swart, K M A; Grolman, W; Van Der Heijden, G J M G
2018-01-01
High-quality trials have the potential to influence clinical practice. Ten otolaryngology journals with the highest 2011 impact factors were selected and publications from 2010 were extracted. From all medical journals, the 20 highest impact factor journals were selected, and publications related to otolaryngology for 2010 and 2011 were extracted. For all publications, the reporting quality and risk of bias were assessed. The impact factor was 1.8-2.8 for otolaryngology journals and 6.0-101.8 for medical journals. Of 1500 otolaryngology journal articles, 262 were therapeutic studies; 94 had a high reporting quality and 5 a low risk of bias. Of 10 967 medical journal articles, 76 were therapeutic studies; 57 had a high reporting quality and 8 a low risk of bias. Reporting quality was high for 45 per cent of otolaryngology-related publications and 9 per cent met quality standards. General journals had higher impact factors than otolaryngology journals. Reporting quality was higher and risk of bias lower in general journals than in otolaryngology journals. Nevertheless, 76 per cent of articles in high impact factor journals carried a high risk of bias. Better reported and designed studies are the goal, with less risk of bias, especially in otolaryngology journals.
Why all randomised controlled trials produce biased results.
Krauss, Alexander
2018-06-01
Randomised controlled trials (RCTs) are commonly viewed as the best research method to inform public health and social policy. Usually they are thought of as providing the most rigorous evidence of a treatment's effectiveness without strong assumptions, biases and limitations. This is the first study to examine that hypothesis by assessing the 10 most cited RCT studies worldwide. These 10 RCT studies with the highest number of citations in any journal (up to June 2016) were identified by searching Scopus (the largest database of peer-reviewed journals). This study shows that these world-leading RCTs that have influenced policy produce biased results by illustrating that participants' background traits that affect outcomes are often poorly distributed between trial groups, that the trials often neglect alternative factors contributing to their main reported outcome and, among many other issues, that the trials are often only partially blinded or unblinded. The study here also identifies a number of novel and important assumptions, biases and limitations not yet thoroughly discussed in existing studies that arise when designing, implementing and analysing trials. Researchers and policymakers need to become better aware of the broader set of assumptions, biases and limitations in trials. Journals need to also begin requiring researchers to outline them in their studies. We need to furthermore better use RCTs together with other research methods. Key messages RCTs face a range of strong assumptions, biases and limitations that have not yet all been thoroughly discussed in the literature. This study assesses the 10 most cited RCTs worldwide and shows that trials inevitably produce bias. Trials involve complex processes - from randomising, blinding and controlling, to implementing treatments, monitoring participants etc. - that require many decisions and steps at different levels that bring their own assumptions and degree of bias to results.
Cheng, Irene; Zhang, Leiming
2017-01-17
Gaseous oxidized mercury (GOM) measurement uncertainties undoubtedly impact the understanding of mercury biogeochemical cycling; however, there is a lack of consensus on the uncertainty magnitude. The numerical method presented in this study provides an alternative means of estimating the uncertainties of previous GOM measurements. Weekly GOM in ambient air was predicted from measured weekly mercury wet deposition using a scavenging ratio approach, and compared against field measurements of 2-4 hourly GOM to estimate the measurement biases of the Tekran speciation instruments at 13 Atmospheric Mercury Network (AMNet) sites. Multiyear average GOM measurements were estimated to be biased low by more than a factor of 2 at six sites, between a factor of 1.5 and 1.8 at six other sites, and below a factor of 1.3 at one site. The differences between predicted and observed values were significantly larger during summer than in other seasons, potentially because of higher ozone concentrations that may interfere with GOM sampling. Analysis of the data collected over six years at multiple sites suggests a systematic bias in GOM measurements, supporting the need for further investigation of measurement technologies and identifying the chemical composition of GOM.
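A rough sketch of the scavenging-ratio prediction follows (illustrative only: the exact formulation, units, and scavenging-ratio values used in the study may differ, and the numbers below are hypothetical).

```python
# Hedged sketch of a scavenging-ratio estimate of air GOM.  The scavenging
# ratio used here is an assumed order-of-magnitude value, not the study's.

def predicted_gom_air(wet_dep_ng_m2, precip_mm, scavenging_ratio=7.5e5):
    """Predict weekly-average GOM in air (ng/m3).
    wet_dep_ng_m2    : weekly Hg wet deposition (ng per m2)
    precip_mm        : weekly precipitation depth (mm)
    scavenging_ratio : assumed dimensionless ratio of the concentration in
                       precipitation to the concentration in air."""
    precip_m = precip_mm / 1000.0                  # mm of water -> m
    c_precip = wet_dep_ng_m2 / precip_m            # ng per m3 of rainwater
    return c_precip / scavenging_ratio             # ng per m3 of air

pred = predicted_gom_air(wet_dep_ng_m2=150.0, precip_mm=20.0)
measured = 0.004                                   # ng/m3 (hypothetical Tekran value)
print(f"predicted {pred:.4f} ng/m3 -> apparent low bias factor {pred / measured:.1f}")
```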
NASA Astrophysics Data System (ADS)
Simon, Patrick; Hilbert, Stefan
2018-05-01
Galaxies are biased tracers of the matter density on cosmological scales. For future tests of galaxy models, we refine and assess a method to measure galaxy biasing as a function of physical scale k with weak gravitational lensing. This method enables us to reconstruct the galaxy bias factor b(k) as well as the galaxy-matter correlation r(k) on spatial scales between 0.01 h Mpc⁻¹ ≲ k ≲ 10 h Mpc⁻¹ for redshift-binned lens galaxies below redshift z ≲ 0.6. In the refinement, we account for an intrinsic alignment of source ellipticities, and we correct for the magnification bias of the lens galaxies, relevant for the galaxy-galaxy lensing signal, to improve the accuracy of the reconstructed r(k). For simulated data, the reconstructions achieve an accuracy of 3-7% (68% confidence level) over the above k-range for a survey area and a typical depth of contemporary ground-based surveys. Realistically the accuracy is, however, probably reduced to about 10-15%, mainly by systematic uncertainties in the assumed intrinsic source alignment, the fiducial cosmology, and the redshift distributions of lens and source galaxies (in that order). Furthermore, our reconstruction technique employs physical templates for b(k) and r(k) that elucidate the impact of central galaxies and the halo-occupation statistics of satellite galaxies on the scale-dependence of galaxy bias, which we discuss in the paper. In a first demonstration, we apply this method to previous measurements in the Garching-Bonn Deep Survey and give a physical interpretation of the lens population.
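The bias factor and galaxy-matter correlation referred to here are conventionally defined from the galaxy, matter, and galaxy-matter cross power spectra (standard notation; the paper's conventions may differ in detail):

```latex
% P_g, P_m, P_gm: galaxy, matter and galaxy-matter cross power spectra at wavenumber k
b(k) = \sqrt{\frac{P_{\mathrm{g}}(k)}{P_{\mathrm{m}}(k)}} ,
\qquad
r(k) = \frac{P_{\mathrm{gm}}(k)}{\sqrt{P_{\mathrm{g}}(k)\,P_{\mathrm{m}}(k)}}
```

With these definitions, b(k) = r(k) = 1 corresponds to an unbiased, perfectly correlated tracer; galaxy clustering alone constrains b(k), while galaxy-galaxy lensing supplies the cross term P_gm needed for r(k).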
Choi, Du Hyung; Lim, Jun Yeul; Shin, Sangmun; Choi, Won Jun; Jeong, Seong Hoon; Lee, Sangkil
2014-10-01
To investigate the effects of hydrophilic polymers on the matrix system, an experimental design method was developed to integrate response surface methodology and time series modeling. Moreover, the relationships among polymers on the matrix system were studied with the evaluation of physical properties including water uptake, mass loss, diffusion, and gelling index. A mixture simplex lattice design was proposed while considering eight input control factors: polyethylene glycol 6000 (x1), polyethylene oxide (PEO) N-10 (x2), PEO 301 (x3), PEO coagulant (x4), PEO 303 (x5), hydroxypropyl methylcellulose (HPMC) 100SR (x6), HPMC 4000SR (x7), and HPMC 10⁵ SR (x8). With the modeling, optimal formulations were obtained depending on the four types of targets. The optimal formulations showed that four factors (x1, x2, x3, and x8) were significant and the other four input factors (x4, x5, x6, and x7) were not significant based on drug release profiles. Moreover, the optimization results were analyzed with estimated values, target values, absolute biases, and relative biases based on observed times for the drug release rates with four different targets. The result showed that optimal solutions and target values had consistent patterns with small biases. On the basis of the physical properties of the optimal solutions, the type and ratio of the hydrophilic polymer and the relationships between polymers significantly influenced the physical properties of the system and drug release. This experimental design method is very useful in formulating a matrix system with optimal drug release. Moreover, it can distinctly confirm the relationships between excipients and the effects on the system with extensive and intensive evaluations. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Zhao, Huiying; Nyholt, Dale R; Yang, Yuanhao; Wang, Jihua; Yang, Yuedong
2017-06-14
Genome-wide association studies (GWAS) have successfully identified single variants associated with diseases. To increase the power of GWAS, gene-based and pathway-based tests are commonly employed to detect more risk factors. However, the gene- and pathway-based association tests may be biased towards genes or pathways containing a large number of single-nucleotide polymorphisms (SNPs) with small P-values caused by high linkage disequilibrium (LD) correlations. To address such bias, numerous pathway-based methods have been developed. Here we propose a novel method, DGAT-path, to divide all SNPs assigned to genes in each pathway into LD blocks, and to sum the chi-square statistics of LD blocks for assessing the significance of the pathway by permutation tests. The method was proven robust, with a type I error rate more than 1.6 times lower than that of other methods. Meanwhile, the method displays higher power and is not biased by pathway size. The applications to the GWAS summary statistics for schizophrenia and breast cancer indicate that the detected top pathways contain more genes close to associated SNPs than other methods. As a result, the method identified 17 and 12 significant pathways containing 20 and 21 novel associated genes, respectively, for the two diseases. The method is available online at http://sparks-lab.org/server/DGAT-path .
The problem of bias when nursing facility staff administer customer satisfaction surveys.
Hodlewsky, R Tamara; Decker, Frederic H
2002-10-01
Customer satisfaction instruments are being used with increasing frequency to assess and monitor residents' assessments of quality of care in nursing facilities. There is no standard protocol, however, for how or by whom the instruments should be administered when anonymous, written responses are not feasible. Researchers often use outside interviewers to assess satisfaction, but cost considerations may limit the extent to which facilities are able to hire outside interviewers on a regular basis. This study was designed to investigate the existence and extent of any bias caused by staff administering customer satisfaction surveys. Customer satisfaction data were collected in 1998 from 265 residents in 21 nursing facilities in North Dakota. Half the residents in each facility were interviewed by staff members and the other half by outside consultants; scores were compared by interviewer type. In addition to a tabulation of raw scores, ordinary least-squares analysis with facility fixed effects was used to control for resident characteristics and unmeasured facility-level factors that could influence scores. Significant positive bias was found when staff members interviewed residents. The bias was not limited to questions directly affecting staff responsibilities but applied across all types of issues. The bias was robust under varying constructions of satisfaction and dissatisfaction. A uniform method of survey administration appears to be important if satisfaction data are to be used to compare facilities. Bias is an important factor that should be considered and weighed against the costs of obtaining outside interviewers when assessing customer satisfaction among long term care residents.
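The analytic approach described above, ordinary least squares with facility fixed effects and resident-level controls, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the file name and covariate names (satisfaction_score, staff_interviewer, age, sex, cognitive_status, facility) are hypothetical.
```python
# Illustrative sketch: OLS with facility fixed effects comparing satisfaction scores
# by interviewer type. Column names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("satisfaction.csv")  # one row per interviewed resident

# staff_interviewer: 1 if a staff member conducted the interview, 0 if an outside consultant.
# C(facility) absorbs unmeasured facility-level factors; resident covariates are controls.
model = smf.ols(
    "satisfaction_score ~ staff_interviewer + age + C(sex) + cognitive_status + C(facility)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["facility"]})

print(model.params["staff_interviewer"])              # estimated interviewer-type bias
print(model.conf_int().loc["staff_interviewer"])      # its confidence interval
```
A positive, significant coefficient on the interviewer-type indicator would correspond to the staff-administration bias reported in the study.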
Wijaya, I Putu Mahendra; Nie, Tey Ju; Rodriguez, Isabel; Mhaisalkar, Subodh G
2010-06-07
The advent of a carbon nanotube liquid-gated field-effect transistor (LGFET) for biosensing applications allows the possibility of real-time and label-free detection of biomolecular interactions. The use of an aqueous solution as the dielectric, however, has traditionally restricted the operating gate bias (VG) within |VG| < 1 V, due to the electrolysis of water. Here, we propose pulsed-gating as a facile method to extend the operation window of LGFETs to |VG| > 1 V. A comparison between simulation and experimental results reveals that at voltages in excess of 1 V, the LGFET sensing mechanism has a contribution from two factors: electrostatic gating as well as capacitance modulation. Furthermore, the large IDS drop observed in the |VG| > 1 V region indicates that pulsed-gating may be readily employed as a simple method to amplify the signal in the LGFET and to push the detection limit down to attomolar concentration levels, an order of magnitude improvement over conventionally employed DC VG biasing.
Dynamic Histogram Analysis To Determine Free Energies and Rates from Biased Simulations.
Stelzl, Lukas S; Kells, Adam; Rosta, Edina; Hummer, Gerhard
2017-12-12
We present an algorithm to calculate free energies and rates from molecular simulations on biased potential energy surfaces. As input, it uses the accumulated times spent in each state or bin of a histogram and counts of transitions between them. Optimal unbiased equilibrium free energies for each of the states/bins are then obtained by maximizing the likelihood of a master equation (i.e., first-order kinetic rate model). The resulting free energies also determine the optimal rate coefficients for transitions between the states or bins on the biased potentials. Unbiased rates can be estimated, e.g., by imposing a linear free energy condition in the likelihood maximization. The resulting "dynamic histogram analysis method extended to detailed balance" (DHAMed) builds on the DHAM method. It is also closely related to the transition-based reweighting analysis method (TRAM) and the discrete TRAM (dTRAM). However, in the continuous-time formulation of DHAMed, the detailed balance constraints are more easily accounted for, resulting in compact expressions amenable to efficient numerical treatment. DHAMed produces accurate free energies in cases where the common weighted-histogram analysis method (WHAM) for umbrella sampling fails because of slow dynamics within the windows. Even in the limit of completely uncorrelated data, where WHAM is optimal in the maximum-likelihood sense, DHAMed results are nearly indistinguishable. We illustrate DHAMed with applications to ion channel conduction, RNA duplex formation, α-helix folding, and rate calculations from accelerated molecular dynamics. DHAMed can also be used to construct Markov state models from biased or replica-exchange molecular dynamics simulations. By using binless WHAM formulated as a numerical minimization problem, the bias factors for the individual states can be determined efficiently in a preprocessing step and, if needed, optimized globally afterward.
Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5
NASA Astrophysics Data System (ADS)
Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.
2014-12-01
MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not account properly for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets as well as high resolution retrievals in the NYC area illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption of using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method to obtain bias correction of the MODIS AOD using multiple factors including surface reflectivity at 2130 nm, sun-view geometrical factors and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed illustrating that among the land classes, the urban pixels constitute the largest deviations from the operational model.
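A minimal sketch of the kind of regression-based bias correction described above is given below, using a small neural network to map satellite AOD and auxiliary predictors to ground-truth AOD. It is not the authors' DragonNET/WRF pipeline; the predictors, their ranges, and the synthetic bias model are assumptions for illustration only.
```python
# Sketch of a neural-network bias correction of satellite AOD against ground-based AOD.
# Feature list mirrors the abstract (surface reflectivity, sun-view geometry, land class),
# but all numbers here are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0.0, 0.3, n),    # surface reflectivity at 2130 nm
    rng.uniform(0.0, 60.0, n),   # solar zenith angle (deg)
    rng.uniform(0.0, 60.0, n),   # view zenith angle (deg)
    rng.integers(0, 5, n),       # land-class code
    rng.uniform(0.0, 1.0, n),    # satellite-retrieved AOD
])
# Synthetic "ground truth" AOD with a reflectivity-dependent retrieval bias
y_true = X[:, 4] - 0.5 * X[:, 0] + rng.normal(0, 0.02, n)

X_train, X_test, y_train, y_test = train_test_split(X, y_true, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
corrected_aod = model.predict(X_test)

print("RMSE before:", np.sqrt(np.mean((X_test[:, 4] - y_test) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((corrected_aod - y_test) ** 2)))
```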
Temperature dependent electrical characteristics of Zn/ZnSe/n-GaAs/In structure
NASA Astrophysics Data System (ADS)
Sağlam, M.; Güzeldir, B.
2016-04-01
We report a study of the I-V characteristics of a Zn/ZnSe/n-GaAs/In sandwich structure, prepared by the Successive Ionic Layer Adsorption and Reaction (SILAR) method, over a wide temperature range of 80-300 K in steps of 20 K. The main electrical parameters, such as the ideality factor n and the zero-bias barrier height determined from the forward-bias I-V characteristics, were found to depend strongly on temperature; in particular, n decreased with increasing temperature. The temperature dependence of the ideality factor and barrier height values has been attributed to the presence of lateral inhomogeneities of the barrier height. Furthermore, the series resistance has been calculated from the I-V measurements as a function of temperature.
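The extraction of the ideality factor and zero-bias barrier height from a forward-bias I-V curve is usually done with the thermionic-emission model, as sketched below. This is a generic illustration rather than the paper's analysis; the Richardson constant and diode area are assumed values, not taken from the study.
```python
# Sketch: ideality factor n and zero-bias barrier height from forward-bias I-V data
# via the thermionic-emission relation ln(I) = ln(I0) + qV/(n k T).
# Richardson constant A_star and area are assumptions for illustration.
import numpy as np

q = 1.602176634e-19      # C
k_B = 1.380649e-23       # J/K

def diode_parameters(V, I, T, area_cm2=7.85e-3, A_star=8.16):
    slope, ln_I0 = np.polyfit(V, np.log(I), 1)        # linear fit of ln(I) vs V
    n = q / (slope * k_B * T)                          # ideality factor
    I0 = np.exp(ln_I0)                                 # saturation current (A)
    phi_b = (k_B * T / q) * np.log(A_star * area_cm2 * T**2 / I0)  # barrier height (eV)
    return n, phi_b

# Synthetic example at 300 K with n = 1.2
T = 300.0
V = np.linspace(0.1, 0.4, 20)
I = 1e-9 * np.exp(q * V / (1.2 * k_B * T))
print(diode_parameters(V, I, T))
```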
Puhl, Rebecca M.; Luedicke, Joerg; Grilo, Carlos M.
2013-01-01
Objective This study examined weight bias among students training in health disciplines and its associations with their perceptions about treating patients with obesity, causes of obesity, and observations of weight bias by instructors and peers. Design and Methods Students (N = 107) enrolled in a post-graduate health discipline (Physician Associate, Clinical Psychology, Psychiatric Residency) completed anonymous questionnaires to assess the above variables. Results Students reported that patients with obesity are a common target of negative attitudes and derogatory humor by peers (63%), health-care providers (65%), and instructors (40%). Although 80% of students felt confident to treat obesity, many reported that patients with obesity lack motivation to make changes (33%), lead to feelings of frustration (36%), and are noncompliant with treatment (36%). Students with higher weight bias expressed greater frustration in these areas. The effect of students’ weight bias on expectations for treatment compliance of patients with obesity was partially mediated by beliefs that obesity is caused by behavioral factors. Conclusions Weight bias is commonly observed by students in health disciplines, who themselves report frustrations and stereotypes about treating patients with obesity. These findings contribute new knowledge about weight bias among students and provide several targets for medical training and education. PMID:24124078
Fourcade, Yoan; Engler, Jan O.; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one “virtual” derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases. PMID:24818607
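The "systematic sampling of records" correction that performed best in this comparison amounts to spatially thinning the occurrence data before model fitting, for example by keeping one record per grid cell. A minimal sketch is given below; the grid resolution, column names, and coordinates are illustrative assumptions, not the study's data.
```python
# Sketch of systematic sampling of occurrence records: retain at most one record
# per cell of a regular longitude/latitude grid before fitting the SDM.
import numpy as np
import pandas as pd

def systematic_sample(records: pd.DataFrame, cell_size_deg: float = 0.1) -> pd.DataFrame:
    """Keep one record per grid cell (cell size in degrees)."""
    gx = np.floor(records["lon"] / cell_size_deg)
    gy = np.floor(records["lat"] / cell_size_deg)
    return (records.assign(_cell=list(zip(gx, gy)))
                   .groupby("_cell", as_index=False)
                   .first()
                   .drop(columns="_cell"))

occ = pd.DataFrame({
    "lon": [8.01, 8.02, 8.50, 9.10, 9.11],
    "lat": [50.01, 50.02, 50.40, 51.00, 51.01],
})
print(systematic_sample(occ, cell_size_deg=0.1))  # nearby duplicates collapse to one record per cell
```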
Implicit Motivational Processes Underlying Smoking in American and Dutch Adolescents
Larsen, Helle; Kong, Grace; Becker, Daniela; Cousijn, Janna; Boendermaker, Wouter; Cavallo, Dana; Krishnan-Sarin, Suchitra; Wiers, Reinout
2014-01-01
Introduction: Research demonstrates that cognitive biases toward drug-related stimuli are correlated with substance use. This study aimed to investigate differences in cognitive biases (i.e., approach bias, attentional bias, and memory associations) between smoking and non-smoking adolescents in the US and the Netherlands. Within the group of smokers, we examined the relative predictive value of the cognitive biases and impulsivity related constructs (including inhibition skills, working memory, and risk taking) on daily smoking and nicotine dependence. Method: A total of 125 American and Dutch adolescent smokers (n = 67) and non-smokers (n = 58) between 13 and 18 years old participated. Participants completed the smoking approach–avoidance task, the classical and emotional Stroop task, brief implicit associations task, balloon analog risk task, the self-ordering pointing task, and a questionnaire assessing level of nicotine dependence and smoking behavior. Results: The analytical sample consisted of 56 Dutch adolescents (27 smokers and 29 non-smokers) and 37 American adolescents (19 smokers and 18 non-smokers). No differences in cognitive biases between smokers and non-smokers were found. Generally, Dutch adolescents demonstrated an avoidance bias toward both smoking and neutral stimuli whereas the American adolescents did not demonstrate a bias. Within the group of smokers, regression analyses showed that stronger attentional bias and weaker inhibition skills predicted greater nicotine dependence while weak working memory predicted more daily cigarette use. Conclusion: Attentional bias, inhibition skills, and working memory might be important factors explaining smoking in adolescence. Cultural differences in approach–avoidance bias should be considered in future research. PMID:24904435
Radio weak lensing shear measurement in the visibility domain - II. Source extraction
NASA Astrophysics Data System (ADS)
Rivi, M.; Miller, L.
2018-05-01
This paper extends the method introduced in Rivi et al. (2016b) to measure galaxy ellipticities in the visibility domain for radio weak lensing surveys. In that paper, we focused on the development and testing of the method for the simple case of individual galaxies located at the phase centre, and proposed to extend it to the realistic case of many sources in the field of view by isolating visibilities of each source with a faceting technique. In this second paper, we present a detailed algorithm for source extraction in the visibility domain and show its effectiveness as a function of the source number density by running simulations of SKA1-MID observations in the band 950-1150 MHz and comparing original and measured values of galaxies' ellipticities. Shear measurements from a realistic population of 10^4 galaxies randomly located in a field of view of 1 deg^2 (i.e. the source density expected for the current radio weak lensing survey proposal with SKA1) are also performed. At SNR ≥ 10, the multiplicative bias is only a factor of 1.5 worse than that found when analysing individual sources, and is still comparable to the bias values reported for similar measurement methods at optical wavelengths. The additive bias is unchanged from the case of individual sources, but it is significantly larger than typically found in optical surveys. This bias depends on the shape of the uv coverage and we suggest that a uv-plane weighting scheme to produce a more isotropic shape could reduce and control additive bias.
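Multiplicative and additive shear biases of the kind quoted above are commonly estimated by regressing measured shear on the true input shear, g_meas = (1 + m) g_true + c. The sketch below illustrates that fit on synthetic numbers; it is not the paper's simulation pipeline.
```python
# Sketch: estimate multiplicative (m) and additive (c) shear bias by a linear fit
# of measured shear against true (input) shear. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(1)
g_true = rng.uniform(-0.05, 0.05, 5000)
g_meas = 1.01 * g_true + 0.002 + rng.normal(0, 0.01, 5000)   # noisy measurements

slope, intercept = np.polyfit(g_true, g_meas, 1)
m = slope - 1.0    # multiplicative bias
c = intercept      # additive bias
print(f"m = {m:.4f}, c = {c:.4f}")
```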
Hendrickson, Carolyn M; Dobbins, Sarah; Redick, Brittney J; Greenberg, Molly D; Calfee, Carolyn S; Cohen, Mitchell Jay
2015-09-01
Adherence to rigorous research protocols for identifying adult respiratory distress syndrome (ARDS) after trauma is variable. To examine how misclassification of ARDS may bias observational studies in trauma populations, we evaluated the agreement of two methods for adjudicating ARDS after trauma: the current gold standard, direct review of chest radiographs, and review of dictated radiology reports, a commonly used alternative. This nested cohort study included 123 mechanically ventilated patients between 2005 and 2008, with at least one PaO2/FIO2 less than 300 within the first 8 days of admission. Two blinded physician investigators adjudicated ARDS by two methods. The investigators directly reviewed all chest radiographs to evaluate for bilateral infiltrates. Several months later, blinded to their previous assessments, they adjudicated ARDS using a standardized rubric to classify radiology reports. A κ statistic was calculated. Regression analyses quantified the association between established risk factors as well as important clinical outcomes and ARDS determined by the aforementioned methods as well as hypoxemia as a surrogate marker. The κ was 0.47 for the observed agreement between ARDS adjudicated by direct review of chest radiographs and ARDS adjudicated by review of radiology reports. Both the magnitude and direction of bias on the estimates of association between ARDS and established risk factors as well as clinical outcomes varied by method of adjudication. Classification of ARDS by review of dictated radiology reports had only moderate agreement with the current gold standard, ARDS adjudicated by direct review of chest radiographs. While the misclassification of ARDS had varied effects on the estimates of associations with established risk factors, it tended to weaken the association of ARDS with important clinical outcomes. A standardized approach to ARDS adjudication after trauma by direct review of chest radiographs will minimize misclassification bias in future observational studies. Diagnostic study, level II.
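The κ agreement statistic between two adjudication methods can be computed directly from the paired labels, as sketched below with made-up labels (not the study data).
```python
# Illustrative computation of Cohen's kappa between two ARDS adjudication methods.
from sklearn.metrics import cohen_kappa_score

ards_by_radiograph = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]  # direct chest radiograph review
ards_by_report     = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]  # dictated radiology report review

print(cohen_kappa_score(ards_by_radiograph, ards_by_report))
```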
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong
2010-01-01
This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
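A simplified sketch of a percentile bootstrap for factor loadings is given below. The article deals with rotated loadings, factor correlations, and bias-corrected intervals; for brevity this sketch uses unrotated loadings and only sign-aligns each bootstrap solution to the original fit, so it should be read as illustrative of the bootstrap idea rather than as the article's procedure.
```python
# Percentile bootstrap confidence intervals for (unrotated) factor loadings.
# Synthetic data; column order and rotation issues are deliberately simplified.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, p, k = 300, 6, 2
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, p)) + 0.5 * rng.normal(size=(n, p))

fa = FactorAnalysis(n_components=k).fit(X)
ref = fa.components_.T                       # p x k reference loadings

boot = []
for _ in range(500):
    Xb = X[rng.integers(0, n, n)]            # resample rows with replacement
    L = FactorAnalysis(n_components=k).fit(Xb).components_.T
    signs = np.sign(np.sum(L * ref, axis=0)) # align each factor's sign to the reference
    boot.append(L * signs)
boot = np.array(boot)

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)   # 95% percentile intervals
print(lo.round(2), hi.round(2), sep="\n")
```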
Zhao, Yongchao; Zheng, Hao; Xu, Anying; Yan, Donghua; Jiang, Zijian; Qi, Qi; Sun, Jingchen
2016-08-24
Analysis of codon usage bias is an extremely versatile method used to further understanding of the genetic and evolutionary paths of species. Codon usage bias of envelope glycoprotein genes in nuclear polyhedrosis virus (NPV) has remained largely unexplored at present. Hence, the codon usage bias of the NPV envelope glycoprotein was analyzed here to reveal the genetic and evolutionary relationships between different viral species in the baculovirus genus. A total of 9236 codons from 18 different species of NPV of the baculovirus genus were used to perform this analysis. The NPV envelope glycoprotein exhibits relatively weak codon usage bias. Neutrality plot analysis and correlation analysis of effective number of codons (ENC) values indicate that natural selection is the main factor influencing codon usage bias, and that the impact of mutation pressure is relatively small. Cluster analysis shows that the kinship or evolutionary relationships of these viral species can be divided into two broad categories, even though all 18 species are from the same baculovirus genus. There are many elements that can affect codon bias, such as the composition of amino acids, mutation pressure, natural selection, and gene expression level. Cluster analysis also illustrates that codon usage bias of the virus envelope glycoprotein can serve as an effective means of evolutionary classification within the baculovirus genus.
The Risk Factors of Child Lead Poisoning in China: A Meta-Analysis
Li, You; Qin, Jian; Wei, Xiao; Li, Chunhong; Wang, Jian; Jiang, Meiyu; Liang, Xue; Xia, Tianlong; Zhang, Zhiyong
2016-01-01
Background: To investigate the risk factors of child lead poisoning in China. Methods: A document retrieval was performed using MeSH (Medical subject heading terms) and key words. The Newcastle-Ottawa Scale (NOS) was used to assess the quality of the studies, and the pooled odds ratios with a 95% confidence interval were used to identify the risk factors. We employed Review Manager 5.2 and Stata 10.0 to analyze the data. Heterogeneity was assessed by both the Chi-square and I2 tests, and publication bias was evaluated using a funnel plot and Egger’s test. Results: Thirty-four articles reporting 13,587 lead-poisoned children met the inclusion criteria. Unhealthy lifestyle and behaviors, environmental pollution around the home and potential for parents’ occupational exposure to lead were risk factors of child lead poisoning in the pooled analyses. Our assessments yielded no severe publication biases. Conclusions: Seventeen risk factors are associated with child lead poisoning, which can be used to identify high-risk children. Health education and promotion campaigns should be designed in order to minimize or prevent child lead poisoning in China. PMID:27005641
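The basic pooling arithmetic behind such meta-analyses, an inverse-variance pooled odds ratio together with Cochran's Q and the I^2 heterogeneity statistic, is sketched below on made-up numbers (not the reviewed studies).
```python
# Sketch: fixed-effect inverse-variance pooling of log odds ratios, Cochran's Q, I^2.
import numpy as np

or_i = np.array([1.8, 2.3, 1.2, 3.0])            # study odds ratios (illustrative)
se_log_or = np.array([0.30, 0.25, 0.40, 0.35])   # standard errors of log(OR)

y = np.log(or_i)
w = 1.0 / se_log_or**2                           # inverse-variance weights
pooled = np.sum(w * y) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])

Q = np.sum(w * (y - pooled) ** 2)                # Cochran's Q
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100                # percent heterogeneity

print(f"Pooled OR = {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I^2 = {I2:.0f}%")
```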
Performance of three reflectance calibration methods for airborne hyperspectral spectrometer data.
Miura, Tomoaki; Huete, Alfredo R
2009-01-01
In this study, the performances and accuracies of three methods for converting airborne hyperspectral spectrometer data to reflectance factors were characterized and compared. The "reflectance mode (RM)" method, which calibrates a spectrometer against a white reference panel prior to mounting on an aircraft, resulted in spectral reflectance retrievals that were biased and distorted. The magnitudes of these bias errors and distortions varied significantly, depending on time of day and length of the flight campaign. The "linear-interpolation (LI)" method, which converts airborne spectrometer data by taking a ratio of linearly-interpolated reference values from the preflight and post-flight reference panel readings, resulted in precise, but inaccurate reflectance retrievals. These reflectance spectra were not distorted, but were subject to bias errors of varying magnitudes dependent on the flight duration length. The "continuous panel (CP)" method uses a multi-band radiometer to obtain continuous measurements over a reference panel throughout the flight campaign, in order to adjust the magnitudes of the linear-interpolated reference values from the preflight and post-flight reference panel readings. Airborne hyperspectral reflectance retrievals obtained using this method were found to be the most accurate, making the CP approach the most reliable of the three reflectance calibration methods. The performances of the CP method in retrieving accurate reflectance factors were consistent throughout time of day and for various flight durations. Based on the dataset analyzed in this study, the uncertainty of the CP method has been estimated to be 0.0025 ± 0.0005 reflectance units for the wavelength regions not affected by atmospheric absorptions. The RM method can produce reasonable results only for a very short-term flight (e.g., < 15 minutes) conducted around a local solar noon. The flight duration should be kept shorter than 30 minutes for the LI method to produce results with reasonable accuracies. An important advantage of the CP method is that the method can be used for long-duration flight campaigns (e.g., 1-2 hours). Although this study focused on reflectance calibration of airborne spectrometer data, the methods evaluated in this study and the results obtained are directly applicable to ground spectrometer measurements.
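The linear-interpolation (LI) conversion described above reduces to simple arithmetic: interpolate the white-panel signal in time between the preflight and post-flight readings and ratio the target signal against it. A minimal sketch follows; the times, digital numbers, and the assumption of an ideal (reflectance 1.0) panel are illustrative only.
```python
# Sketch of the LI reflectance-factor conversion: target signal divided by the
# time-interpolated reference-panel signal (times the panel's own reflectance).
import numpy as np

t_pre, t_post = 0.0, 90.0                 # minutes since preflight panel reading
panel_pre, panel_post = 1000.0, 1100.0    # panel digital numbers at one wavelength

def reflectance_li(target_dn, t, panel_reflectance=1.0):
    panel_t = np.interp(t, [t_pre, t_post], [panel_pre, panel_post])
    return panel_reflectance * target_dn / panel_t

print(reflectance_li(target_dn=420.0, t=30.0))   # reflectance factor mid-flight
```
The CP method replaces the interpolated panel values with values adjusted by continuous radiometer measurements over the panel, which is what removes the flight-duration-dependent bias.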
Resolving an anomaly in electron temperature measurement using double and triple Langmuir probes
NASA Astrophysics Data System (ADS)
Ghosh, Soumen; Barada, K. K.; Chattopadhyay, P. K.; Ghosh, J.; Bora, D.
2015-02-01
Langmuir probes with variants such as single, double and triple probes remain the most common method of electron temperature measurement in low-temperature laboratory plasmas. However, proper estimation of electron temperature mainly using triple probe configuration requires the proper choice of compensation factor (W). Determination of the compensating factor is not very straightforward as it depends heavily on plasma floating potential (Vf), electron temperature (Te), the type of gas used for plasma production and the bias voltage applied to probe pins, especially in cases where there are substantial variations in floating potential. In this paper we highlight the anomaly in electron temperature measurement using double and triple Langmuir probe techniques as well as the proper determination of the compensation factor (W) to overcome this anomaly. Experiments are carried out with helicon antenna producing inductive radiofrequency plasmas, where significant variation of floating potential along the axis enables a detailed study of deviations introduced in Te measurements using triple probes compared to double and single probes. It is observed that the bias voltage between the probe pins of the triple probes plays an important role in the accurate determination of the compensating factor (W) and should be in the range (5Vd2 < Vd3 < 10Vd2), where Vd2 and Vd3 are the voltage between floating probe pins 2 and 1 and the bias voltage, respectively.
Gating the holes in the Swiss cheese (part I): Expanding professor Reason's model for patient safety
Bryan Young, G.; Makhinson, Michael; Smith, Preston A.; Stobart, Kent; Croskerry, Pat
2017-01-01
Abstract Introduction Although patient safety has improved steadily, harm remains a substantial global challenge. Additionally, safety needs to be ensured not only in hospitals but also across the continuum of care. Better understanding of the complex cognitive factors influencing health care-related decisions and organizational cultures could lead to more rational approaches, and thereby to further improvement. Hypothesis A model integrating the concepts underlying Reason's Swiss cheese theory and the cognitive-affective biases plus cascade could advance the understanding of cognitive-affective processes that underlie decisions and organizational cultures across the continuum of care. Methods Thematic analysis, qualitative information from several sources being used to support argumentation. Discussion Complex covert cognitive phenomena underlie decisions influencing health care. In the integrated model, the Swiss cheese slices represent dynamic cognitive-affective (mental) gates: Reason's successive layers of defence. Like firewalls and antivirus programs, cognitive-affective gates normally allow the passage of rational decisions but block or counter unsound ones. Gates can be breached (ie, holes created) at one or more levels of organizations, teams, and individuals, by (1) any element of cognitive-affective biases plus (conflicts of interest and cognitive biases being the best studied) and (2) other potential error-provoking factors. Conversely, flawed decisions can be blocked and consequences minimized; for example, by addressing cognitive biases plus and error-provoking factors, and being constantly mindful. Informed shared decision making is a neglected but critical layer of defence (cognitive-affective gate). The integrated model can be custom tailored to specific situations, and the underlying principles applied to all methods for improving safety. The model may also provide a framework for developing and evaluating strategies to optimize organizational cultures and decisions. Limitations The concept is abstract, the model is virtual, and the best supportive evidence is qualitative and indirect. Conclusions The proposed model may help enhance rational decision making across the continuum of care, thereby improving patient safety globally. PMID:29168290
Wellskins and slug tests: where's the bias?
NASA Astrophysics Data System (ADS)
Rovey, C. W.; Niemann, W. L.
2001-03-01
Pumping tests in an outwash sand at the Camp Dodge Site give hydraulic conductivities ( K) approximately seven times greater than conventional slug tests in the same wells. To determine if this difference is caused by skin bias, we slug tested three sets of wells, each in a progressively greater stage of development. Results were analyzed with both the conventional Bouwer-Rice method and the deconvolution method, which quantifies the skin and eliminates its effects. In 12 undeveloped wells the average skin is +4.0, causing underestimation of conventional slug-test K (Bouwer-Rice method) by approximately a factor of 2 relative to the deconvolution method. In seven nominally developed wells the skin averages just +0.34, and the Bouwer-Rice method gives K within 10% of that calculated with the deconvolution method. The Bouwer-Rice K in this group is also within 5% of that measured by natural-gradient tracer tests at the same site. In 12 intensely developed wells the average skin is <-0.82, consistent with an average skin of -1.7 measured during single-well pumping tests. At this site the maximum possible skin bias is much smaller than the difference between slug and pumping-test Ks. Moreover, the difference in K persists even in intensely developed wells with negative skins. Therefore, positive wellskins do not cause the difference in K between pumping and slug tests at this site.
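The conventional slug-test estimate referred to above follows the Bouwer-Rice relation, K = [r_c^2 ln(Re/rw) / (2 Le)] (1/t) ln(y0/yt). A minimal sketch is given below; the dimensionless term ln(Re/rw) normally comes from the Bouwer-Rice empirical curves and is simply assumed here, as are the well and displacement values.
```python
# Sketch of the conventional Bouwer-Rice slug-test calculation of hydraulic conductivity.
# ln(Re/rw), well geometry, and displacements below are assumed illustrative values.
import numpy as np

def bouwer_rice_K(r_c, ln_Re_rw, L_e, t, y0, yt):
    """r_c = casing radius, L_e = screen length (m); t in s; y0, yt = displacements (m)."""
    return (r_c**2 * ln_Re_rw) / (2.0 * L_e) * np.log(y0 / yt) / t

print(bouwer_rice_K(r_c=0.025, ln_Re_rw=2.5, L_e=1.5, t=60.0, y0=0.50, yt=0.10))  # K in m/s
```
The deconvolution method mentioned in the abstract additionally fits a skin factor, which is why it can separate the formation K from wellskin effects that bias the conventional estimate.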
O'Brien, D J; León-Vintró, L; McClean, B
2016-01-01
The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector-correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a geant4 Monte Carlo model of the detectors and the Linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by a factor of 3.7% for a 1.1 cm diameter field and higher for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
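The semiempirical combination described above amounts to dividing a Monte Carlo dose-to-water ratio by a measured detector reading ratio, both taken between the clinical (small) field and the reference field. The sketch below shows that arithmetic with illustrative numbers; it is not a substitute for the study's tabulated correction factors.
```python
# Sketch: semiempirical field output correction factor for a small-field detector,
# k = (MC dose-to-water output factor) / (measured detector output factor).
def output_correction_factor(dose_water_clin, dose_water_ref, reading_clin, reading_ref):
    mc_output_factor = dose_water_clin / dose_water_ref        # from Monte Carlo
    measured_output_factor = reading_clin / reading_ref        # from the detector
    return mc_output_factor / measured_output_factor

# e.g. a detector that over-responds by ~3% in a small field needs k ≈ 0.97
print(output_correction_factor(0.68, 1.00, 0.70, 1.00))
```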
de Muinck, Eric J; Trosvik, Pål; Gilfillan, Gregor D; Hov, Johannes R; Sundaram, Arvind Y M
2017-07-06
Advances in sequencing technologies and bioinformatics have made the analysis of microbial communities almost routine. Nonetheless, the need remains to improve on the techniques used for gathering such data, including increasing throughput while lowering cost and benchmarking the techniques so that potential sources of bias can be better characterized. We present a triple-index amplicon sequencing strategy to sequence large numbers of samples at significantly lower cost and in a shorter timeframe compared to existing methods. The design employs a two-stage PCR protocol, incorporating three barcodes into each sample, with the possibility to add a fourth index. It also includes heterogeneity spacers to overcome low complexity issues faced when sequencing amplicons on Illumina platforms. The library preparation method was extensively benchmarked through analysis of a mock community in order to assess biases introduced by sample indexing, number of PCR cycles, and template concentration. We further evaluated the method through re-sequencing of a standardized environmental sample. Finally, we evaluated our protocol on a set of fecal samples from a small cohort of healthy adults, demonstrating good performance in a realistic experimental setting. Between-sample variation was mainly related to batch effects, such as DNA extraction, while sample indexing was also a significant source of bias. PCR cycle number strongly influenced chimera formation and affected relative abundance estimates of species with high GC content. Libraries were sequenced using the Illumina HiSeq and MiSeq platforms to demonstrate that this protocol is highly scalable to sequence thousands of samples at a very low cost. Here, we provide the most comprehensive study of performance and bias inherent to a 16S rRNA gene amplicon sequencing method to date. Triple-indexing greatly reduces the number of long custom DNA oligos required for library preparation, while the inclusion of variable length heterogeneity spacers minimizes the need for PhiX spike-in. This design results in a significant cost reduction of highly multiplexed amplicon sequencing. The biases we characterize highlight the need for highly standardized protocols. Reassuringly, we find that the biological signal is a far stronger structuring factor than the various sources of bias.
Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi
2014-01-01
A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) maize event, MIR162. We first prepared a standard plasmid for MIR162 quantification. The conversion factor (Cf) required to calculate the genetically modified organism (GMO) amount was empirically determined for two real-time PCR instruments, the Applied Biosystems 7900HT (ABI7900) and the Applied Biosystems 7500 (ABI7500) for which the determined Cf values were 0.697 and 0.635, respectively. To validate the developed method, a blind test was carried out in an interlaboratory study. The trueness and precision were evaluated as the bias and reproducibility of relative standard deviation (RSDr). The determined biases were less than 25% and the RSDr values were less than 20% at all evaluated concentrations. These results suggested that the limit of quantitation of the method was 0.5%, and that the developed method would thus be suitable for practical analyses for the detection and quantification of MIR162.
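The role of the conversion factor Cf in such event-specific quantification methods is a simple piece of arithmetic: the ratio of GM-target to endogenous-reference copy numbers (read off the plasmid standard curves) is divided by Cf and expressed as a percentage. The sketch below illustrates this; the copy numbers are made up, while Cf = 0.697 is the ABI7900 value quoted in the abstract.
```python
# Sketch of the usual GMO quantification arithmetic with a conversion factor Cf.
def gmo_percent(gm_copies, endogenous_copies, cf):
    return (gm_copies / endogenous_copies) / cf * 100.0

print(gmo_percent(gm_copies=3500, endogenous_copies=1_000_000, cf=0.697))  # ~0.5 %
```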
Analysis of Developmental Data: Comparison Among Alternative Methods
ERIC Educational Resources Information Center
Wilson, Ronald S.
1975-01-01
To examine the ability of the correction factor epsilon to counteract statistical bias in univariate analysis, an analysis of variance (adjusted by epsilon) and a multivariate analysis of variance were performed on the same data. The results indicated that univariate analysis is a fully protected design when used with epsilon. (JMB)
Exploring and accounting for publication bias in mental health: a brief overview of methods.
Mavridis, Dimitris; Salanti, Georgia
2014-02-01
OBJECTIVE Publication bias undermines the integrity of published research. The aim of this paper is to present a synopsis of methods for exploring and accounting for publication bias. METHODS We discussed the main features of the following methods to assess publication bias: funnel plot analysis; trim-and-fill methods; regression techniques and selection models. We applied these methods to a well-known example of antidepressants trials that compared trials submitted to the Food and Drug Administration (FDA) for regulatory approval. RESULTS The funnel plot-related methods (visual inspection, trim-and-fill, regression models) revealed an association between effect size and SE. Contours of statistical significance showed that asymmetry in the funnel plot is probably due to publication bias. Selection model found a significant correlation between effect size and propensity for publication. CONCLUSIONS Researchers should always consider the possible impact of publication bias. Funnel plot-related methods should be seen as a means of examining for small-study effects and not be directly equated with publication bias. Possible causes for funnel plot asymmetry should be explored. Contours of statistical significance may help disentangle whether asymmetry in a funnel plot is caused by publication bias or not. Selection models, although underused, could be useful resource when publication bias and heterogeneity are suspected because they address directly the problem of publication bias and not that of small-study effects.
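Of the funnel-plot-related methods mentioned, Egger's regression test is the simplest to illustrate: regress the standardized effect (effect/SE) on precision (1/SE) and test whether the intercept differs from zero. The sketch below uses made-up effect sizes, not the antidepressant trial data.
```python
# Sketch of Egger's regression test for funnel-plot asymmetry (small-study effects).
import numpy as np
import statsmodels.api as sm

effect = np.array([0.40, 0.35, 0.55, 0.20, 0.60, 0.15])   # study effect sizes
se = np.array([0.10, 0.12, 0.20, 0.08, 0.25, 0.07])       # their standard errors

y = effect / se                        # standardized effect
X = sm.add_constant(1.0 / se)          # precision, plus intercept
fit = sm.OLS(y, X).fit()

print("Egger intercept:", fit.params[0], "p =", fit.pvalues[0])
```
A significant intercept indicates funnel-plot asymmetry, which, as the abstract stresses, suggests but does not prove publication bias.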
Bias estimation for the Landsat 8 operational land imager
Morfitt, Ron; Vanderwerff, Kelly
2011-01-01
The Operational Land Imager (OLI) is a pushbroom sensor that will be a part of the Landsat Data Continuity Mission (LDCM). This instrument is the latest in the line of Landsat imagers, and will continue to expand the archive of calibrated earth imagery. An important step in producing a calibrated image from instrument data is accurately accounting for the bias of the imaging detectors. Bias variability is one factor that contributes to error in bias estimation for OLI. Typically, the bias is simply estimated by averaging dark data on a per-detector basis. However, data acquired during OLI pre-launch testing exhibited bias variation that correlated well with the variation in concurrently collected data from a special set of detectors on the focal plane. These detectors are sensitive to certain electronic effects but not directly to incoming electromagnetic radiation. A method of using data from these special detectors to estimate the bias of the imaging detectors was developed, but found not to be beneficial at typical radiance levels as the detectors respond slightly when the focal plane is illuminated. In addition to bias variability, a systematic bias error is introduced by the truncation performed by the spacecraft of the 14-bit instrument data to 12-bit integers. This systematic error can be estimated and removed on average, but the per pixel quantization error remains. This paper describes the variability of the bias, the effectiveness of a new approach to estimate and compensate for it, as well as the errors due to truncation and how they are reduced.
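Two of the ingredients described above, the simple per-detector dark average and the systematic offset from truncating 14-bit counts to 12 bits, can be illustrated with a few lines of code. The sketch assumes that truncation means dropping the two least significant bits (an integer divide by 4), which on average loses about 1.5 counts at the 14-bit scale; the data below are synthetic, not OLI telemetry.
```python
# Sketch: per-detector bias from dark frames, and the mean offset introduced by
# truncating 14-bit counts to 12 bits (assumed here to drop the two LSBs).
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_detectors = 200, 16
dark_14bit = rng.normal(loc=1000.0, scale=5.0, size=(n_frames, n_detectors)).round()

per_detector_bias = dark_14bit.mean(axis=0)           # simple per-detector estimate

truncated_12bit = np.floor(dark_14bit / 4.0)           # spacecraft truncation
restored = truncated_12bit * 4.0                       # back to the 14-bit scale
truncation_offset = (dark_14bit - restored).mean()     # ≈ +1.5 counts on average

print(per_detector_bias[:4], truncation_offset)
```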
“Fair Play”: A Videogame Designed to Address Implicit Race Bias Through Active Perspective Taking
Kaatz, Anna; Chu, Sarah; Ramirez, Dennis; Samson-Samuel, Clem; Carnes, Molly
2014-01-01
Abstract Objective: Having diverse faculty in academic health centers will help diversify the healthcare workforce and reduce health disparities. Implicit race bias is one factor that contributes to the underrepresentation of Black faculty. We designed the videogame “Fair Play” in which players assume the role of a Black graduate student named Jamal Davis. As Jamal, players experience subtle race bias while completing “quests” to obtain a science degree. We hypothesized that participants randomly assigned to play the game would have greater empathy for Jamal and lower implicit race bias than participants randomized to read narrative text describing Jamal's experience. Materials and Methods: University of Wisconsin–Madison graduate students were recruited via e-mail and randomly assigned to play “Fair Play” or read narrative text through an online link. Upon completion, participants took an Implicit Association Test to measure implicit bias and answered survey questions assessing empathy toward Jamal and awareness of bias. Results: As hypothesized, gameplayers showed the least implicit bias but only when they also showed high empathy for Jamal (P=0.013). Gameplayers did not show greater empathy than text readers, and women in the text condition reported the greatest empathy for Jamal (P=0.008). However, high empathy only predicted lower levels of implicit bias among those who actively took Jamal's perspective through gameplay (P=0.014). Conclusions: A videogame in which players experience subtle race bias as a Black graduate student has the potential to reduce implicit bias, possibly because of a game's ability to foster empathy through active perspective taking. PMID:26192644
Online Reinforcement Learning Using a Probability Density Estimation.
Agostini, Alejandro; Celaya, Enric
2017-01-01
Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The nonstationarity comes from the recursive nature of the estimations typical of temporal difference methods. This nonstationarity has a local profile, varying not only along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a gaussian mixture model. To deal with the nonstationarity problem, we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it dependent on the local density of samples, which we use to estimate the nonstationarity of the function at any given input point. To address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting depending only on time, thus avoiding undesired distortions of the approximation in less sampled regions.
Patterns and biases in climate change research on amphibians and reptiles: a systematic review.
Winter, Maiken; Fiedler, Wolfgang; Hochachka, Wesley M; Koehncke, Arnulf; Meiri, Shai; De la Riva, Ignacio
2016-09-01
Climate change probably has severe impacts on animal populations, but demonstrating a causal link can be difficult because of potential influences by additional factors. Assessing global impacts of climate change effects may also be hampered by narrow taxonomic and geographical research foci. We review studies on the effects of climate change on populations of amphibians and reptiles to assess climate change effects and potential biases associated with the body of work that has been conducted within the last decade. We use data from 104 studies regarding the effect of climate on 313 species, from 464 species-study combinations. Climate change effects were reported in 65% of studies. Climate change was identified as causing population declines or range restrictions in half of the cases. The probability of identifying an effect of climate change varied among regions, taxa and research methods. Climatic effects were equally prevalent in studies exclusively investigating climate factors (more than 50% of studies) and in studies including additional factors, thus bolstering confidence in the results of studies exclusively examining effects of climate change. Our analyses reveal biases with respect to geography, taxonomy and research question, making global conclusions impossible. Additional research should focus on under-represented regions, taxa and questions. Conservation and climate policy should consider the documented harm climate change causes reptiles and amphibians.
NASA Astrophysics Data System (ADS)
Pérez-Ràfols, Ignasi; Font-Ribera, Andreu; Miralda-Escudé, Jordi; Blomqvist, Michael; Bird, Simeon; Busca, Nicolás; du Mas des Bourboux, Hélion; Mas-Ribas, Lluís; Noterdaeme, Pasquier; Petitjean, Patrick; Rich, James; Schneider, Donald P.
2018-01-01
We present a measurement of the damped Ly α absorber (DLA) mean bias from the cross-correlation of DLAs and the Ly α forest, updating earlier results of Font-Ribera et al. (2012) with the final Baryon Oscillation Spectroscopic Survey data release and an improved method to address continuum fitting corrections. Our cross-correlation is well fitted by linear theory with the standard ΛCDM model, with a DLA bias of bDLA = 1.99 ± 0.11; a more conservative analysis, which removes DLAs in the Ly β forest and uses only the cross-correlation at r > 10 h^-1 Mpc, yields bDLA = 2.00 ± 0.19. This assumes the cosmological model from Planck Collaboration (2016) and the Ly α forest bias factors of Bautista et al. (2017) and includes only statistical errors obtained from bootstrap analysis. The main systematic errors arise from possible impurities and selection effects in the DLA catalogue and from uncertainties in the determination of the Ly α forest bias factors and a correction for effects of high column density absorbers. We find no dependence of the DLA bias on column density or redshift. The measured bias value corresponds to a host halo mass ∼4 × 10^11 h^-1 M⊙ if all DLAs were hosted in haloes of a similar mass. In a realistic model where host haloes over a broad mass range have a DLA cross-section Σ(M_h) ∝ M_h^α down to M_h > M_min = 10^8.5 h^-1 M⊙, we find that α > 1 is required to have bDLA > 1.7, implying a steeper relation or higher value of M_min than is generally predicted in numerical simulations of galaxy formation.
The assessment of biases in the acoustic discrimination of individuals
Šálek, Martin
2017-01-01
Animal vocalizations contain information about individual identity that could potentially be used for the monitoring of individuals. However, the performance of individual discrimination is subject to many biases depending on factors such as the amount of identity information or the methods used. These factors need to be taken into account when comparing results of different studies or selecting the most cost-effective solution for a particular species. In this study, we evaluate several biases associated with the discrimination of individuals. On a large sample of little owl male individuals, we assess how discrimination performance changes with methods of call description, an increasing number of individuals, and number of calls per male. Also, we test whether the discrimination performance within the whole population can be reliably estimated from a subsample of individuals in a pre-screening study. Assessment of discrimination performance at the level of the individual and at the level of the call led to different conclusions. Hence, studies interested in individual discrimination should optimize methods at the level of individuals. The description of calls by their frequency modulation leads to the best discrimination performance. In agreement with our expectations, discrimination performance decreased with population size. Increasing the number of calls per individual linearly increased the discrimination of individuals (but not the discrimination of calls), likely because it allows distinction between individuals with very similar calls. The available pre-screening index does not allow precise estimation of the population size that could be reliably monitored. Overall, projects applying acoustic monitoring at the individual level in a population need to consider limitations regarding the population size that can be reliably monitored and fine-tune their methods according to their needs and limitations. PMID:28486488
Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...
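The two metrics named in this (truncated) passage are usually defined with a denominator that switches between the observed and modelled sums, so that over- and underestimation are penalized symmetrically. The sketch below follows that common formulation; since the passage is cut off, the exact form intended by the authors should be checked against the original before relying on this code.
```python
# Sketch of the normalized mean bias factor (NMBF) and normalized mean absolute
# error factor (NMAEF) as they are commonly defined. Verify against the original
# formulation before use.
import numpy as np

def nmbf(model, obs):
    m, o = np.asarray(model, float), np.asarray(obs, float)
    if m.mean() >= o.mean():
        return m.sum() / o.sum() - 1.0     # overestimation by a factor of (1 + NMBF)
    return 1.0 - o.sum() / m.sum()         # underestimation by a factor of 1/(1 - NMBF)

def nmaef(model, obs):
    m, o = np.asarray(model, float), np.asarray(obs, float)
    denom = o.sum() if m.mean() >= o.mean() else m.sum()
    return np.abs(m - o).sum() / denom

obs = [1.0, 2.0, 3.0, 4.0]
mod = [1.5, 2.5, 2.5, 5.0]
print(nmbf(mod, obs), nmaef(mod, obs))
```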
Spearing, Natalie M; Connelly, Luke B; Nghiem, Hong S; Pobereskin, Louis
2012-11-01
This study highlights the serious consequences of ignoring reverse causality bias in studies on compensation-related factors and health outcomes and demonstrates a technique for resolving this problem of observational data. Data from an English longitudinal study on factors, including claims for compensation, associated with recovery from neck pain (whiplash) after rear-end collisions are used to demonstrate the potential for reverse causality bias. Although it is commonly believed that claiming compensation leads to worse recovery, it is also possible that poor recovery may lead to compensation claims--a point that is seldom considered and never addressed empirically. This pedagogical study compares the association between compensation claiming and recovery when reverse causality bias is ignored and when it is addressed, controlling for the same observable factors. When reverse causality is ignored, claimants appear to have a worse recovery than nonclaimants; however, when reverse causality bias is addressed, claiming compensation appears to have a beneficial effect on recovery, ceteris paribus. To avert biased policy and judicial decisions that might inadvertently disadvantage people with compensable injuries, there is an urgent need for researchers to address reverse causality bias in studies on compensation-related factors and health. Copyright © 2012 Elsevier Inc. All rights reserved.
Cognitive aspect of diagnostic errors.
Phua, Dong Haur; Tan, Nigel C K
2013-01-01
Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibits shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.
Gray, Alastair
2017-01-01
Increasing numbers of economic evaluations are conducted alongside randomised controlled trials. Such studies include factorial trials, which randomise patients to different levels of two or more factors and can therefore evaluate the effect of multiple treatments alone and in combination. Factorial trials can provide increased statistical power or assess interactions between treatments, but raise additional challenges for trial‐based economic evaluations: interactions may occur more commonly for costs and quality‐adjusted life‐years (QALYs) than for clinical endpoints; economic endpoints raise challenges for transformation and regression analysis; and both factors must be considered simultaneously to assess which treatment combination represents best value for money. This article aims to examine issues associated with factorial trials that include assessment of costs and/or cost‐effectiveness, describe the methods that can be used to analyse such studies and make recommendations for health economists, statisticians and trialists. A hypothetical worked example is used to illustrate the challenges and demonstrate ways in which economic evaluations of factorial trials may be conducted, and how these methods affect the results and conclusions. Ignoring interactions introduces bias that could result in adopting a treatment that does not make best use of healthcare resources, while considering all interactions avoids bias but reduces statistical power. We also introduce the concept of the opportunity cost of ignoring interactions as a measure of the bias introduced by not taking account of all interactions. We conclude by offering recommendations for planning, analysing and reporting economic evaluations based on factorial trials, taking increased analysis costs into account. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28470760
An unjustified benefit: immortal time bias in the analysis of time-dependent events.
Gleiss, Andreas; Oberbauer, Rainer; Heinze, Georg
2018-02-01
Immortal time bias is a problem arising from methodologically flawed analyses of time-dependent events in survival analyses. We illustrate the problem with the analysis of a kidney transplantation study. Following patients from transplantation to death, groups defined by the occurrence or nonoccurrence of graft failure during follow-up seemingly had equal overall mortality. Such a naive analysis assumes that patients were assigned to the two groups at the time of transplantation, whereas group membership is actually a consequence of a time-dependent event occurring later during follow-up. We introduce landmark analysis as the method of choice to avoid immortal time bias. Landmark analysis splits the follow-up time at a common, prespecified time point, the so-called landmark. Groups are then defined by time-dependent events having occurred before the landmark, and outcome events are only considered if they occur after the landmark. Landmark analysis can easily be implemented with common statistical software. In our kidney transplantation example, landmark analyses with landmarks set at 30 and 60 months clearly identified graft failure as a risk factor for overall mortality. We give further typical examples from transplantation research and discuss strengths and limitations of landmark analysis and other methods to address immortal time bias, such as Cox regression with time-dependent covariables. © 2017 Steunstichting ESOT.
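As an illustration of the landmark approach described above, a minimal sketch using pandas and the lifelines package; the column names (followup_time, died, time_to_graft_failure), the data layout and the landmark value are hypothetical, not taken from the kidney transplantation study.

```python
import pandas as pd
from lifelines import CoxPHFitter

def landmark_analysis(df, landmark):
    """df columns (assumed): followup_time, died (0/1),
    time_to_graft_failure (NaN if no failure); times in months."""
    # keep only patients still under observation and alive at the landmark
    at_risk = df[df["followup_time"] > landmark].copy()
    # group membership is fixed by what happened *before* the landmark
    at_risk["failed_before_landmark"] = (
        at_risk["time_to_graft_failure"].notna()
        & (at_risk["time_to_graft_failure"] <= landmark)
    ).astype(int)
    # outcome events only count after the landmark, so the clock restarts there
    at_risk["time_from_landmark"] = at_risk["followup_time"] - landmark
    cph = CoxPHFitter()
    cph.fit(at_risk[["time_from_landmark", "died", "failed_before_landmark"]],
            duration_col="time_from_landmark", event_col="died")
    return cph

# e.g. landmark_analysis(df, landmark=30).print_summary()
```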
Revelation of Influencing Factors in Overall Codon Usage Bias of Equine Influenza Viruses
Bhatia, Sandeep; Sood, Richa; Selvaraj, Pavulraj
2016-01-01
Equine influenza viruses (EIVs) of the H3N8 subtype cause severe acute respiratory infections in horses and are still responsible for significant outbreaks worldwide. The adaptability of influenza viruses to a particular host is significantly influenced by their codon usage preferences, owing to an absolute dependence on the host cellular machinery for replication. In the present study, we analyzed genome-wide codon usage patterns in 92 EIV strains, including both H3N8 and H7N7 subtypes, by computing several codon usage indices and applying multivariate statistical methods. Relative synonymous codon usage (RSCU) analysis showed that the preferred synonymous codons are biased towards A/U-ended codons. The overall codon usage bias in EIVs was relatively low and mainly shaped by nucleotide compositional constraints, as inferred from the RSCU and effective number of codons (ENc) analyses. Our data suggest that the codon usage pattern in EIVs is governed by the interplay of mutation pressure, natural selection from its hosts and other, as yet undefined, factors. The H7N7 subtype was found to be less fit to its host (horse) than H3N8, possessing higher codon bias, lower mutation pressure and much less adaptation to the tRNA pool of equine cells. To the best of our knowledge, this is the first report describing codon usage analysis of the complete genomes of EIVs. The outcome of our study is likely to enhance our understanding of factors involved in viral adaptation, evolution, and fitness towards their hosts. PMID:27119730
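A toy illustration of the RSCU index mentioned above: the observed count of each codon divided by its expected count under uniform usage of its synonymous codons. The synonymous-codon table below is deliberately truncated to two amino acids to keep the sketch short; a full genetic-code table would be needed in practice.

```python
from collections import Counter

SYNONYMS = {                       # assumed minimal table (Lys and Leu only)
    "K": ["AAA", "AAG"],
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
}

def rscu(cds):
    """RSCU = observed codon count / mean count over its synonymous codons."""
    codons = [cds[i:i+3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    counts = Counter(codons)
    out = {}
    for aa, group in SYNONYMS.items():
        total = sum(counts[c] for c in group)
        if total == 0:
            continue
        expected = total / len(group)   # expected count under uniform synonymous usage
        for c in group:
            out[c] = counts[c] / expected
    return out

print(rscu("AAAAAGAAATTACTGCTG"))      # toy sequence; values > 1 indicate preferred codons
```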
Luta, George; Ford, Melissa B; Bondy, Melissa; Shields, Peter G; Stamey, James D
2013-04-01
Recent research suggests that the Bayesian paradigm may be useful for modeling biases in epidemiological studies, such as those due to misclassification and missing data. We used Bayesian methods to perform sensitivity analyses for assessing the robustness of study findings to the potential effect of these two important sources of bias. We used data from a study of the joint associations of radiotherapy and smoking with primary lung cancer among breast cancer survivors. We used Bayesian methods to provide an operational way to combine both validation data and expert opinion to account for misclassification of the two risk factors and missing data. For comparative purposes we considered a "full model" that allowed for both misclassification and missing data, along with alternative models that considered only misclassification or missing data, and the naïve model that ignored both sources of bias. We identified noticeable differences between the four models with respect to the posterior distributions of the odds ratios that described the joint associations of radiotherapy and smoking with primary lung cancer. Despite those differences we found that the general conclusions regarding the pattern of associations were the same regardless of the model used. Overall our results indicate a nonsignificantly decreased lung cancer risk due to radiotherapy among nonsmokers, and a mildly increased risk among smokers. We described easy to implement Bayesian methods to perform sensitivity analyses for assessing the robustness of study findings to misclassification and missing data. Copyright © 2012 Elsevier Ltd. All rights reserved.
A Comparison of the β-Substitution Method and a Bayesian Method for Analyzing Left-Censored Data
Huynh, Tran; Quick, Harrison; Ramachandran, Gurumurthy; Banerjee, Sudipto; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.
2016-01-01
Classical statistical methods for analyzing exposure data with values below the detection limits are well described in the occupational hygiene literature, but an evaluation of a Bayesian approach for handling such data is currently lacking. Here, we first describe a Bayesian framework for analyzing censored data. We then present the results of a simulation study conducted to compare the β-substitution method with a Bayesian method for exposure datasets drawn from lognormal distributions and mixed lognormal distributions with varying sample sizes, geometric standard deviations (GSDs), and censoring for single and multiple limits of detection. For each set of factors, estimates for the arithmetic mean (AM), geometric mean, GSD, and the 95th percentile (X0.95) of the exposure distribution were obtained. We evaluated the performance of each method using relative bias, the root mean squared error (rMSE), and coverage (the proportion of the computed 95% uncertainty intervals containing the true value). The Bayesian method using non-informative priors and the β-substitution method were generally comparable in bias and rMSE when estimating the AM and GM. For the GSD and the 95th percentile, the Bayesian method with non-informative priors was more biased and had a higher rMSE than the β-substitution method, but use of more informative priors generally improved the Bayesian method’s performance, making both the bias and the rMSE more comparable to the β-substitution method. An advantage of the Bayesian method is that it provided estimates of uncertainty for these parameters of interest and good coverage, whereas the β-substitution method only provided estimates of uncertainty for the AM, and coverage was not as consistent. Selection of one or the other method depends on the needs of the practitioner, the availability of prior information, and the distribution characteristics of the measurement data. We suggest the use of Bayesian methods if the practitioner has the computational resources and prior information, as the method would generally provide accurate estimates and also provides the distributions of all of the parameters, which could be useful for making decisions in some applications. PMID:26209598
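A rough sketch of the simulation framework described above (relative bias and rMSE for estimators applied to left-censored lognormal exposure data). The DL/sqrt(2) substitution used here is only a stand-in estimator for illustration, not the β-substitution or Bayesian methods evaluated in the paper, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
gm, gsd, n, n_sim = 1.0, 2.5, 50, 2000
true_am = gm * np.exp(0.5 * np.log(gsd) ** 2)    # arithmetic mean of a lognormal
dl = 1.0                                         # single detection limit (assumed)

est = np.empty(n_sim)
for s in range(n_sim):
    x = rng.lognormal(np.log(gm), np.log(gsd), n)
    x_obs = np.where(x < dl, dl / np.sqrt(2), x)  # substitute censored values
    est[s] = x_obs.mean()                         # estimate of the AM

rel_bias = (est.mean() - true_am) / true_am
rmse = np.sqrt(np.mean((est - true_am) ** 2))
print(f"relative bias = {rel_bias:.3f}, rMSE = {rmse:.3f}")
```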
van Rein, Nienke; Cannegieter, Suzanne C; Rosendaal, Frits R; Reitsma, Pieter H; Lijfering, Willem M
2014-02-01
Selection bias in case-control studies occurs when control selection is inappropriate. However, selection bias due to improper case sampling is less well recognized. We describe how to recognize survivor bias (i.e., selection on exposed cases) and illustrate this with an example study. A case-control study was used to analyze the effect of statins on major bleeding during treatment with vitamin K antagonists. A total of 110 patients who experienced such bleeding events were included 18-1,018 days after the bleeding complication and matched to 220 controls. A protective association between statin exposure and major bleeding (odds ratio [OR]: 0.56; 95% confidence interval: 0.29-1.08) was found, which did not become stronger after adjustment for confounding factors. These observations led us to suspect survivor bias. To identify this bias, results were stratified on time between bleeding event and inclusion, and repeated for a negative control (an exposure not related to survival): blood group non-O. The ORs for exposure to statins increased gradually to 1.37 with shorter time between outcome and inclusion, whereas ORs for the negative control remained constant, confirming our hypothesis. We recommend the presented method to check for overoptimistic results, that is, survivor bias, in case-control studies. Copyright © 2014 Elsevier Inc. All rights reserved.
Price, A.; Peterson, James T.
2010-01-01
Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
Scientific citations favor positive results: a systematic review and meta-analysis.
Duyx, Bram; Urlings, Miriam J E; Swaen, Gerard M H; Bouter, Lex M; Zeegers, Maurice P
2017-08-01
Citation bias concerns the selective citation of scientific articles based on their results. We brought together all available evidence on citation bias across scientific disciplines and quantified its impact. An extensive search strategy was applied to the Web of Science Core Collection and Medline, yielding 52 studies in total. We classified these studies by scientific discipline, selection method, and other variables. We also performed random-effects meta-analyses to pool the effect of positive vs. negative results on subsequent citations. Finally, we checked for other determinants of citation as reported in the citation bias literature. Evidence for the occurrence of citation bias was most prominent in the biomedical sciences and least in the natural sciences. Articles with statistically significant results were cited 1.6 (95% confidence interval [CI] 1.3-1.8) times more often than articles with nonsignificant results. Articles in which the authors explicitly concluded that they had found support for their hypothesis were cited 2.7 (CI 2.0-3.7) times as often. Article results and journal impact factor were associated with citation more often than any other reported determinant. As with publication bias, citation bias can lead to an overrepresentation of positive results and unfounded beliefs. Copyright © 2017 Elsevier Inc. All rights reserved.
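A hedged sketch of the random-effects pooling step described above, using DerSimonian-Laird estimation on log ratios; the study estimates and variances below are placeholders, not data from the review.

```python
import numpy as np

def dersimonian_laird(yi, vi):
    """Pool effect sizes yi (e.g. log citation-rate ratios) with within-study variances vi."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    w = 1.0 / vi
    y_fixed = np.sum(w * yi) / np.sum(w)
    q = np.sum(w * (yi - y_fixed) ** 2)                 # Cochran's Q
    k = len(yi)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (vi + tau2)                           # random-effects weights
    y_re = np.sum(w_star * yi) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return y_re, se, tau2

log_rr = [0.55, 0.30, 0.60, 0.41]          # hypothetical per-study log ratios
var_rr = [0.04, 0.02, 0.06, 0.03]
y, se, tau2 = dersimonian_laird(log_rr, var_rr)
print(f"pooled ratio = {np.exp(y):.2f}, 95% CI "
      f"[{np.exp(y - 1.96*se):.2f}, {np.exp(y + 1.96*se):.2f}], tau2 = {tau2:.3f}")
```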
The lack of selection bias in a snowball sampled case-control study on drug abuse.
Lopes, C S; Rodrigues, L C; Sichieri, R
1996-12-01
Friend controls in matched case-control studies can be a potential source of bias, based on the assumption that friends are more likely to share exposure factors. This study evaluates the role of selection bias in a case-control study that used the snowball sampling method, based on friendship, for the selection of cases and controls. The cases selected for the study were drug abusers located in the community. Exposure was defined by the presence of at least one psychiatric diagnosis. Psychiatric and drug abuse/dependence diagnoses were made according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R) criteria. Cases and controls were matched on sex, age and friendship. The measurement of selection bias was made by comparing the proportion of exposed controls selected by exposed cases (p1) with the proportion of exposed controls selected by unexposed cases (p2). If p1 = p2, then selection bias should not occur. The observed distribution of the 185 matched pairs having at least one psychiatric disorder showed a p1 value of 0.52 and a p2 value of 0.51, indicating no selection bias in this study. Our findings support the idea that the use of friend controls can provide a valid basis for a case-control study.
Stukel, Thérèse A.; Fisher, Elliott S; Wennberg, David E.; Alter, David A.; Gottlieb, Daniel J.; Vermeulen, Marian J.
2007-01-01
Context: Comparisons of outcomes between patients treated and untreated in observational studies may be biased due to differences in patient prognosis between groups, often because of unobserved treatment selection biases. Objective: To compare 4 analytic methods for removing the effects of selection bias in observational studies: multivariable model risk adjustment, propensity score risk adjustment, propensity-based matching, and instrumental variable analysis. Design, Setting, and Patients: A national cohort of 122 124 patients who were elderly (aged 65–84 years), receiving Medicare, and hospitalized with acute myocardial infarction (AMI) in 1994–1995, and who were eligible for cardiac catheterization. Baseline chart reviews were taken from the Cooperative Cardiovascular Project and linked to Medicare health administrative data to provide a rich set of prognostic variables. Patients were followed up for 7 years through December 31, 2001, to assess the association between long-term survival and cardiac catheterization within 30 days of hospital admission. Main Outcome Measure: Risk-adjusted relative mortality rate using each of the analytic methods. Results: Patients who received cardiac catheterization (n=73 238) were younger and had lower AMI severity than those who did not. After adjustment for prognostic factors by using standard statistical risk-adjustment methods, cardiac catheterization was associated with a 50% relative decrease in mortality (for multivariable model risk adjustment: adjusted relative risk [RR], 0.51; 95% confidence interval [CI], 0.50–0.52; for propensity score risk adjustment: adjusted RR, 0.54; 95% CI, 0.53–0.55; and for propensity-based matching: adjusted RR, 0.54; 95% CI, 0.52–0.56). Using regional catheterization rate as an instrument, instrumental variable analysis showed a 16% relative decrease in mortality (adjusted RR, 0.84; 95% CI, 0.79–0.90). The survival benefits of routine invasive care from randomized clinical trials are between 8% and 21%. Conclusions: Estimates of the observational association of cardiac catheterization with long-term AMI mortality are highly sensitive to analytic method. All standard risk-adjustment methods have the same limitations regarding removal of unmeasured treatment selection biases. Compared with standard modeling, instrumental variable analysis may produce less biased estimates of treatment effects, but is more suited to answering policy questions than specific clinical questions. PMID:17227979
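A simplified two-stage least-squares sketch of the instrumental-variable idea described above, with a regional treatment rate as the instrument; the data are simulated, the linear-probability form is only illustrative, and none of the numbers relate to the Medicare cohort.

```python
import numpy as np

def two_sls(y, treat, instrument, covariates):
    """Stage 1: predict treatment from instrument + covariates.
       Stage 2: regress outcome on predicted treatment + covariates."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), instrument, covariates])
    stage1, *_ = np.linalg.lstsq(X1, treat, rcond=None)
    treat_hat = X1 @ stage1
    X2 = np.column_stack([np.ones(n), treat_hat, covariates])
    stage2, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return stage2[1]    # coefficient on (predicted) treatment

# toy data with unmeasured selection: healthier patients are more likely to be treated
rng = np.random.default_rng(2)
n = 5000
region_rate = rng.uniform(0.2, 0.8, n)                  # instrument
frailty = rng.normal(0, 1, n)                           # unobserved prognosis
treat = (rng.uniform(0, 1, n) < region_rate - 0.2*frailty).astype(float)
covars = rng.normal(0, 1, (n, 1))                       # measured covariate(s)
death = 0.3 - 0.05*treat + 0.1*frailty + 0.02*covars[:, 0] + rng.normal(0, 0.1, n)

naive, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), treat, covars]), death, rcond=None)
print("covariate-adjusted effect:", round(naive[1], 3))                   # biased by frailty
print("IV (2SLS) effect:        ", round(two_sls(death, treat, region_rate, covars), 3))
```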
NASA Astrophysics Data System (ADS)
Tesfagiorgis, Kibrewossen B.
Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to spatial limitations of radar and gauge products in mountainous regions. The present work develops an approach to seamlessly blend satellite, available radar, climatological and gauge precipitation products to fill gaps in the ground-based radar precipitation field. To mix different precipitation products, the error of any of the products relative to each other should be removed. For bias correction, the study uses a new ensemble-based method which aims to estimate spatially varying multiplicative biases in SPEs using a radar-gauge precipitation product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area. Spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. In addition to biases, there is sometimes also spatial error between the radar and satellite precipitation estimates; one of them has to be geometrically corrected with reference to the other. A set of corresponding raining points between SPE and radar products are selected to apply linear registration using a regularized least square technique to minimize the dislocation error in SPEs with respect to available radar products. A weighted Successive Correction Method (SCM) is used to merge the error-corrected satellite and radar precipitation estimates. In addition to SCM, we use a combination of SCM and a Bayesian spatial method for merging the rain gauges and climatological precipitation sources with radar and SPEs. We demonstrated the method using two satellite-based products, CPC Morphing (CMORPH) and Hydro-Estimator (HE), two radar-gauge based products, Stage-II and ST-IV, the climatological product PRISM, and rain gauge data for several rain events from 2006 to 2008 over different geographical locations of the United States. Results show that: (a) the method of ensembles helped reduce biases in SPEs significantly; (b) the SCM method in combination with the Bayesian spatial model produced a precipitation product in good agreement with independent measurements. The study implies that using the available radar pixels surrounding the gap area, rain gauge, PRISM and satellite products, a radar-like product is achievable over radar gap areas that benefits the operational meteorology and hydrology community.
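A hedged sketch of the multiplicative bias-correction step described above: bias factors (radar divided by satellite) are sampled at rainy pixels, spread into a spatial field by simple inverse-distance weighting as a stand-in for the ensemble and spatial modelling used in the study, and applied to the satellite field. All arrays, coordinates and values are toy examples.

```python
import numpy as np

def idw_field(sample_xy, sample_vals, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted interpolation of sampled bias factors onto a grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    field = np.zeros_like(gx, dtype=float)
    wsum = np.zeros_like(gx, dtype=float)
    for (x, y), v in zip(sample_xy, sample_vals):
        d = np.hypot(gx - x, gy - y) + 1e-6
        w = d ** -power
        field += w * v
        wsum += w
    return field / wsum

# sampled rainy pixels where radar and satellite overlap: multiplicative bias = radar / satellite
sample_xy = [(2, 3), (7, 8), (5, 1)]
radar_at_samples = np.array([4.0, 6.0, 2.5])
sat_at_samples = np.array([5.0, 4.0, 2.0])
bias_samples = radar_at_samples / sat_at_samples

bias_field = idw_field(sample_xy, bias_samples,
                       grid_x=np.arange(10), grid_y=np.arange(10))
satellite = np.full((10, 10), 3.0)          # toy satellite precipitation field
corrected = satellite * bias_field          # bias-corrected satellite estimate
print(corrected.round(2))
```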
Biasogram: Visualization of Confounding Technical Bias in Gene Expression Data
Krzystanek, Marcin; Szallasi, Zoltan; Eklund, Aron C.
2013-01-01
Gene expression profiles of clinical cohorts can be used to identify genes that are correlated with a clinical variable of interest such as patient outcome or response to a particular drug. However, expression measurements are susceptible to technical bias caused by variation in extraneous factors such as RNA quality and array hybridization conditions. If such technical bias is correlated with the clinical variable of interest, the likelihood of identifying false positive genes is increased. Here we describe a method to visualize an expression matrix as a projection of all genes onto a plane defined by a clinical variable and a technical nuisance variable. The resulting plot indicates the extent to which each gene is correlated with the clinical variable or the technical variable. We demonstrate this method by applying it to three clinical trial microarray data sets, one of which identified genes that may have been driven by a confounding technical variable. This approach can be used as a quality control step to identify data sets that are likely to yield false positive results. PMID:23613961
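A minimal sketch of the visualization idea described above: each gene is placed on a plane according to its correlation with the clinical variable and with a technical nuisance variable, so genes that track the artifact stand out. The data are simulated and the variable names are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
n_samples, n_genes = 60, 2000
clinical = rng.integers(0, 2, n_samples).astype(float)      # e.g. responder yes/no
rna_quality = 0.5*clinical + rng.normal(0, 1, n_samples)     # nuisance, confounded with outcome
expr = rng.normal(0, 1, (n_genes, n_samples))
expr[:200] += 0.8 * rna_quality                              # genes driven by the technical artifact

def corr_with(v):
    """Correlation of every gene (rows of expr) with vector v."""
    vc = (v - v.mean()) / v.std()
    xc = (expr - expr.mean(1, keepdims=True)) / expr.std(1, keepdims=True)
    return xc @ vc / len(v)

r_clin, r_tech = corr_with(clinical), corr_with(rna_quality)
plt.scatter(r_clin, r_tech, s=4, alpha=0.4)
plt.xlabel("correlation with clinical variable")
plt.ylabel("correlation with technical variable")
plt.title("genes in the upper-right corner are suspect")
plt.show()
```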
Weighted re-randomization tests for minimization with unbalanced allocation.
Han, Baoguang; Yu, Menggang; McEntegart, Damian
2013-01-01
Re-randomization test has been considered as a robust alternative to the traditional population model-based methods for analyzing randomized clinical trials. This is especially so when the clinical trials are randomized according to minimization, which is a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among various re-randomization tests, fixed-entry-order re-randomization is advocated as an effective strategy when a temporal trend is suspected. Yet when the minimization is applied to trials with unequal allocation, fixed-entry-order re-randomization test is biased and thus compromised in power. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated including the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Singh, J.; Sharma, R. K.; Sule, U. S.; Goutam, U. K.; Gupta, Jagannath; Gadkari, S. C.
2017-07-01
Magnesium phthalocyanine (MgPc) based Schottky diode on indium tin oxide (ITO) substrate was fabricated by thermal evaporation method. The dark current-voltage characteristics of the prepared ITO-MgPc-Al heterojunction Schottky diode were measured at different temperatures. The diode showed non-ideal rectification behavior under forward and reverse bias conditions with a rectification ratio (RR) of 56 at ±1 V at room temperature. Under forward bias, thermionic emission and space charge limited conduction (SCLC) were found to be the dominant conduction mechanisms at low (below 0.6 V) and high voltages (above 0.6 V) respectively. Under reverse bias conditions, Poole-Frenkel (field assisted thermal detrapping of carriers) was the dominant conduction mechanism. Three different approaches, namely I-V plots, Norde and Cheung methods, were used to determine the diode parameters including ideality factor (n), barrier height (Φb) and series resistance (Rs), and were compared. SCLC analysis showed that the trap concentration is 5.52 × 10²² m⁻³ and that it lies at 0.46 eV above the valence band edge.
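A hedged sketch of one of the standard I-V approaches mentioned above: in the thermionic-emission region, ln(I) is approximately linear in V with slope q/(nkT), so the ideality factor follows from a straight-line fit. The data below are synthetic, not the measured MgPc diode characteristics.

```python
import numpy as np

k, q, T = 1.380649e-23, 1.602176634e-19, 300.0          # SI units, room temperature
n_true, I0 = 2.1, 1e-9                                   # assumed "true" diode parameters
V = np.linspace(0.15, 0.55, 40)                          # forward-bias thermionic region
I = I0 * np.exp(q * V / (n_true * k * T))                # synthetic current data

slope, intercept = np.polyfit(V, np.log(I), 1)           # ln(I) = ln(I0) + qV/(nkT)
n_est = q / (slope * k * T)
# the barrier height would follow from the intercept (ln I0), the diode area
# and the Richardson constant, which are not modelled in this toy example
print(f"estimated ideality factor n = {n_est:.2f} (true {n_true})")
```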
ERIC Educational Resources Information Center
Jak, Suzanne; Oort, Frans J.; Dolan, Conor V.
2013-01-01
We present a test for cluster bias, which can be used to detect violations of measurement invariance across clusters in 2-level data. We show how measurement invariance assumptions across clusters imply measurement invariance across levels in a 2-level factor model. Cluster bias is investigated by testing whether the within-level factor loadings…
The use of GRADE approach in systematic reviews of animal studies.
Wei, Dang; Tang, Kun; Wang, Qi; Estill, Janne; Yao, Liang; Wang, Xiaoqin; Chen, Yaolong; Yang, Kehu
2016-03-15
The application of GRADE (Grading of Recommendations Assessment, Development and Evaluation) in systematic reviews (SRs) of animal studies can promote the translation from bench to bedside. We aimed to explore the use of GRADE in systematic reviews of animal studies. We used a theoretical analysis method to explore the use of GRADE in SRs of animal studies and applied it in an SR of animal studies. We also presented and discussed our results at two international conferences. Five downgrading factors were considered for systematic reviews of animal studies: 1) Risk of bias: the SYRCLE tool can be used for assessing the risk of bias of animal studies. 2) Indirectness: indirectness in systematic reviews of animal studies can be assessed from the PICO elements. 3) Inconsistency: similarity of point estimates, extent of overlap of confidence intervals and statistical heterogeneity are also suitable for evaluating inconsistency of evidence from animal studies. 4) Imprecision: the optimal information size (OIS) and 95% confidence intervals (CIs) are also suitable for systematic reviews of animal studies, as for those of clinical trials. 5) Publication bias: publication bias needs to be considered comprehensively through qualitative and quantitative methods. The methods for using GRADE in systematic reviews of animal studies are explicit; however, the principles for applying GRADE when developing policy from animal-study evidence during public health emergencies remain to be clarified. This article is protected by copyright. All rights reserved.
Good practices for quantitative bias analysis.
Lash, Timothy L; Fox, Matthew P; MacLehose, Richard F; Maldonado, George; McCandless, Lawrence C; Greenland, Sander
2014-12-01
Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques and the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analyses have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement the methods. These shortcomings have been largely resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research. These include the following. When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis? We hope that our guide to good practices for conducting and presenting bias analyses will encourage more widespread use of bias analysis to estimate the potential magnitude and direction of biases, as well as the uncertainty in estimates potentially influenced by the biases. © The Author 2014; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
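A minimal simple-bias-analysis sketch in the spirit of the paper: correcting a 2x2 case-control table for nondifferential exposure misclassification under assumed sensitivity and specificity, then sweeping those bias parameters as a sensitivity analysis would. The counts and parameter values are hypothetical.

```python
def corrected_or(a, b, c, d, se, sp):
    """a,b = exposed/unexposed cases; c,d = exposed/unexposed controls (observed counts)."""
    # back-calculate the expected true exposed counts given sensitivity and specificity
    a_t = (a - (1 - sp) * (a + b)) / (se + sp - 1)
    c_t = (c - (1 - sp) * (c + d)) / (se + sp - 1)
    b_t, d_t = (a + b) - a_t, (c + d) - c_t
    return (a_t * d_t) / (b_t * c_t)

a, b, c, d = 120, 380, 80, 420          # hypothetical observed 2x2 table
print("observed OR :", round((a * d) / (b * c), 2))
# sweep the assumed bias parameters to see how sensitive the result is
for se, sp in [(0.9, 0.95), (0.8, 0.95), (0.8, 0.85)]:
    print(f"Se={se}, Sp={sp} -> corrected OR = {corrected_or(a, b, c, d, se, sp):.2f}")
```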
Forensic Child Sexual Abuse Evaluations: Assessing Subjectivity and Bias in Professional Judgements
ERIC Educational Resources Information Center
Everson, Mark D.; Sandoval, Jose Miguel
2011-01-01
Objectives: Evaluators examining the same evidence often arrive at substantially different conclusions in forensic assessments of child sexual abuse (CSA). This study attempts to identify and quantify subjective factors that contribute to such disagreements so that interventions can be devised to improve the reliability of case decisions. Methods:…
An Evaluation of Attitude-Independent Magnetometer-Bias Determination Methods
NASA Technical Reports Server (NTRS)
Hashmall, J. A.; Deutschmann, Julie
1996-01-01
Although several algorithms now exist for determining three-axis magnetometer (TAM) biases without the use of attitude data, there are few studies on the effectiveness of these methods, especially in comparison with attitude dependent methods. This paper presents the results of a comparison of three attitude independent methods and an attitude dependent method for computing TAM biases. The comparisons are based on in-flight data from the Extreme Ultraviolet Explorer (EUVE), the Upper Atmosphere Research Satellite (UARS), and the Compton Gamma Ray Observatory (GRO). The effectiveness of an algorithm is measured by the accuracy of attitudes computed using biases determined with that algorithm. The attitude accuracies are determined by comparison with known, extremely accurate, star-tracker-based attitudes. In addition, the effect of knowledge of calibration parameters other than the biases on the effectiveness of all bias determination methods is examined.
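A simplified sketch of the attitude-independent idea discussed above (not one of the specific algorithms compared in the paper): the bias is chosen so that the magnitude of the corrected measurement matches the field-model magnitude, via an iterated linear least-squares solution of |B_meas|² − |B_ref|² + |b|² = 2 B_meas·b. The synthetic data and noise levels are assumptions.

```python
import numpy as np

def estimate_bias(B_meas, B_ref_mag, n_iter=5):
    """B_meas: (N, 3) measured field vectors; B_ref_mag: (N,) field-model magnitudes."""
    b = np.zeros(3)
    for _ in range(n_iter):
        A = 2.0 * B_meas
        y = (B_meas**2).sum(1) - B_ref_mag**2 + b @ b   # uses previous bias estimate
        b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

# synthetic check: add a known bias and noise to a toy body-frame field
rng = np.random.default_rng(4)
true_bias = np.array([120.0, -60.0, 30.0])                 # nT, assumed
B_true = rng.normal(0, 20000, (500, 3))                     # toy body-frame field vectors
B_meas = B_true + true_bias + rng.normal(0, 50, (500, 3))   # measured = truth + bias + noise
B_ref_mag = np.linalg.norm(B_true, axis=1)                  # only magnitudes are needed
print(estimate_bias(B_meas, B_ref_mag).round(1), "vs", true_bias)
```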
Why Don't We Ask? A Complementary Method for Assessing the Status of Great Apes
Meijaard, Erik; Mengersen, Kerrie; Buchori, Damayanti; Nurcahyo, Anton; Ancrenaz, Marc; Wich, Serge; Atmoko, Sri Suci Utami; Tjiu, Albertus; Prasetyo, Didik; Nardiyono; Hadiprakarsa, Yokyok; Christy, Lenny; Wells, Jessie; Albar, Guillaume; Marshall, Andrew J.
2011-01-01
Species conservation is difficult. Threats to species are typically high and immediate. Effective solutions for counteracting these threats, however, require synthesis of high quality evidence, appropriately targeted activities, typically costly implementation, and rapid re-evaluation and adaptation. Conservation management can be ineffective if there is insufficient understanding of the complex ecological, political, socio-cultural, and economic factors that underlie conservation threats. When information about these factors is incomplete, conservation managers may be unaware of the most urgent threats or unable to envision all consequences of potential management strategies. Conservation research aims to address the gap between what is known and what knowledge is needed for effective conservation. Such research, however, generally addresses a subset of the factors that underlie conservation threats, producing a limited, simplistic, and often biased view of complex, real world situations. A combination of approaches is required to provide the complete picture necessary to engage in effective conservation. Orangutan conservation (Pongo spp.) offers an example: standard conservation assessments employ survey methods that focus on ecological variables, but do not usually address the socio-cultural factors that underlie threats. Here, we evaluate a complementary survey method based on interviews of nearly 7,000 people in 687 villages in Kalimantan, Indonesia. We address areas of potential methodological weakness in such surveys, including sampling and questionnaire design, respondent biases, statistical analyses, and sensitivity of resultant inferences. We show that interview-based surveys can provide cost-effective and statistically robust methods to better understand poorly known populations of species that are relatively easily identified by local people. Such surveys provide reasonably reliable estimates of relative presence and relative encounter rates of such species, as well as quantifying the main factors that threaten them. We recommend more extensive use of carefully designed and implemented interview surveys, in conjunction with more traditional field methods. PMID:21483859
Phelan, Sean M; Dovidio, John F; Puhl, Rebecca M; Burgess, Diana J; Nelson, David B; Yeazel, Mark W; Hardeman, Rachel; Perry, Sylvia; van Ryn, Michelle
2014-04-01
To examine the magnitude of explicit and implicit weight biases compared to biases against other groups; and identify student factors predicting bias in a large national sample of medical students. A web-based survey was completed by 4,732 1st year medical students from 49 medical schools as part of a longitudinal study of medical education. The survey included a validated measure of implicit weight bias, the implicit association test, and 2 measures of explicit bias: a feeling thermometer and the anti-fat attitudes test. A majority of students exhibited implicit (74%) and explicit (67%) weight bias. Implicit weight bias scores were comparable to reported bias against racial minorities. Explicit attitudes were more negative toward obese people than toward racial minorities, gays, lesbians, and poor people. In multivariate regression models, implicit and explicit weight bias was predicted by lower BMI, male sex, and non-Black race. Either implicit or explicit bias was also predicted by age, SES, country of birth, and specialty choice. Implicit and explicit weight bias is common among 1st year medical students, and varies across student factors. Future research should assess implications of biases and test interventions to reduce their impact. Copyright © 2013 The Obesity Society.
Biro, Peter A
2013-02-01
Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two-times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
Don't panic: interpretation bias is predictive of new onsets of panic disorder.
Woud, Marcella L; Zhang, Xiao Chi; Becker, Eni S; McNally, Richard J; Margraf, Jürgen
2014-01-01
Psychological models of panic disorder postulate that the interpretation of ambiguous material as threatening is an important maintaining factor for the disorder. However, demonstrations that such a bias predicts the onset of panic disorder are lacking. In the present study, we used data from the Dresden Prediction Study, in which an epidemiologic sample of young German women was tested at two time points approximately 17 months apart, allowing the study of biased interpretation as a potential risk factor. At the first time point, participants completed an Interpretation Questionnaire including two types of ambiguous scenarios: panic-related and general threat-related. Analyses revealed that a panic-related interpretation bias predicted onset of panic disorder, even after controlling for two established risk factors: anxiety sensitivity and fear of bodily sensations. This is the first prospective study demonstrating the incremental validity of interpretation bias as a predictor of panic disorder onset. Copyright © 2013 Elsevier Ltd. All rights reserved.
Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael
2013-02-01
Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity - which is affected by climatic factors - affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and Applications. The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data. Correcting for temperature-related trapping bias is straightforward and enables population estimates to be more comparable. It may thus improve data interpretation in ecological, conservation and monitoring studies, and assist in better management and conservation of habitats and ecosystem services. Nevertheless, field ecologists should remain vigilant for other sources of bias.
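A short sketch of the exponential correction implied above: each catch is rescaled to a common reference temperature using the estimated proportional change r per °C. The value 0.0863 is the mean estimate for maximum temperature reported in the abstract; the catches, temperatures and reference temperature are illustrative.

```python
import numpy as np

r = 0.0863                      # proportional change in catch per degC (maximum temperature)
t_ref = 20.0                    # reference temperature, an arbitrary choice
t_max = np.array([14.0, 18.0, 22.0, 27.0])       # daily maximum temperature per sample
catch = np.array([3, 8, 15, 30], dtype=float)    # raw pitfall catches

corrected = catch * np.exp(-r * (t_max - t_ref))  # standardize to the reference temperature
print(corrected.round(1))
```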
Mismeasurement and the resonance of strong confounders: correlated errors.
Marshall, J R; Hastrup, J L; Ross, J S
1999-07-01
Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
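An illustrative simulation of the scenario described above: a strong risk factor X and an inconsequential factor Z are measured with correlated errors, and the apparent effect of Z in a joint regression is examined. All parameter values are assumptions chosen only to make the pattern visible.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
x = rng.normal(0, 1, n)                    # strong risk factor (true values)
z = 0.5*x + rng.normal(0, 1, n)            # correlated with X, but truly inconsequential
y = 1.0*x + rng.normal(0, 1, n)            # outcome depends on X only

# correlated measurement errors in the two exposures
err = rng.multivariate_normal([0, 0], [[0.5, 0.3], [0.3, 0.5]], n)
x_obs, z_obs = x + err[:, 0], z + err[:, 1]

X = np.column_stack([np.ones(n), x_obs, z_obs])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated effects  X:", round(beta[1], 3), " Z:", round(beta[2], 3))
# the true effects are 1.0 for X and 0.0 for Z; with measurement error (and error
# correlation) residual confounding by X can masquerade as an apparent effect of Z
```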
Bera, Bidhan Ch; Virmani, Nitin; Kumar, Naveen; Anand, Taruna; Pavulraj, S; Rash, Adam; Elton, Debra; Rash, Nicola; Bhatia, Sandeep; Sood, Richa; Singh, Raj Kumar; Tripathi, Bhupendra Nath
2017-08-23
Equine influenza is a major health problem of equines worldwide. The polymerase genes of influenza virus have key roles in virus replication, transcription, transmission between hosts and pathogenesis. Hence, comprehensive genetic and codon usage analyses of the polymerase genes of equine influenza virus (EIV) were carried out to elucidate their genetic and evolutionary relationships from a novel perspective. Group-specific consensus amino acid substitutions that led to the divergence of EIVs into various clades were identified in all polymerase genes. Consistent amino acid changes were also detected in Florida clade 2 EIVs circulating in Europe and Asia since 2007. To study the codon usage patterns, a total of 281,324 codons of polymerase genes of EIV H3N8 isolates from 1963 to 2015 were systematically analyzed. The polymerase genes of EIVs exhibit a weak codon usage bias. The ENc-GC3s and neutrality plots indicated that natural selection is the major influencing factor of codon usage bias, and that the impact of mutation pressure is comparatively minor. The methods for estimating host-imposed translational pressure suggested that the polymerase acidic (PA) gene seems to be under less translational pressure compared to the polymerase basic 1 (PB1) and polymerase basic 2 (PB2) genes. The multivariate statistical analysis of polymerase genes divided EIVs into four evolutionarily divergent clusters: pre-divergent, Eurasian, and Florida sub-lineages 1 and 2. Various lineage-specific amino acid substitutions were observed in all polymerase genes of EIVs; in particular, clade 2 EIVs underwent major variations, which led to the emergence of a phylogenetically distinct group of EIVs originating from Richmond/1/07. Codon usage bias was low in all polymerase genes of EIVs and was influenced by multiple factors, such as nucleotide composition, mutation pressure, aromaticity and hydropathicity. However, natural selection was the major influencing factor in defining the codon usage patterns and evolution of polymerase genes of EIVs.
Zuba, Anna; Warschburger, Petra
2018-06-01
Anti-fat bias is widespread and is linked to the internalization of weight bias and psychosocial problems. The purpose of this study was to examine the internalization of weight bias among children across weight categories and to evaluate the psychometric properties of the Weight Bias Internalization Scale for Children (WBIS-C). Data were collected from 1484 primary school children and their parents. WBIS-C demonstrated good internal consistency (α = .86) after exclusion of Item 1. The unitary factor structure was supported using exploratory and confirmatory factor analyses (factorial validity). Girls and overweight children reported higher WBIS-C scores in comparison to boys and non-overweight peers (known-groups validity). Convergent validity was shown by significant correlations with psychosocial problems. Internalization of weight bias explained additional variance in different indicators of psychosocial well-being. The results suggest that the WBIS-C is a psychometrically sound and informative tool to assess weight bias internalization among children. Copyright © 2018 Elsevier Ltd. All rights reserved.
A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.
ERIC Educational Resources Information Center
Mayberry, Paul W.
A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…
Berger, Lawrence M; Bruch, Sarah K; Johnson, Elizabeth I; James, Sigrid; Rubin, David
2009-01-01
This study used data on 2,453 children aged 4-17 from the National Survey of Child and Adolescent Well-Being and 5 analytic methods that adjust for selection factors to estimate the impact of out-of-home placement on children's cognitive skills and behavior problems. Methods included ordinary least squares (OLS) regressions and residualized change, simple change, difference-in-difference, and fixed effects models. Models were estimated using the full sample and a matched sample generated by propensity scoring. Although results from the unmatched OLS and residualized change models suggested that out-of-home placement is associated with increased child behavior problems, estimates from models that more rigorously adjust for selection bias indicated that placement has little effect on children's cognitive skills or behavior problems.
Investigating the Stability of Four Methods for Estimating Item Bias.
ERIC Educational Resources Information Center
Perlman, Carole L.; And Others
The reliability of item bias estimates was studied for four methods: (1) the transformed delta method; (2) Shepard's modified delta method; (3) Rasch's one-parameter residual analysis; and (4) the Mantel-Haenszel procedure. Bias statistics were computed for each sample using all methods. Data were from administration of multiple-choice items from…
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting, or without multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims to make model uncertainty a multiple of observation uncertainty that is constant across all data streams. We further propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations from Howland Forest. We argue that the presented approach may help resolve the problem of bias export to sparse data streams.
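A minimal sketch of the weighting idea described above: each stream's model-data mismatch is scaled by its observation uncertainty times a common variance factor f, so sparse streams are not swamped by dense ones. The Gaussian cost form, the grid search standing in for the MCMC/simulated-annealing estimation, and all names are assumptions, not the exact DALEC setup.

```python
import numpy as np

def weighted_cost(streams, f):
    """streams: list of (obs, model, sigma_obs) arrays; f: common variance factor."""
    cost = 0.0
    for obs, mod, sigma in streams:
        scaled_sigma = f * sigma
        # Gaussian negative log-likelihood (additive constants dropped);
        # the log term penalizes inflating f arbitrarily
        cost += 0.5 * np.sum(((obs - mod) / scaled_sigma) ** 2) + np.sum(np.log(scaled_sigma))
    return cost

# toy example: one dense and one sparse data stream, model fixed at zero
rng = np.random.default_rng(6)
dense  = (rng.normal(0, 1.5, 1000), np.zeros(1000), np.full(1000, 1.0))
sparse = (rng.normal(0, 1.5, 12),   np.zeros(12),   np.full(12, 1.0))

f_grid = np.linspace(0.5, 3.0, 26)
costs = [weighted_cost([dense, sparse], f) for f in f_grid]
print("cost-minimizing common variance factor f =", round(f_grid[int(np.argmin(costs))], 2))
```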
Parental and Family Factors as Predictors of Threat Bias in Anxious Youth
Blossom, Jennifer B.; Ginsburg, Golda S.; Birmaher, Boris; Walkup, John T.; Kendall, Philip C.; Keeton, Courtney P.; Langley, Audra K.; Piacentini, John C.; Sakolsky, Dara; Albano, Anne Marie
2014-01-01
The present study examined the relative predictive value of parental anxiety, parents' expectation of child threat bias, and family dysfunction on child's threat bias in a clinical sample of anxious youth. Participants (N = 488) were part of the Child/Adolescent Anxiety Multi-modal study (CAMS), ages 7–17 years (M = 10.69; SD = 2.80). Children met diagnostic criteria for generalized anxiety disorder, separation anxiety and/or social phobia. Children and caregivers completed questionnaires assessing child threat bias, child anxiety, parent anxiety and family functioning. Child age, child anxiety, parental anxiety, parents' expectation of child's threat bias and child-reported family dysfunction were significantly associated with child threat bias. Controlling for child's age and anxiety, regression analyses indicated that parents' expectation of child's threat bias and child-reported family dysfunction were significant positive predictors of child's self-reported threat bias. Findings build on previous literature by clarifying parent and family factors that appear to play a role in the development or maintenance of threat bias and may inform etiological models of child anxiety. PMID:25328258
Age and gender differences in depression across adolescence: real or 'bias'?
van Beek, Yolanda; Hessen, David J; Hutteman, Roos; Verhulp, Esmée E; van Leuven, Mirande
2012-09-01
Since developmental psychologists are interested in explaining age and gender differences in depression across adolescence, it is important to investigate to what extent these observed differences can be attributed to measurement bias. Measurement bias may arise when the phenomenology of depression varies with age or gender, i.e., when younger versus older adolescents or girls versus boys differ in the way depression is experienced or expressed. The Children's Depression Inventory (CDI) was administered to a large school population (N = 4048) aged 8-17 years. A 4-factor model was selected by means of factor analyses for ordered categorical measures. For each of the four factor scales measurement invariance with respect to gender and age (late childhood, early and middle adolescence) was tested using item response theory analyses. Subsequently, to examine which items contributed to measurement bias, all items were studied for differential item functioning (DIF). Finally, it was investigated how developmental patterns changed if measurement biases were accounted for. For each of the factors Self-Deprecation, Dysphoria, School Problems, and Social Problems measurement bias with respect to both gender and age was found and many items showed DIF. Developmental patterns changed profoundly when measurement bias was taken into account. The CDI seemed to particularly overestimate depression in late childhood, and underestimate depression in middle adolescent boys. For scientific as well as clinical use of the CDI, measurement bias with respect to gender and age should be accounted for. © 2012 The Authors. Journal of Child Psychology and Psychiatry © 2012 Association for Child and Adolescent Mental Health.
Patterns and biases in climate change research on amphibians and reptiles: a systematic review
2016-01-01
Climate change probably has severe impacts on animal populations, but demonstrating a causal link can be difficult because of potential influences by additional factors. Assessing global impacts of climate change effects may also be hampered by narrow taxonomic and geographical research foci. We review studies on the effects of climate change on populations of amphibians and reptiles to assess climate change effects and potential biases associated with the body of work that has been conducted within the last decade. We use data from 104 studies regarding the effect of climate on 313 species, from 464 species–study combinations. Climate change effects were reported in 65% of studies. Climate change was identified as causing population declines or range restrictions in half of the cases. The probability of identifying an effect of climate change varied among regions, taxa and research methods. Climatic effects were equally prevalent in studies exclusively investigating climate factors (more than 50% of studies) and in studies including additional factors, thus bolstering confidence in the results of studies exclusively examining effects of climate change. Our analyses reveal biases with respect to geography, taxonomy and research question, making global conclusions impossible. Additional research should focus on under-represented regions, taxa and questions. Conservation and climate policy should consider the documented harm climate change causes reptiles and amphibians. PMID:27703684
Systematic evaluation of bias in microbial community profiles induced by whole genome amplification.
Direito, Susana O L; Zaura, Egija; Little, Miranda; Ehrenfreund, Pascale; Röling, Wilfred F M
2014-03-01
Whole genome amplification methods facilitate the detection and characterization of microbial communities in low biomass environments. We examined the extent to which the actual community structure is reliably revealed and factors contributing to bias. One widely used [multiple displacement amplification (MDA)] and one new primer-free method [primase-based whole genome amplification (pWGA)] were compared using a polymerase chain reaction (PCR)-based method as control. Pyrosequencing of an environmental sample and principal component analysis revealed that MDA impacted community profiles more strongly than pWGA and indicated that this related to species GC content, although an influence of DNA integrity could not be excluded. Subsequently, biases by species GC content, DNA integrity and fragment size were separately analysed using defined mixtures of DNA from various species. We found significantly less amplification of species with the highest GC content for MDA-based templates and, to a lesser extent, for pWGA. DNA fragmentation also interfered severely: species with more fragmented DNA were less amplified with MDA and pWGA. pWGA was unable to amplify low molecular weight DNA (< 1.5 kb), whereas MDA was inefficient. We conclude that pWGA is the most promising method for characterization of microbial communities in low-biomass environments and for currently planned astrobiological missions to Mars. © 2013 Society for Applied Microbiology and John Wiley & Sons Ltd.
The small-x gluon distribution in centrality biased pA and pp collisions
NASA Astrophysics Data System (ADS)
Dumitru, Adrian; Kapilevich, Gary; Skokov, Vladimir
2018-06-01
The nuclear modification factor R_pA(p_T) provides information on the small-x gluon distribution of a nucleus at hadron colliders. Several experiments have recently measured the nuclear modification factor not only in minimum bias but also for central pA collisions. In this paper we analyze the bias on the configurations of soft gluon fields introduced by a centrality selection via the number of hard particles. Such bias can be viewed as a reweighting of configurations of small-x gluons. We find that the biased nuclear modification factor Q_pA(p_T) for central collisions is above R_pA(p_T) for minimum bias events, and that it may redevelop a "Cronin peak" even at small x. The magnitude of the peak is predicted to increase approximately like 1/A_⊥^ν, with ν ∼ 0.6 ± 0.1, if one is able to select more compact configurations of the projectile proton where its gluons occupy a smaller transverse area A_⊥. We predict an enhanced Q_pp(p_T) − 1 ∼ 1/(p_T²)^ν and a Cronin peak even for central pp collisions.
PBPK-Based Probabilistic Risk Assessment for Total Chlorotriazines in Drinking Water
Breckenridge, Charles B.; Campbell, Jerry L.; Clewell, Harvey J.; Andersen, Melvin E.; Valdez-Flores, Ciriaco; Sielken, Robert L.
2016-01-01
The risk of human exposure to total chlorotriazines (TCT) in drinking water was evaluated using a physiologically based pharmacokinetic (PBPK) model. Daily TCT (atrazine, deethylatrazine, deisopropylatrazine, and diaminochlorotriazine) chemographs were constructed for 17 frequently monitored community water systems (CWSs) using linear interpolation and Krieg estimates between observed TCT values. Synthetic chemographs were created using a conservative bias factor of 3 to generate intervening peaks between measured values. Drinking water consumption records from 24-h diaries were used to calculate daily exposure. Plasma TCT concentrations were updated every 30 minutes using the PBPK model output for each simulated calendar year from 2006 to 2010. Margins of exposure (MOEs) were calculated (MOE = [Human Plasma TCTPOD] ÷ [Human Plasma TCTEXP]) based on the toxicological point of departure (POD) and the drinking water-derived exposure to TCT. MOEs were determined based on 1, 2, 3, 4, 7, 14, 28, or 90 days of rolling average exposures and plasma TCT Cmax, or the area under the curve (AUC). Distributions of MOE were determined and the 99.9th percentile was used for risk assessment. MOEs for all 17 CWSs were >1000 at the 99.9th percentile. The 99.9th percentile of the MOE distribution was 2.8-fold less when the 3-fold synthetic chemograph bias factor was used. MOEs were insensitive to interpolation method, the consumer’s age, the water consumption database used and the duration of time over which the rolling average plasma TCT was calculated, for up to 90 days. MOEs were sensitive to factors that modified the toxicological no-observed-effect level (NOEL), including rat strain, endpoint used, method of calculating the NOEL, and the pharmacokinetics of elimination, as well as the magnitude of exposure (CWS, calendar year, and use of bias factors). PMID:26794141
PBPK-Based Probabilistic Risk Assessment for Total Chlorotriazines in Drinking Water.
Breckenridge, Charles B; Campbell, Jerry L; Clewell, Harvey J; Andersen, Melvin E; Valdez-Flores, Ciriaco; Sielken, Robert L
2016-04-01
The risk of human exposure to total chlorotriazines (TCT) in drinking water was evaluated using a physiologically based pharmacokinetic (PBPK) model. Daily TCT (atrazine, deethylatrazine, deisopropylatrazine, and diaminochlorotriazine) chemographs were constructed for 17 frequently monitored community water systems (CWSs) using linear interpolation and Krieg estimates between observed TCT values. Synthetic chemographs were created using a conservative bias factor of 3 to generate intervening peaks between measured values. Drinking water consumption records from 24-h diaries were used to calculate daily exposure. Plasma TCT concentrations were updated every 30 minutes using the PBPK model output for each simulated calendar year from 2006 to 2010. Margins of exposure (MOEs) were calculated (MOE = [Human Plasma TCT_POD] ÷ [Human Plasma TCT_EXP]) based on the toxicological point of departure (POD) and the drinking water-derived exposure to TCT. MOEs were determined based on 1, 2, 3, 4, 7, 14, 28, or 90 days of rolling average exposures and plasma TCT Cmax, or the area under the curve (AUC). Distributions of MOE were determined and the 99.9th percentile was used for risk assessment. MOEs for all 17 CWSs were >1000 at the 99.9th percentile. The 99.9th percentile of the MOE distribution was 2.8-fold less when the 3-fold synthetic chemograph bias factor was used. MOEs were insensitive to interpolation method, the consumer's age, the water consumption database used, and the duration of time over which the rolling average plasma TCT was calculated, for up to 90 days. MOEs were sensitive to factors that modified the toxicological no-observed-effect level (NOEL), including rat strain, endpoint used, method of calculating the NOEL, and the pharmacokinetics of elimination, as well as the magnitude of exposure (CWS, calendar year, and use of bias factors). © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology.
C. Che-Castaldo; C. M. Crisafulli; J. G. Bishop; W. F. Fagan
2015-01-01
PREMISE OF THE STUDY: Females often outnumber males in Salix populations, although the mechanisms behind female bias are not well understood and could be caused by both genetic and ecological factors. We investigated several ecological factors that could bias secondary sex ratios of Salix sitchensis colonizing Mount St. Helens after the 1980 eruption. METHODS...
Reactions to the Implicit Association Test as an Educational Tool: A Mixed Methods Study
ERIC Educational Resources Information Center
Hillard, Amy L.; Ryan, Carey S.; Gervais, Sarah J.
2013-01-01
We examined reactions to the Race Implicit Association Test (IAT), which has been widely used but rarely examined as an educational tool to raise awareness about racial bias. College students (N = 172) were assigned to read that the IAT reflected either personal beliefs or both personal and extrapersonal factors (single vs. multiple explanation…
Community Organizations and Sense of Community: Further Development in Theory and Measurement
ERIC Educational Resources Information Center
Peterson, N. Andrew; Speer, Paul W.; Hughey, Joseph; Armstead, Theresa L.; Schneider, John E.; Sheffer, Megan A.
2008-01-01
The Community Organization Sense of Community Scale (COSOC) is a frequently used or cited measure of the construct in community psychology and other disciplines, despite a lack of confirmation of its underlying 4-factor framework. Two studies were conducted to test the hypothesized structure of the COSOC, the potential effects of method bias on…
Mizukami, Naoki; Clark, Martyn P.; Gutmann, Ethan D.; Mendoza, Pablo A.; Newman, Andrew J.; Nijssen, Bart; Livneh, Ben; Hay, Lauren E.; Arnold, Jeffrey R.; Brekke, Levi D.
2016-01-01
Continental-domain assessments of climate change impacts on water resources typically rely on statistically downscaled climate model outputs to force hydrologic models at a finer spatial resolution. This study examines the effects of four statistical downscaling methods [bias-corrected constructed analog (BCCA), bias-corrected spatial disaggregation applied at daily (BCSDd) and monthly scales (BCSDm), and asynchronous regression (AR)] on retrospective hydrologic simulations using three hydrologic models with their default parameters (the Community Land Model, version 4.0; the Variable Infiltration Capacity model, version 4.1.2; and the Precipitation–Runoff Modeling System, version 3.0.4) over the contiguous United States (CONUS). Biases of hydrologic simulations forced by statistically downscaled climate data relative to the simulation with observation-based gridded data are presented. Each statistical downscaling method produces different meteorological portrayals including precipitation amount, wet-day frequency, and the energy input (i.e., shortwave radiation), and their interplay affects estimations of precipitation partitioning between evapotranspiration and runoff, extreme runoff, and hydrologic states (i.e., snow and soil moisture). The analyses show that BCCA underestimates annual precipitation by as much as −250 mm, leading to unreasonable hydrologic portrayals over the CONUS for all models. Although the other three statistical downscaling methods produce a comparable precipitation bias ranging from −10 to 8 mm across the CONUS, BCSDd severely overestimates the wet-day fraction by up to 0.25, leading to different precipitation partitioning compared to the simulations with other downscaled data. Overall, the choice of downscaling method contributes to less spread in runoff estimates (by a factor of 1.5–3) than the choice of hydrologic model with use of the default parameters if BCCA is excluded.
Hoffmann, Mikael
2017-01-01
Aims To describe and assess current effectiveness studies published up to 2014 using Swedish Prescribed Drug Register (SPDR) data. Methods Study characteristics were extracted. Each study was assessed concerning the clinical relevance of the research question, the risk of bias according to a structured checklist, and as to whether its findings contributed to new knowledge. The biases encountered and ways of handling these were retrieved. Results A total of 24 effectiveness studies were included in the review, the majority on cardiovascular or psychiatric disease (n = 17; 71%). The articles linked data from four (interquartile range: three to four) registers, and were published in 21 different journals with an impact factor ranging from 1.58 to 51.66. All articles had a clinically relevant research question. According to the systematic quality assessments, the overall risk of bias was low in one (4%), moderate in eight (33%) and high in 15 (62%) studies. Overall, two (8%) studies were assessed as contributing to new knowledge. Frequently occurring problems were selection bias making the comparison groups incomparable, treatment bias with suboptimal handling of drug exposure and an intention‐to‐treat approach, and assessment bias including immortal time bias. Good examples of how to handle bias problems included propensity score matching and sensitivity analyses. Conclusion Although this review illustrates that effectiveness studies based on dispensed drug register data can contribute to new evidence of intended effects of drug treatment in clinical practice, the expectations of such data to provide valuable information need to be tempered due to methodological issues. PMID:27928842
Rochat, Philippe; Tone, Erin B.; Baron, Andrew S.
2017-01-01
Implicit intergroup biases emerge early in development, are typically pro-ingroup, and remain stable across the lifespan. Such findings have been interpreted in terms of an automatic ingroup bias similar to what is observed with minimal groups paradigms. These studies are typically conducted with groups of high cultural standing (e.g., Caucasians in North America and Europe). Research conducted among culturally lower status groups (e.g., African-Americans, Latino-Americans) reveals a notable absence of an implicit ingroup bias. Understanding the environmental factors that contribute to the absence of an implicit ingroup bias among people from culturally lower status groups is critical for advancing theories of implicit intergroup cognition. The present study aimed to elucidate the factors that shape racial group bias among African-American children and young adults by examining its relationship with age, school composition (predominantly Black schools or racially mixed schools), parental racial attitudes and socialization messages among African-American children (N = 86) and young adults (N = 130). Age, school-type and parents’ racial socialization messages were all found to be related to the strength of pro-Black (ingroup) bias. We also found that relationships between implicit and explicit bias and frequency of parents' racial socialization messages depended on the type of school participants attended. Our results highlight the importance of considering environmental factors in shaping the magnitude and direction of implicit and explicit race bias among African-Americans rather than treating them as a monolithic group. PMID:28957353
Gonzalez, Araceli; Rozenman, Michelle; Langley, Audra K; Kendall, Philip C; Ginsburg, Golda S; Compton, Scott; Walkup, John T; Birmaher, Boris; Albano, Anne Marie; Piacentini, John
2017-06-01
Anxiety disorders are among the most common mental health problems in youth, and faulty interpretation bias has been positively linked to anxiety severity, even within anxiety-disordered youth. Quick, reliable assessment of interpretation bias may be useful in identifying youth with certain types of anxiety or assessing changes on cognitive bias during intervention. This study examined the factor structure, reliability, and validity of the Self-report of Ambiguous Social Situations for Youth (SASSY) scale, a self-report measure developed to assess interpretation bias in youth. Participants (N=488, age 7 to 17) met diagnostic criteria for Social Phobia, Generalized Anxiety Disorder, and/or Separation Anxiety Disorder. An exploratory factor analysis was performed on baseline data from youth participating in a large randomized clinical trial. Exploratory factor analysis yielded two factors (Accusation/Blame, Social Rejection). The SASSY full scale and Social Rejection factor demonstrated adequate internal consistency, convergent validity with social anxiety, and discriminant validity as evidenced by non-significant correlations with measures of non-social anxiety. Further, the SASSY Social Rejection factor accurately distinguished children and adolescents with Social Phobia from those with other anxiety disorders, supporting its criterion validity, and revealed sensitivity to changes with treatment. Given the relevance to youth with social phobia, pre- and post-intervention data were examined for youth social phobia to test sensitivity to treatment effects; results suggested that SASSY scores reduced for treatment responders. Findings suggest the potential utility of the SASSY Social Rejection factor as a quick, reliable, and efficient way of assessing interpretation bias in anxious youth, particularly as related to social concerns, in research and clinical settings.
Reducing Sensor Noise in MEG and EEG Recordings Using Oversampled Temporal Projection.
Larson, Eric; Taulu, Samu
2018-05-01
Here, we review the theory of suppression of spatially uncorrelated, sensor-specific noise in electro- and magnetoencephalography (EEG and MEG) arrays, and introduce a novel method for its suppression. Our method requires only that the signals of interest are spatially oversampled, which is a reasonable assumption for many EEG and MEG systems. Our method is based on a leave-one-out procedure using overlapping temporal windows in a mathematical framework to project spatially uncorrelated noise in the temporal domain. This method, termed "oversampled temporal projection" (OTP), has four advantages over existing methods. First, sparse channel-specific artifacts are suppressed while limiting mixing with other channels, whereas existing linear, time-invariant spatial operators can spread such artifacts to other channels with a spatial distribution which can be mistaken for one produced by an electrophysiological source. Second, OTP minimizes distortion of the spatial configuration of the data. During source localization (e.g., dipole fitting), many spatial methods require corresponding modification of the forward model to avoid bias, while OTP does not. Third, noise suppression factors at the sensor level are maintained during source localization, whereas bias compensation removes the denoising benefit for spatial methods that require such compensation. Fourth, OTP uses a time-window duration parameter to control the tradeoff between noise suppression and adaptation to time-varying sensor characteristics. OTP efficiently optimizes noise suppression performance while controlling for spatial bias of the signal of interest. This is important in applications where sensor noise significantly limits the signal-to-noise ratio, such as high-frequency brain oscillations.
Development of a Prototype Miniature Silicon Microgyroscope
Xia, Dunzhu; Chen, Shuling; Wang, Shourong
2009-01-01
A miniature vacuum-packaged silicon microgyroscope (SMG) with a symmetrical and decoupled structure was designed to prevent unintended coupling between drive and sense modes. To ensure high resonant stability and strong disturbance-resisting capacity, a self-oscillating closed-loop circuit including an automatic gain control (AGC) loop based on electrostatic force feedback is adopted in drive mode, while dual-channel decomposition and reconstruction closed loops are applied in sense mode. Moreover, the temperature effect on its zero bias was characterized experimentally and a practical compensation method is given. The testing results demonstrate that the useful signal and quadrature signal will not interact with each other because their phases are decoupled. With a scale factor of 9.6 mV/°/s over the full measurement range of ±300 deg/s, the zero bias stability reaches 15°/h with a worst-case nonlinearity of 400 ppm. After compensation from -40 °C to 80 °C, the temperature variation trend of the SMG bias is largely eliminated and the maximum bias value is reduced to one tenth of the original. PMID:22408543
Self-Awareness and Cultural Identity as an Effort to Reduce Bias in Medicine.
White, Augustus A; Logghe, Heather J; Goodenough, Dan A; Barnes, Linda L; Hallward, Anne; Allen, Irving M; Green, David W; Krupat, Edward; Llerena-Quinn, Roxana
2018-02-01
In response to persistently documented health disparities based on race and other demographic factors, medical schools have implemented "cultural competency" coursework. While many of these courses have focused on strategies for treating patients of different cultural backgrounds, very few have addressed the impact of the physician's own cultural background and offered methods to overcome his or her own unconscious biases. In hopes of training physicians to contextualize the impact of their own cultural background on their ability to provide optimal patient care, the authors created a 14-session course on culture, self-reflection, and medicine. After completing the course, students reported an increased awareness of their blind spots and that providing equitable care and treatment would require lifelong reflection and attention to these biases. In this article, the authors describe the formation and implementation of a novel medical school course on self-awareness and cultural identity designed to reduce unconscious bias in medicine. Finally, we discuss our observations and lessons learned after more than 10 years of experience teaching the course.
Sex Bias in Infectious Disease Epidemiology: Patterns and Processes
Guerra-Silveira, Felipe; Abad-Franch, Fernando
2013-01-01
Background Infectious disease incidence is often male-biased. Two main hypotheses have been proposed to explain this observation. The physiological hypothesis (PH) emphasizes differences in sex hormones and genetic architecture, while the behavioral hypothesis (BH) stresses gender-related differences in exposure. Surprisingly, the population-level predictions of these hypotheses are yet to be thoroughly tested in humans. Methods and Findings For ten major pathogens, we tested PH and BH predictions about incidence and exposure-prevalence patterns. Compulsory-notification records (Brazil, 2006–2009) were used to estimate age-stratified ♂:♀ incidence rate ratios for the general population and across selected sociological contrasts. Exposure-prevalence odds ratios were derived from 82 published surveys. We estimated summary effect-size measures using random-effects models; our analyses encompass ∼0.5 million cases of disease or exposure. We found that, after puberty, disease incidence is male-biased in cutaneous and visceral leishmaniasis, schistosomiasis, pulmonary tuberculosis, leptospirosis, meningococcal meningitis, and hepatitis A. Severe dengue is female-biased, and no clear pattern is evident for typhoid fever. In leprosy, milder tuberculoid forms are female-biased, whereas more severe lepromatous forms are male-biased. For most diseases, male bias emerges also during infancy, when behavior is unbiased but sex steroid levels transiently rise. Behavioral factors likely modulate male–female differences in some diseases (the leishmaniases, tuberculosis, leptospirosis, or schistosomiasis) and age classes; however, average exposure-prevalence is significantly sex-biased only for Schistosoma and Leptospira. Conclusions Our results closely match some key PH predictions and contradict some crucial BH predictions, suggesting that gender-specific behavior plays an overall secondary role in generating sex bias. Physiological differences, including the crosstalk between sex hormones and immune effectors, thus emerge as the main candidate drivers of gender differences in infectious disease susceptibility. PMID:23638062
Karim, A K M Rezaul; Proulx, Michael J; Likova, Lora T
2016-09-01
Orientation bias and directionality bias are two fundamental functional characteristics of the visual system. Reviewing the relevant literature in visual psychophysics and visual neuroscience, we propose here a three-stage model of directionality bias in visuospatial functioning. We call this model the 'Perception-Action-Laterality' (PAL) hypothesis. We analyzed the research findings for a wide range of visuospatial tasks, showing that there are two major directionality trends in perceptual preference: clockwise versus anticlockwise. It appears these preferences are combinatorial, such that a majority of people fall in the first category demonstrating a preference for stimuli/objects arranged from left-to-right rather than from right-to-left, while people in the second category show an opposite trend. These perceptual biases can guide sensorimotor integration and action, creating two corresponding turner groups in the population. In support of PAL, we propose another model explaining the origins of the biases - how the neurogenetic factors and the cultural factors interact in a biased competition framework to determine the direction and extent of biases. This dynamic model can explain not only the two major categories of biases in terms of direction and strength, but also the unbiased, unreliably biased or mildly biased cases in visuospatial functioning. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
STACCATO: a novel solution to supernova photometric classification with biased training sets
NASA Astrophysics Data System (ADS)
Revsbech, E. A.; Trotta, R.; van Dyk, D. A.
2018-01-01
We present a new solution to the problem of classifying Type Ia supernovae from their light curves alone given a spectroscopically confirmed but biased training set, circumventing the need to obtain an observationally expensive unbiased training set. We use Gaussian processes (GPs) to model the supernovae's (SNe's) light curves, and demonstrate that the choice of covariance function has only a small influence on the GPs' ability to accurately classify SNe. We extend and improve the approach of Richards et al. - a diffusion map combined with a random forest classifier - to deal specifically with the case of biased training sets. We propose a novel method called Synthetically Augmented Light Curve Classification (STACCATO) that synthetically augments a biased training set by generating additional training data from the fitted GPs. Key to the success of the method is the partitioning of the observations into subgroups based on their propensity score of being included in the training set. Using simulated light curve data, we show that STACCATO increases performance, as measured by the area under the Receiver Operating Characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977 obtained using the 'gold standard' of an unbiased training set and significantly improving on the previous best result of 0.88. STACCATO also increases the true positive rate for SNIa classification by up to a factor of 50 for high-redshift/low-brightness SNe.
Accuracy of the Omron HEM-705 CP for blood pressure measurement in large epidemiologic studies.
Vera-Cala, Lina M; Orostegui, Myriam; Valencia-Angel, Laura I; López, Nahyr; Bautista, Leonelo E
2011-05-01
Accurate measurement of blood pressure is of utmost importance in hypertension research. In the context of epidemiologic and clinical studies, oscillometric devices offer important advantages to overcome some of the limitations of the auscultatory method. Even though their accuracy has been evaluated in multiple studies in the clinical setting, there is little evidence of their performance in large epidemiologic studies. We evaluated the accuracy of the Omron HEM-705-CP, an automatic device for blood pressure (BP) measurement, as compared to the standard auscultatory method with a mercury sphygmomanometer in a large cohort study. We made three auscultatory measurements, followed by two measurements with the Omron device in 1,084 subjects. Bias was estimated as the average of the two Omron minus the average of the last two auscultatory measurements, with its corresponding 95% limits of agreement (LA). The Omron overestimated systolic blood pressure (SBP) by 1.8 mmHg (LA: -10.1, 13.7) and underestimated diastolic blood pressure (DBP) by 1.6 mmHg (LA: -12.3, 9.2). Bias was significantly larger in men. Bias in SBP increased with age and decreased with BP level, while bias in DBP decreased with age and increased with BP level. The sensitivity and specificity of the Omron to detect hypertension were 88.2% and 98.6%, respectively. The use of Omron measurements introduced minimal bias in the estimates of the effects of several factors. Our results showed that the Omron HEM-705-CP could be used for measuring BP in large epidemiologic studies without compromising study validity or precision.
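The bias and 95% limits of agreement reported above follow the usual device-minus-reference (Bland-Altman) logic; a minimal sketch of that calculation, using invented paired readings rather than the study's data:

```python
import numpy as np

# Invented paired measurements (mmHg): per-subject mean of two Omron readings
# and mean of the last two auscultatory readings.
omron = np.array([122.0, 118.5, 131.0, 140.5, 109.0])
auscultatory = np.array([120.0, 117.0, 128.5, 139.0, 110.5])

diff = omron - auscultatory
bias = diff.mean()                         # average device-minus-reference difference
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd) # 95% limits of agreement
print(f"bias = {bias:.1f} mmHg, limits of agreement = ({loa[0]:.1f}, {loa[1]:.1f})")
```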
Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren
2014-10-20
This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both of the bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.
A method for the quantification of biased signalling at constitutively active receptors.
Hall, David A; Giraldo, Jesús
2018-06-01
Biased agonism, the ability of an agonist to differentially activate one of several signal transduction pathways when acting at a given receptor, is an increasingly recognized phenomenon at many receptors. The Black and Leff operational model lacks a way to describe constitutive receptor activity and hence inverse agonism. Thus, it is impossible to analyse the biased signalling of inverse agonists using this model. In this theoretical work, we develop and illustrate methods for the analysis of biased inverse agonism. Methods were derived for quantifying biased signalling in systems that demonstrate constitutive activity using the modified operational model proposed by Slack and Hall. The methods were illustrated using Monte Carlo simulations. The Monte Carlo simulations demonstrated that, with an appropriate experimental design, the model parameters are 'identifiable'. The method is consistent with methods based on the measurement of intrinsic relative activity (RA_i) (ΔΔlogR or ΔΔlog(τ/K_a)) proposed by Ehlert and Kenakin and their co-workers but has some advantages. In particular, it allows the quantification of ligand bias independently of 'system bias', removing the requirement to normalize to a standard ligand. In systems with constitutive activity, the Slack and Hall model provides methods for quantifying the absolute bias of agonists and inverse agonists. This provides an alternative to methods based on RA_i and is complementary to the ΔΔlog(τ/K_a) method of Kenakin et al. in systems where use of that method is inappropriate due to the presence of constitutive activity. © 2018 The British Pharmacological Society.
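For context, the RA_i-based approach mentioned above quantifies bias from operational-model parameters roughly as sketched below. This shows the conventional ΔΔlog(τ/K_a) bookkeeping with invented parameter values; it is not the Slack and Hall extension developed in the paper.

```python
# Invented operational-model estimates: log10(tau/KA) for a test ligand and a
# reference ligand in two signalling pathways.
log_tk = {
    ("test", "pathway1"): 7.2, ("test", "pathway2"): 6.1,
    ("reference", "pathway1"): 6.8, ("reference", "pathway2"): 6.5,
}

# Delta log(tau/KA): normalise each pathway to the reference ligand.
d1 = log_tk[("test", "pathway1")] - log_tk[("reference", "pathway1")]
d2 = log_tk[("test", "pathway2")] - log_tk[("reference", "pathway2")]

dd = d1 - d2            # DeltaDelta log(tau/KA) between pathways
bias_factor = 10 ** dd  # fold-bias of the test ligand toward pathway 1
print(f"DDlog(tau/KA) = {dd:.2f}, bias factor = {bias_factor:.1f}")
```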
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Putter, Roland; Doré, Olivier; Das, Sudeep
2014-01-10
Cross correlations between the galaxy number density in a lensing source sample and that in an overlapping spectroscopic sample can in principle be used to calibrate the lensing source redshift distribution. In this paper, we study in detail to what extent this cross-correlation method can mitigate the loss of cosmological information in upcoming weak lensing surveys (combined with a cosmic microwave background prior) due to lack of knowledge of the source distribution. We consider a scenario where photometric redshifts are available and find that, unless the photometric redshift distribution p(z_ph|z) is calibrated very accurately a priori (bias and scatter known to ∼0.002 for, e.g., EUCLID), the additional constraint on p(z_ph|z) from the cross-correlation technique to a large extent restores the cosmological information originally lost due to the uncertainty in dn/dz(z). Considering only the gain in photo-z accuracy and not the additional cosmological information, enhancements of the dark energy figure of merit of up to a factor of four (40) can be achieved for a SuMIRe-like (EUCLID-like) combination of lensing and redshift surveys, where SuMIRe stands for Subaru Measurement of Images and Redshifts. However, the success of the method is strongly sensitive to our knowledge of the galaxy bias evolution in the source sample and we find that a percent level bias prior is needed to optimize the gains from the cross-correlation method (i.e., to approach the cosmology constraints attainable if the bias was known exactly).
Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar
2015-06-01
Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) as obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Here, simulation scenarios were evaluated, which mimic toxicology protocols in rodents. To ensure differences in pharmacokinetic properties are accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentrations (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
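The exposure summaries compared in that study (AUC, Cmax, TAT) can be computed from a concentration-time profile as sketched below. The profile and threshold are invented, and this is only the non-compartmental bookkeeping, not the nonlinear mixed-effects approach the authors advocate.

```python
import numpy as np

# Invented concentration-time profile (h, ng/mL) for one animal.
t = np.array([0.0, 0.5, 1, 2, 4, 8, 12, 24])
c = np.array([0.0, 12.0, 18.0, 15.0, 9.0, 4.0, 1.5, 0.2])

auc = np.trapz(c, t)   # area under the curve (linear trapezoidal rule)
cmax = c.max()         # peak concentration
threshold = 5.0        # assumed threshold for time above threshold (TAT)

# Time above threshold: integrate an indicator function on a fine grid
# interpolated between the sampled time points.
tf = np.linspace(t[0], t[-1], 10001)
cf = np.interp(tf, t, c)
tat = np.trapz((cf > threshold).astype(float), tf)

print(f"AUC = {auc:.1f} ng*h/mL, Cmax = {cmax:.1f} ng/mL, TAT = {tat:.2f} h")
```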
An advanced method to assess the diet of free-ranging large carnivores based on scats.
Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P; Jago, Mark; Hofer, Heribert
2012-01-01
The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores.
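A minimal sketch of the kind of saturating regression described above (consumed prey mass per scat as a function of prey body mass), assuming the specific form y = a(1 - exp(-b*x)); the data points are invented and the exact functional form used by the authors may differ in detail.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented feeding-experiment data: prey body mass (kg) and consumed prey
# mass needed to excrete one scat (kg).
prey_mass = np.array([0.5, 2, 5, 10, 20, 40, 80, 150])
mass_per_scat = np.array([0.4, 1.0, 1.6, 2.1, 2.5, 2.8, 2.9, 3.0])

def saturating(x, a, b):
    # Consumed mass per scat rises with prey size and levels off at asymptote a.
    return a * (1.0 - np.exp(-b * x))

(a, b), _ = curve_fit(saturating, prey_mass, mass_per_scat, p0=(3.0, 0.1))
print(f"asymptote a = {a:.2f} kg per scat, rate b = {b:.3f} per kg")
```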
An Advanced Method to Assess the Diet of Free-Ranging Large Carnivores Based on Scats
Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P.; Jago, Mark; Hofer, Heribert
2012-01-01
Background The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Methodology/Principal Findings Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Conclusion/Significance Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores. PMID:22715373
Eyre-Walker, Adam; Stoletzki, Nina
2013-10-01
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
Eyre-Walker, Adam; Stoletzki, Nina
2013-01-01
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative. PMID:24115908
Jeromin, Franziska; Nyenhuis, Nele; Barke, Antonia
2016-03-01
Background and aims Internet Gaming Disorder is included in the Diagnostic and statistical manual of mental disorders (5th edition) as a disorder that merits further research. The diagnostic criteria are based on those for Substance Use Disorder and Gambling Disorder. Excessive gamblers and persons with Substance Use Disorder show attentional biases towards stimuli related to their addictions. We investigated whether excessive Internet gamers show a similar attentional bias, by using two established experimental paradigms. Methods We measured reaction times of excessive Internet gamers and non-gamers (N = 51, 23.7 ± 2.7 years) by using an addiction Stroop with computer-related and neutral words, as well as a visual probe with computer-related and neutral pictures. Mixed design analyses of variance with the between-subjects factor group (gamer/non-gamer) and the within-subjects factor stimulus type (computer-related/neutral) were calculated for the reaction times as well as for valence and familiarity ratings of the stimulus material. Results In the addiction Stroop, an interaction for group × word type was found: Only gamers showed longer reaction times to computer-related words compared to neutral words, thus exhibiting an attentional bias. In the visual probe, no differences in reaction time between computer-related and neutral pictures were found in either group, but the gamers were faster overall. Conclusions An attentional bias towards computer-related stimuli was found in excessive Internet gamers, by using an addiction Stroop but not by using a visual probe. A possible explanation for the discrepancy could lie in the fact that the visual probe may have been too easy for the gamers.
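One simple way to probe the group × word-type interaction reported above is to compare attentional-bias scores (computer-related minus neutral reaction time) between gamers and non-gamers. The sketch below does this with invented reaction times and an independent-samples t-test, which is a simplification of the mixed-design ANOVA the authors actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented per-participant mean RTs (ms) for computer-related and neutral words.
gamers_comp, gamers_neut = rng.normal(720, 40, 25), rng.normal(690, 40, 25)
nongamers_comp, nongamers_neut = rng.normal(700, 40, 26), rng.normal(698, 40, 26)

# Attentional-bias index: slower responses to computer-related words => positive bias.
bias_gamers = gamers_comp - gamers_neut
bias_nongamers = nongamers_comp - nongamers_neut

t, p = stats.ttest_ind(bias_gamers, bias_nongamers)
print(f"group difference in Stroop bias: t = {t:.2f}, p = {p:.3f}")
```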
Jeromin, Franziska; Nyenhuis, Nele; Barke, Antonia
2016-01-01
Background and aims Internet Gaming Disorder is included in the Diagnostic and statistical manual of mental disorders (5th edition) as a disorder that merits further research. The diagnostic criteria are based on those for Substance Use Disorder and Gambling Disorder. Excessive gamblers and persons with Substance Use Disorder show attentional biases towards stimuli related to their addictions. We investigated whether excessive Internet gamers show a similar attentional bias, by using two established experimental paradigms. Methods We measured reaction times of excessive Internet gamers and non-gamers (N = 51, 23.7 ± 2.7 years) by using an addiction Stroop with computer-related and neutral words, as well as a visual probe with computer-related and neutral pictures. Mixed design analyses of variance with the between-subjects factor group (gamer/non-gamer) and the within-subjects factor stimulus type (computer-related/neutral) were calculated for the reaction times as well as for valence and familiarity ratings of the stimulus material. Results In the addiction Stroop, an interaction for group × word type was found: Only gamers showed longer reaction times to computer-related words compared to neutral words, thus exhibiting an attentional bias. In the visual probe, no differences in reaction time between computer-related and neutral pictures were found in either group, but the gamers were faster overall. Conclusions An attentional bias towards computer-related stimuli was found in excessive Internet gamers, by using an addiction Stroop but not by using a visual probe. A possible explanation for the discrepancy could lie in the fact that the visual probe may have been too easy for the gamers. PMID:28092198
Detecting and removing multiplicative spatial bias in high-throughput screening technologies.
Caraus, Iurie; Mazoure, Bogdan; Nadon, Robert; Makarenkov, Vladimir
2017-10-15
Considerable attention has been paid recently to improve data quality in high-throughput screening (HTS) and high-content screening (HCS) technologies widely used in drug development and chemical toxicity research. However, several environmentally- and procedurally-induced spatial biases in experimental HTS and HCS screens decrease measurement accuracy, leading to increased numbers of false positives and false negatives in hit selection. Although effective bias correction methods and software have been developed over the past decades, almost all of these tools have been designed to reduce the effect of additive bias only. Here, we address the case of multiplicative spatial bias. We introduce three new statistical methods meant to reduce multiplicative spatial bias in screening technologies. We assess the performance of the methods with synthetic and real data affected by multiplicative spatial bias, including comparisons with current bias correction methods. We also describe a wider data correction protocol that integrates methods for removing both assay and plate-specific spatial biases, which can be either additive or multiplicative. The methods for removing multiplicative spatial bias and the data correction protocol are effective in detecting and cleaning experimental data generated by screening technologies. As our protocol is of a general nature, it can be used by researchers analyzing current or next-generation high-throughput screens. The AssayCorrector program, implemented in R, is available on CRAN. makarenkov.vladimir@uqam.ca. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
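The three new methods introduced in that paper are not reproduced here; purely as an illustration of what multiplicative spatial bias means on a plate, the sketch below removes row and column multiplicative effects with a single median-polish-style sweep on the log scale. This is a generic approach under assumed plate dimensions and bias patterns, not the authors' algorithms or the AssayCorrector implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented 8x12 plate: true signal distorted by multiplicative row/column effects.
true = rng.lognormal(0.0, 0.3, size=(8, 12))
row_eff = np.linspace(0.8, 1.2, 8)[:, None]
col_eff = np.linspace(1.3, 0.7, 12)[None, :]
measured = true * row_eff * col_eff

# Multiplicative correction = additive correction on the log scale.
logm = np.log(measured)
logm -= np.median(logm, axis=1, keepdims=True)  # remove row effects
logm -= np.median(logm, axis=0, keepdims=True)  # remove column effects
corrected = np.exp(logm + np.median(np.log(measured)))  # restore overall scale

print("correlation with true signal:",
      round(np.corrcoef(corrected.ravel(), true.ravel())[0, 1], 3))
```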
A Comparison of the β-Substitution Method and a Bayesian Method for Analyzing Left-Censored Data.
Huynh, Tran; Quick, Harrison; Ramachandran, Gurumurthy; Banerjee, Sudipto; Stenzel, Mark; Sandler, Dale P; Engel, Lawrence S; Kwok, Richard K; Blair, Aaron; Stewart, Patricia A
2016-01-01
Classical statistical methods for analyzing exposure data with values below the detection limits are well described in the occupational hygiene literature, but an evaluation of a Bayesian approach for handling such data is currently lacking. Here, we first describe a Bayesian framework for analyzing censored data. We then present the results of a simulation study conducted to compare the β-substitution method with a Bayesian method for exposure datasets drawn from lognormal distributions and mixed lognormal distributions with varying sample sizes, geometric standard deviations (GSDs), and censoring for single and multiple limits of detection. For each set of factors, estimates for the arithmetic mean (AM), geometric mean, GSD, and the 95th percentile (X0.95) of the exposure distribution were obtained. We evaluated the performance of each method using relative bias, the root mean squared error (rMSE), and coverage (the proportion of the computed 95% uncertainty intervals containing the true value). The Bayesian method using non-informative priors and the β-substitution method were generally comparable in bias and rMSE when estimating the AM and GM. For the GSD and the 95th percentile, the Bayesian method with non-informative priors was more biased and had a higher rMSE than the β-substitution method, but use of more informative priors generally improved the Bayesian method's performance, making both the bias and the rMSE more comparable to the β-substitution method. An advantage of the Bayesian method is that it provided estimates of uncertainty for these parameters of interest and good coverage, whereas the β-substitution method only provided estimates of uncertainty for the AM, and coverage was not as consistent. Selection of one or the other method depends on the needs of the practitioner, the availability of prior information, and the distribution characteristics of the measurement data. We suggest the use of Bayesian methods if the practitioner has the computational resources and prior information, as the method would generally provide accurate estimates and also provides the distributions of all of the parameters, which could be useful for making decisions in some applications. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
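The simulation loop behind an evaluation like the one above can be sketched as follows. For brevity the censored values are handled with a simple DL/sqrt(2) substitution as a stand-in, since neither the actual β-substitution formula nor the Bayesian model is reproduced here, and the performance metrics (relative bias, rMSE) are computed for the arithmetic mean only.

```python
import numpy as np

rng = np.random.default_rng(3)
gm, gsd, dl, n_sims, n = 1.0, 2.5, 1.0, 2000, 20
true_am = gm * np.exp(0.5 * np.log(gsd) ** 2)  # lognormal arithmetic mean

estimates = []
for _ in range(n_sims):
    x = rng.lognormal(np.log(gm), np.log(gsd), size=n)
    x = np.where(x < dl, dl / np.sqrt(2), x)   # simple substitution stand-in
    estimates.append(x.mean())

estimates = np.asarray(estimates)
rel_bias = (estimates.mean() - true_am) / true_am
rmse = np.sqrt(np.mean((estimates - true_am) ** 2))
print(f"relative bias = {rel_bias:.3f}, rMSE = {rmse:.3f}")
```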
Sources of Sampling Bias in Long-Screened Well
Results obtained from ground-water sampling in long-screened wells are often influenced by physical factors such as geologic heterogeneity and vertical hydraulic gradients. These factors often serve to bias results and increase uncertainty in the representativeness of the sample...
Gao, Xinliu; Lin, Hui; Krantz, Carsten; Garnier, Arlette; Flarakos, Jimmy; Tse, Francis L S; Li, Wenkui
2016-01-01
Factor P (Properdin), an endogenous glycoprotein, plays a key role in innate immune defense. Its quantification is important for understanding the pharmacodynamics (PD) of drug candidate(s). In the present work, an immunoaffinity capturing LC-MS/MS method has been developed and validated for the first time for the quantification of factor P in monkey serum with a dynamic range of 125 to 25,000 ng/ml using the calibration standards and QCs prepared in factor P depleted monkey serum. The intra- and inter-run precision was ≤7.2% (CV) and accuracy within ±16.8% (%Bias) across all QC levels evaluated. Results of other evaluations (e.g., stability) all met the acceptance criteria. The validated method was robust and implemented in support of a preclinical PK/PD study.
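The precision (%CV) and accuracy (%bias) figures quoted above are computed per QC level in the usual way; a minimal sketch with invented QC measurements and nominal concentrations:

```python
import numpy as np

# Invented replicate QC measurements (ng/mL) at three nominal levels.
qc = {375.0: [352, 390, 361, 398, 370],
      2500.0: [2410, 2620, 2550, 2480, 2390],
      20000.0: [18800, 21500, 19900, 20600, 19300]}

for nominal, values in qc.items():
    v = np.asarray(values, dtype=float)
    cv = 100 * v.std(ddof=1) / v.mean()          # precision, %CV
    bias = 100 * (v.mean() - nominal) / nominal  # accuracy, %bias
    print(f"QC {nominal:>8.0f}: CV = {cv:4.1f}%, bias = {bias:+5.1f}%")
```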
Karim, A.K.M. Rezaul; Proulx, Michael J.; Likova, Lora T.
2016-01-01
Reviewing the relevant literature in visual psychophysics and visual neuroscience, we propose a three-stage model of directionality bias in visuospatial functioning. We call this model the ‘Perception-Action-Laterality’ (PAL) hypothesis. We analyzed the research findings for a wide range of visuospatial tasks, showing that there are two major directionality trends: clockwise versus anticlockwise. It appears these preferences are combinatorial, such that a majority of people fall in the first category demonstrating a preference for stimuli/objects arranged from left-to-right rather than from right-to-left, while people in the second category show an opposite trend. These perceptual biases can guide sensorimotor integration and action, creating two corresponding turner groups in the population. In support of PAL, we propose another model explaining the origins of the biases - how the neurogenetic factors and the cultural factors interact in a biased competition framework to determine the direction and extent of biases. This dynamic model can explain not only the two major categories of biases, but also the unbiased, unreliably biased or mildly biased cases in visuospatial functioning. PMID:27350096
LD Score Regression Distinguishes Confounding from Polygenicity in Genome-Wide Association Studies
Bulik-Sullivan, Brendan K.; Loh, Po-Ru; Finucane, Hilary; Ripke, Stephan; Yang, Jian; Patterson, Nick; Daly, Mark J.; Price, Alkes L.; Neale, Benjamin M.
2015-01-01
Both polygenicity (i.e., many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield an inflated distribution of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from true polygenic signal and bias. We have developed an approach, LD Score regression, that quantifies the contribution of each by examining the relationship between test statistics and linkage disequilibrium (LD). The LD Score regression intercept can be used to estimate a more powerful and accurate correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size. PMID:25642630
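A toy version of the regression underlying this method: per-SNP chi-square statistics are regressed on LD scores, and the intercept (rather than the mean chi-square, as in genomic control) estimates the confounding component. The simulation below is illustrative only; all numbers are invented and the weighting and block-jackknife machinery of the real software are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n_samples, h2, confounding = 500_000, 20_000, 0.25, 1.05

ld_scores = rng.gamma(shape=4.0, scale=25.0, size=m)   # invented LD scores
# Simplified model: E[chi2_j] = confounding + N * h2 * l_j / M
expected = confounding + n_samples * h2 * ld_scores / m
chi2 = expected * rng.chisquare(df=1, size=m)          # noisy chi-square statistics

# Ordinary least squares of chi2 on LD score: intercept ~ confounding inflation.
slope, intercept = np.polyfit(ld_scores, chi2, deg=1)
print(f"intercept = {intercept:.3f}, implied h2 = {slope * m / n_samples:.3f}")
```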
Iterative Magnetometer Calibration
NASA Technical Reports Server (NTRS)
Sedlak, Joseph
2006-01-01
This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.
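A minimal sketch of the attitude-independent part of such a calibration: the magnetometer bias can be estimated from the constraint that the corrected measurement magnitude must match the reference field magnitude, which becomes linear in the bias once |b|^2 is treated as an extra unknown. Scale factors and misalignments, which the full method also solves for, are ignored here, and all numbers are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated reference field vectors (body frame) and biased, noisy measurements (nT).
true_bias = np.array([120.0, -80.0, 45.0])
b_ref = rng.normal(0.0, 20000.0, size=(500, 3))
meas = b_ref + true_bias + rng.normal(0.0, 5.0, size=(500, 3))

# |m - b|^2 = |B_ref|^2  =>  2 m.b - |b|^2 = |m|^2 - |B_ref|^2, linear in (b, |b|^2).
A = np.hstack([2.0 * meas, -np.ones((meas.shape[0], 1))])
y = np.sum(meas**2, axis=1) - np.sum(b_ref**2, axis=1)
sol, *_ = np.linalg.lstsq(A, y, rcond=None)

print("estimated bias (nT):", np.round(sol[:3], 1))
```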
Lau, Jennifer Y F; Waters, Allison M
2017-04-01
Anxiety and depression occurring during childhood and adolescence are common and costly. While early-emerging anxiety and depression can arise through a complex interplay of 'distal' factors such as genetic and environmental influences, temperamental characteristics and brain circuitry, the more proximal mechanisms that transfer risks on symptoms are poorly delineated. Information-processing biases, which differentiate youth with and without anxiety and/or depression, could act as proximal mechanisms that mediate more distal risks on symptoms. This article reviews the literature on information-processing biases, their associations with anxiety and depression symptoms in youth and with other distal risk factors, to provide direction for further research. Based on strategic searches of the literature, we consider how youth with and without anxiety and/or depression vary in how they deploy attention to social-affective stimuli, discriminate between threat and safety cues, retain memories of negative events and appraise ambiguous information. We discuss how these information-processing biases are similarly or differentially expressed on anxiety and depression and whether these biases are linked to genetic and environmental factors, temperamental characteristics and patterns of brain circuitry functioning implicated in anxiety and depression. Biases in attention and appraisal characterise both youth anxiety and depression but with some differences in how these are expressed for each symptom type. Difficulties in threat-safety cue discrimination characterise anxiety and are understudied in depression, while biases in the retrieval of negative and overgeneral memories have been observed in depression but are understudied in anxiety. Information-processing biases have been studied in relation to some distal factors but not systematically, so relationships remain inconclusive. Biases in attention, threat-safety cue discrimination, memory and appraisal may characterise anxiety and/or depression risk. We discuss future research directions that can more systematically test whether these biases act as proximal mechanisms that mediate other distal risk factors. © 2016 The Authors. Journal of Child Psychology and Psychiatry published by John Wiley & Sons Ltd on behalf of Association for Child and Adolescent Mental Health.
Consideration of VT5 etch-based OPC modeling
NASA Astrophysics Data System (ADS)
Lim, ChinTeong; Temchenko, Vlad; Kaiser, Dieter; Meusel, Ingo; Schmidt, Sebastian; Schneider, Jens; Niehoff, Martin
2008-03-01
Including etch-based empirical data during OPC model calibration is a desired yet controversial decision for OPC modeling, especially for processes with a large litho-to-etch bias. While many OPC software tools are capable of providing this functionality nowadays, few have been implemented in manufacturing due to various risk considerations, such as compromises in resist and optical effects prediction, etch model accuracy, or even runtime concerns. The conventional method of applying rule-based correction alongside a resist model is popular but requires lengthy code generation to provide a leaner OPC input. This work discusses the risk factors and their considerations, together with an introduction to the techniques used within Mentor Calibre VT5 etch-based modeling at the sub-90nm technology node. Various strategies are discussed with the aim of better handling large etch bias offsets without adding complexity to the final OPC package. Finally, results are presented to assess the advantages and limitations of the final method chosen.
Measurement Error and Environmental Epidemiology: A Policy Perspective
Edwards, Jessie K.; Keil, Alexander P.
2017-01-01
Purpose of review Measurement error threatens public health by producing bias in estimates of the population impact of environmental exposures. Quantitative methods to account for measurement bias can improve public health decision making. Recent findings We summarize traditional and emerging methods to improve inference under a standard perspective, in which the investigator estimates an exposure response function, and a policy perspective, in which the investigator directly estimates population impact of a proposed intervention. Summary Under a policy perspective, the analysis must be sensitive to errors in measurement of factors that modify the effect of exposure on outcome, must consider whether policies operate on the true or measured exposures, and may increasingly need to account for potentially dependent measurement error of two or more exposures affected by the same policy or intervention. Incorporating approaches to account for measurement error into such a policy perspective will increase the impact of environmental epidemiology. PMID:28138941
Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao
2015-01-01
Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods including conventional LASSO, Bolasso, stepwise and stability selection models were evaluated using intensive simulation. In addition, methods were compared by using an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. In total, the two newly proposed procedures were stable with respect to various scenarios of simulation, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In empirical analysis, the proposed procedures yielding a sparse set of hepatitis B infection-relevant factors gave the best predictive performance and showed that the procedures were able to select a more stringent set of factors. The individual history of hepatitis B vaccination, family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions The newly proposed procedures improve the identification of significant variables and enable us to derive a new insight into epidemiological association analysis. PMID:26214802
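An illustrative version of a bootstrap-ranking selection built on a LASSO-type penalty (not the authors' exact algorithm): fit a cross-validated LASSO on many bootstrap resamples, rank variables by how often their coefficients are nonzero, and keep those selected above an assumed frequency threshold.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.utils import resample

rng = np.random.default_rng(6)
n, p = 300, 30
X = rng.normal(size=(n, p))
y = X[:, 0] * 1.5 - X[:, 1] * 1.0 + rng.normal(size=n)  # only 2 true predictors

n_boot, counts = 100, np.zeros(p)
for b in range(n_boot):
    Xb, yb = resample(X, y, random_state=b)   # bootstrap resample with replacement
    model = LassoCV(cv=5).fit(Xb, yb)
    counts += model.coef_ != 0                # record which variables were selected

selection_freq = counts / n_boot
selected = np.where(selection_freq >= 0.8)[0]  # assumed selection-frequency threshold
print("selected variables:", selected, selection_freq[selected])
```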
Directed acyclic graphs (DAGs): an aid to assess confounding in dental research.
Merchant, Anwar T; Pitiphat, Waranuch
2002-12-01
Confounding, a special type of bias, occurs when an extraneous factor is associated with the exposure and independently affects the outcome. In order to get an unbiased estimate of the exposure-outcome relationship, we need to identify potential confounders, collect information on them, design appropriate studies, and adjust for confounding in data analysis. However, it is not always clear which variables to collect information on and adjust for in the analyses. Inappropriate adjustment for confounding can even introduce bias where none existed. Directed acyclic graphs (DAGs) provide a method to select potential confounders and minimize bias in the design and analysis of epidemiological studies. DAGs have been used extensively in expert systems and robotics. Robins (1987) introduced the application of DAGs in epidemiology to overcome shortcomings of traditional methods to control for confounding, especially as they related to unmeasured confounding. DAGs provide a quick and visual way to assess confounding without making parametric assumptions. We introduce DAGs, starting with definitions and rules for basic manipulation, stressing applications more than theory. We then demonstrate their application in the control of confounding through examples of observational and cross-sectional epidemiological studies.
Sociocultural and Familial Factors Associated with Weight Bias Internalization
Pearl, Rebecca L.; Wadden, Thomas A.; Shaw Tronieri, Jena; Chao, Ariana M.; Alamuddin, Naji; Bakizada, Zayna M.; Pinkasavage, Emilie; Berkowitz, Robert I.
2018-01-01
Background/Aims Sociocultural and familial factors associated with weight bias internalization (WBI) are currently unknown. The present study explored the relationship between interpersonal sources of weight stigma, family weight history, and WBI. Methods Participants with obesity (N = 178, 87.6% female, 71.3% black) completed questionnaires that assessed the frequency with which they experienced weight stigma from various interpersonal sources. Participants also reported the weight status of their family members and completed measures of WBI, depression, and demographics. Participant height and weight were measured to calculate body mass index (BMI). Results Linear regression results (controlling for demographics, BMI, and depression) showed that stigmatizing experiences from family and work predicted greater WBI. Experiencing weight stigma at work was associated with WBI above and beyond the effects of other sources of stigma. Participants who reported higher BMIs for their mothers had lower levels of WBI. Conclusion Experiencing weight stigma from family and at work may heighten WBI, while having a mother with a higher BMI may be a protective factor against WBI. Prospective research is needed to understand WBI's developmental course and identify mechanisms that increase or mitigate its risk. PMID:29656285
Junctionless Diode Enabled by Self-Bias Effect of Ion Gel in Single-Layer MoS2 Device.
Khan, Muhammad Atif; Rathi, Servin; Park, Jinwoo; Lim, Dongsuk; Lee, Yoontae; Yun, Sun Jin; Youn, Doo-Hyeb; Kim, Gil-Ho
2017-08-16
The self-biasing effects of ion gel from source and drain electrodes on the electrical characteristics of single-layer and few-layer molybdenum disulfide (MoS2) field-effect transistors (FETs) have been studied. The self-biasing effect of ion gel is tested for two different configurations, covered and open, in which the ion gel is in contact with either one or both of the source and drain electrodes, respectively. In the open configuration, the linear output characteristics of the pristine device become nonlinear and the on-off ratio drops by 3 orders of magnitude due to the increase in "off" current for both single- and few-layer MoS2 FETs. However, the covered configuration results in highly asymmetric output characteristics with a rectification of around 10^3 and an ideality factor of 1.9. This diode-like behavior has been attributed to the reduction of the Schottky barrier width by the electric field of the self-biased ion gel, which enables efficient injection of electrons by tunneling at the metal-MoS2 interface. Finally, finite element method based simulations are carried out, and the simulated results match well in principle with the experimental analysis. These self-biased diodes can perform a crucial role in the development of high-frequency optoelectronic and valleytronic devices.
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
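For readers unfamiliar with the Firth approach mentioned above, the block below states its generic form (Firth, 1993), not the specific conditional-Poisson implementation evaluated in the study: the log-likelihood is penalized by half the log-determinant of the Fisher information, which removes the leading-order bias of the maximum likelihood estimate and keeps estimates finite in sparse data.

```latex
% Generic Firth penalized log-likelihood (a background sketch, not the
% SCCS-specific estimator used in the study)
\ell^{*}(\beta) \;=\; \ell(\beta) \;+\; \tfrac{1}{2}\,\log\left|\,I(\beta)\,\right|
```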
How does bias correction of RCM precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Vaze, J.; Evans, J. P.
2014-09-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the difference between the tested methods is small in the modelling experiments here (and as reported in the literature), mainly because of the substantial corrections required and inconsistent errors over time (non-stationarity). The errors remaining in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome the limitations of the RCM in simulating precipitation sequences, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
[Gender perspective in socio-health care needs].
Vázquez-Santiago, Soledad; Garrido Peña, Francisco
2016-01-01
Social conditions are the first environment that modulates the external factors that impact health. In turn, gender is a decisive factor in these social determinants of health. This paper analyzes gender bias in the health system as a relevant part of the social determinants. We can distinguish three types of bias: cognitive, social, and institutional. Within the institutional biases, we analyze the gender-related risks and costs arising from the coordination between the health system and the system of social protection. Finally, we suggest a series of measures to minimize these biases and risks. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
The small-x gluon distribution in centrality biased pA and pp collisions
Dumitru, Adrian; Kapilevich, Gary; Skokov, Vladimir
2018-04-04
Here, the nuclear modification factor R_pA(p_T) provides information on the small-x gluon distribution of a nucleus at hadron colliders. Several experiments have recently measured the nuclear modification factor not only in minimum bias but also for central pA collisions. In this paper we analyze the bias on the configurations of soft gluon fields introduced by a centrality selection via the number of hard particles. Such bias can be viewed as reweighting of configurations of small-x gluons. We find that the biased nuclear modification factor Q_pA(p_T) for central collisions is above R_pA(p_T) for minimum bias events, and that it may redevelop a "Cronin peak" even at small x. The magnitude of the peak is predicted to increase approximately like 1/A_⊥^ν, with ν ≈ 0.6 ± 0.1, if one is able to select more compact configurations of the projectile proton where its gluons occupy a smaller transverse area A_⊥. We predict an enhanced Q_pp(p_T) − 1 ~ 1/(p_T^2)^ν and a Cronin peak even for central pp collisions.
Takabatake, Reona; Koiwa, Tomohiro; Kasahara, Masaki; Takashima, Kaori; Futo, Satoshi; Minegishi, Yasutaka; Akiyama, Hiroshi; Teshima, Reiko; Oguchi, Taichi; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi
2011-01-01
To reduce the cost and time required to routinely perform the genetically modified organism (GMO) test, we developed a duplex quantitative real-time PCR method for a screening analysis simultaneously targeting an event-specific segment for GA21 and a Cauliflower Mosaic Virus 35S promoter (P35S) segment [Oguchi et al., J. Food Hyg. Soc. Japan, 50, 117-125 (2009)]. To confirm the validity of the method, an interlaboratory collaborative study was conducted. In the collaborative study, conversion factors (Cfs), which are required to calculate the GMO amount (%), were first determined for two real-time PCR instruments, the ABI PRISM 7900HT and the ABI PRISM 7500. A blind test was then conducted. The limit of quantitation for both GA21 and P35S was estimated to be 0.5% or less. The trueness and precision were evaluated as the bias and the reproducibility relative standard deviation (RSD(R)), respectively. The determined bias and RSD(R) were each less than 25%. We believe the developed method would be useful for the practical screening analysis of GM maize.
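As a rough illustration of how a conversion factor (Cf) typically enters this kind of screening calculation, the sketch below converts real-time PCR copy-number estimates into a GMO amount. The function name, the example numbers, and the exact formula are illustrative assumptions rather than the validated protocol of the collaborative study.

```python
def gmo_amount_percent(gm_copies: float, reference_copies: float, cf: float) -> float:
    """Illustrative GMO amount (%) from real-time PCR copy-number estimates.

    gm_copies        -- estimated copies of the GM target (e.g., GA21 or P35S)
    reference_copies -- estimated copies of the taxon-specific reference gene
    cf               -- conversion factor, assumed here to be the GM/reference
                        copy ratio measured on a 100% GM reference material
                        for the same instrument
    """
    return (gm_copies / reference_copies) / cf * 100.0


# Hypothetical run: 1,200 GM copies, 50,000 reference copies, Cf = 0.30
print(round(gmo_amount_percent(1200, 50000, 0.30), 2))  # -> 8.0
```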
TRASYS form factor matrix normalization
NASA Technical Reports Server (NTRS)
Tsuyuki, Glenn T.
1992-01-01
A method has been developed for adjusting a TRASYS enclosure form factor matrix to unity. This approach is not limited to closed geometries; in fact, it is primarily intended for use with open geometries. The purpose of this approach is to prevent overly optimistic form factors to space. In this method, nodal form factor sums are calculated to within 0.05 of unity using TRASYS, although deviations as large as 0.10 may be acceptable, and a process is then employed to distribute the difference amongst the nodes. A specific example has been analyzed with this method, and a comparison was performed with a standard approach for calculating radiation conductors. In this comparison, hot and cold case temperatures were determined. Exterior nodes exhibited temperature differences as large as 7 C and 3 C for the hot and cold cases, respectively, when compared with the standard approach, while interior nodes demonstrated temperature differences from 0 C to 5 C. These results indicate that temperature predictions can be artificially biased if the form factor computation error is lumped into the individual form factors to space.
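A minimal sketch of this kind of row normalization, assuming the simplest redistribution rule (scaling every form factor in a row so the row sums to unity); TRASYS itself and the exact distribution scheme used in the paper may differ.

```python
import numpy as np

def normalize_form_factors(F: np.ndarray, tol: float = 0.05) -> np.ndarray:
    """Scale each row of an enclosure form factor matrix so it sums to 1.

    F   -- (n x n) form factor matrix; row i holds the fractions of energy
           leaving node i that reach every other node (row sums below 1 imply
           an implicit, possibly optimistic, form factor to space).
    tol -- rows whose sums deviate from unity by more than tol are flagged,
           mirroring the 0.05 screening threshold mentioned in the abstract.
    """
    sums = F.sum(axis=1)
    flagged = np.abs(sums - 1.0) > tol
    if flagged.any():
        print(f"Warning: {flagged.sum()} node(s) outside the {tol} tolerance")
    return F / sums[:, None]   # proportional redistribution of the deficit/excess

# Hypothetical 3-node open enclosure whose row sums fall slightly short of unity
F = np.array([[0.00, 0.55, 0.42],
              [0.33, 0.00, 0.64],
              [0.28, 0.69, 0.00]])
print(normalize_form_factors(F).sum(axis=1))  # -> [1. 1. 1.]
```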
ERIC Educational Resources Information Center
Austin, Bryan S.; Leahy, Michael J.
2015-01-01
Purpose: To construct and validate a new self-report instrument, the Clinical Judgment Skill Inventory (CJSI), inclusive of clinical judgment skill competencies that address counselor biases and evidence-based strategies. Method: An Internet-based survey design was used and an exploratory factor analysis was performed on a sample of rehabilitation…
Krishna P. Poudel; Temesgen Hailemariam
2016-01-01
Using data from destructively sampled Douglas-fir and lodgepole pine trees, we evaluated the performance of regional volume and component biomass equations in terms of bias and RMSE. The volume and component biomass equations were calibrated using three different adjustment methods that used: (a) a correction factor based on ordinary least square regression through...
Daly, Alison M; Parsons, Jacqueline E; Wood, Nerissa A; Gill, Tiffany K; Taylor, Anne W
2010-12-01
Risk factor surveillance is an integral part of public health, and can provide a ready-made sample for further research. This study assessed the utility of mixed-methodology research using telephone and postal surveys. Adult respondents to telephone surveys in South Australia and Western Australia were recruited to a postal survey about food consumption, in particular, relating to fruit and vegetables. Responses to the two surveys were compared. Around 60% of eligible telephone survey respondents participated in the postal survey. There was fair to poor agreement between the results from the two methods for serves of fruit and vegetables consumed. There was excellent agreement between the two methods for self-reported height and weight. The telephone survey was a useful way to recruit people to the postal survey; this could be due to the high level of trust gained through the telephone interview, or social desirability bias. It is difficult to ascertain why different results on fruit and vegetable intake were obtained, but it may be associated with understanding of the parameters of a 'serve', recall bias or the time taken to calculate an answer.
Federated Tensor Factorization for Computational Phenotyping
Kim, Yejin; Sun, Jimeng; Yu, Hwanjo; Jiang, Xiaoqian
2017-01-01
Tensor factorization models offer an effective approach to convert massive electronic health records into meaningful clinical concepts (phenotypes) for data analysis. These models need a large amount of diverse samples to avoid population bias. An open challenge is how to derive phenotypes jointly across multiple hospitals, in which direct patient-level data sharing is not possible (e.g., due to institutional policies). In this paper, we developed a novel solution to enable federated tensor factorization for computational phenotyping without sharing patient-level data. We developed secure data harmonization and federated computation procedures based on the alternating direction method of multipliers (ADMM). Using this method, the multiple hospitals iteratively update tensors and transfer secure summarized information to a central server, and the server aggregates the information to generate phenotypes. We demonstrated with real medical datasets that our method resembles the centralized training model (based on combined datasets) in terms of accuracy and phenotype discovery while respecting privacy. PMID:29071165
Raman spectrum method for characterization of pull-in voltages of graphene capacitive shunt switches
NASA Astrophysics Data System (ADS)
Li, Peng; You, Zheng; Cui, Tianhong
2012-12-01
An approach using the Raman spectrum method is reported to measure pull-in voltages of graphene capacitive shunt switches. When the bias exceeds the pull-in voltage, the intensity of the Raman spectrum decreases markedly. Two factors that contribute to the intensity reduction are investigated. Moreover, by monitoring the frequency shift of the G peak and the 2D band, we are able to detect the pull-in voltage and measure the strain change in graphene beams during switching.
RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.
Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang
2017-01-03
The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias on the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms other alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
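As a loose illustration of the regression-on-controls idea (not the actual RELIC algorithm or its ENmix implementation), the sketch below fits a log-linear relationship between the paired internal control intensities of the two channels and uses it to move red-channel intensities onto the green-channel scale; the variable names and the correction direction are assumptions.

```python
import numpy as np

def dye_bias_correct(red_probes, green_ctrl, red_ctrl):
    """Toy dye-bias correction via regression on logged internal controls.

    green_ctrl, red_ctrl -- intensities of paired internal control probes that
                            measure the same targets in the Cy3 (green) and
                            Cy5 (red) channels (assumed naming)
    red_probes           -- red-channel probe intensities to be adjusted
    """
    slope, intercept = np.polyfit(np.log(red_ctrl), np.log(green_ctrl), deg=1)
    return np.exp(slope * np.log(np.asarray(red_probes)) + intercept)
```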
Viana, Andres G; Gratz, Kim L; Bierman, Karen L
2013-01-01
Temperamental vulnerabilities (e.g., behavioral inhibition, anxiety sensitivity) and cognitive biases (e.g., interpretive and judgment biases) may exacerbate feelings of stress and anxiety, particularly among late adolescents during the early years of college. The goal of the present study was to apply person-centered analyses to explore possible heterogeneity in the patterns of these four risk factors in late adolescence, and to examine associations with several anxiety outcomes (i.e., worry, anxiety symptoms, and trait anxiety). Cluster analyses in a college sample of 855 late adolescents revealed a Low-Risk group, along with four reliable clusters with distinct profiles of risk factors and anxiety outcomes (Inhibited, Sensitive, Cognitively-Biased, and Multi-Risk). Of the risk profiles, Multi-Risk youth experienced the highest levels of anxiety outcomes, whereas Inhibited youth experienced the lowest levels of anxiety outcomes. Sensitive and Cognitively-Biased youth experienced comparable levels of anxiety-related outcomes, despite different constellations of risk factors. Implications for interventions and future research are discussed.
McDermott, Máirtín S; Sharma, Rajeev
2017-12-01
The methods employed to measure behaviour in research testing the theories of reasoned action/planned behaviour (TRA/TPB) within the context of health behaviours have the potential to significantly bias findings. One bias yet to be examined in that literature is that due to common method variance (CMV). CMV introduces a variance in scores attributable to the method used to measure a construct, rather than the construct it represents. The primary aim of this study was to evaluate the impact of method bias on the associations of health behaviours with TRA/TPB variables. Data were sourced from four meta-analyses (177 studies). The method used to measure behaviour for each effect size was coded for susceptibility to bias. The moderating impact of method type was assessed using meta-regression. Method type significantly moderated the associations of intentions, attitudes and social norms with behaviour, but not that between perceived behavioural control and behaviour. The magnitude of the moderating effect of method type appeared consistent between cross-sectional and prospective studies, but varied across behaviours. The current findings strongly suggest that method bias significantly inflates associations in TRA/TPB research, and poses a potentially serious validity threat to the cumulative findings reported in that field.
Publication bias in dermatology systematic reviews and meta-analyses.
Atakpo, Paul; Vassar, Matt
2016-05-01
Systematic reviews and meta-analyses in dermatology provide high-level evidence for clinicians and policy makers that influences clinical decision making and treatment guidelines. One methodological problem with systematic reviews is the underrepresentation of unpublished studies. This problem is due in part to publication bias. Omission of statistically non-significant data from meta-analyses may result in overestimation of treatment effect sizes, which may lead to clinical consequences. Our goal was to assess whether systematic reviewers in dermatology evaluate and report publication bias. Further, we wanted to conduct our own evaluation of publication bias on meta-analyses that failed to do so. Our study considered systematic reviews and meta-analyses from ten dermatology journals from 2006 to 2016. A PubMed search was conducted, and all full-text articles that met our inclusion criteria were retrieved and coded by the primary author. 293 articles were included in our analysis. Additionally, we formally evaluated publication bias in meta-analyses that failed to do so using trim and fill and cumulative meta-analysis by precision methods. Publication bias was mentioned in 107 articles (36.5%) and was formally evaluated in 64 articles (21.8%). Visual inspection of a funnel plot was the most common method of evaluating publication bias. Publication bias was present in 45 articles (15.3%), not present in 57 articles (19.5%) and not determined in 191 articles (65.2%). Using the trim and fill method, 7 meta-analyses (33.33%) showed evidence of publication bias. Although the trim and fill method only found evidence of publication bias in 7 meta-analyses, the cumulative meta-analysis by precision method found evidence of publication bias in 15 meta-analyses (71.4%). Many of the reviews in our study did not mention or evaluate publication bias. Further, of the 42 articles that stated they followed PRISMA reporting guidelines, 19 (45.2%) evaluated for publication bias. In comparison to other studies, we found that systematic reviews in dermatology were less likely to evaluate for publication bias. Evaluating and reporting the likelihood of publication bias should be standard practice in systematic reviews when appropriate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
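For context, one common formal check of funnel plot asymmetry is Egger's regression test; it is related to, but distinct from, the trim and fill and cumulative meta-analysis by precision methods used in the study. A minimal sketch with hypothetical effect sizes and standard errors:

```python
import numpy as np
from scipy import stats

def egger_test(effects, std_errs):
    """Egger's test: regress standardized effects on precision.

    A non-zero intercept suggests funnel plot asymmetry, one possible
    signature of publication bias (though not proof of it).
    """
    effects = np.asarray(effects, dtype=float)
    std_errs = np.asarray(std_errs, dtype=float)
    precision = 1.0 / std_errs
    snd = effects / std_errs                       # standardized effects
    slope, intercept, _, _, _ = stats.linregress(precision, snd)
    # Standard error and p-value of the intercept
    n = len(effects)
    resid = snd - (intercept + slope * precision)
    resid_var = np.sum(resid ** 2) / (n - 2)
    sxx = np.sum((precision - precision.mean()) ** 2)
    se_intercept = np.sqrt(resid_var * (1.0 / n + precision.mean() ** 2 / sxx))
    p_intercept = 2 * stats.t.sf(abs(intercept / se_intercept), df=n - 2)
    return intercept, p_intercept

# Hypothetical study-level data (effect sizes and their standard errors)
print(egger_test([0.80, 0.60, 0.50, 0.45, 0.40, 0.35],
                 [0.40, 0.30, 0.25, 0.20, 0.15, 0.10]))
```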
Data harmonization of environmental variables: from simple to general solutions
NASA Astrophysics Data System (ADS)
Baume, O.
2009-04-01
European data platforms often contain measurements from different regional or national networks. As standards and protocols - e.g. type of measurement devices, sensors or measurement site classification, laboratory analysis and post-processing methods - vary between networks, discontinuities will appear when mapping the target variable at an international scale. Standardisation is generally a costly solution and does not allow classical statistical analysis of previously reported values. As an alternative, harmonization should be envisaged as an integrated step in mapping procedures across borders. In this paper, several harmonization solutions developed under the INTAMAP FP6 project are presented. The INTAMAP FP6 project is currently developing an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods to web-based implementations. Harmonization is often considered as a pre-processing step in the statistical data analysis workflow. If biases are assessed with little knowledge about the target variable - in particular when no explanatory covariate is integrated - a harmonization procedure along borders or between regionally overlapping networks may be adopted (Skøien et al., 2007). In this case, bias is estimated as the systematic difference between line or local predictions. On the other hand, when covariates can be included in spatial prediction, the harmonization step is integrated in the whole model estimation procedure and, therefore, is no longer an independent pre-processing step of the automatic mapping process (Baume et al., 2007). In this case, bias factors become integrated parameters of the geostatistical model and are estimated alongside the other model parameters. The harmonization methods developed within the INTAMAP project were first applied within the field of radiation, where the European Radiological Data Exchange Platform (EURDEP) - http://eurdep.jrc.ec.europa.eu/ - has been active for all member states for more than a decade (de Cort and de Vries, 1997). This database contains biases because of the different network processes used in data reporting (Bossew et al., 2007). In a comparison study, monthly averaged gamma dose measurements from eight European countries were harmonized using the methods described above. Baume et al. (2008) showed that both methods yield similar results and can detect and remove bias from the EURDEP database. To broaden the potential of the methods developed within the INTAMAP project, another application example taken from soil science is presented in this paper. The Carbon/Nitrogen (C/N) ratio of forest soils is one of the best predictors for evaluating soil functions, such as those relevant to climate change issues. Although soil samples were analyzed according to a common European laboratory method, Carré et al. (2008) concluded that systematic errors are introduced in the measurements due to calibration issues and instability of the sample. The application of the harmonization procedures showed that bias could be adequately removed, although the procedures have difficulty distinguishing real differences from bias.
Driven Metadynamics: Reconstructing Equilibrium Free Energies from Driven Adaptive-Bias Simulations
2013-01-01
We present a novel free-energy calculation method that constructively integrates two distinct classes of nonequilibrium sampling techniques, namely, driven (e.g., steered molecular dynamics) and adaptive-bias (e.g., metadynamics) methods. By employing nonequilibrium work relations, we design a biasing protocol with an explicitly time- and history-dependent bias that uses on-the-fly work measurements to gradually flatten the free-energy surface. The asymptotic convergence of the method is discussed, and several relations are derived for free-energy reconstruction and error estimation. Isomerization reaction of an atomistic polyproline peptide model is used to numerically illustrate the superior efficiency and faster convergence of the method compared with its adaptive-bias and driven components in isolation. PMID:23795244
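For reference, the prototypical nonequilibrium work relation underlying such schemes is the Jarzynski equality, which links the work measured along repeated driven trajectories to the equilibrium free-energy difference (background only, not the specific estimator derived in the paper):

```latex
\bigl\langle e^{-\beta W} \bigr\rangle = e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_{\mathrm{B}} T}
\quad\Longrightarrow\quad
\Delta F = -k_{\mathrm{B}} T \,\ln \bigl\langle e^{-\beta W} \bigr\rangle
```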
Force analysis of magnetic bearings with power-saving controls
NASA Technical Reports Server (NTRS)
Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.
1992-01-01
Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. For most operating conditions, the existence of the bias current requires more power than alternative methods that do not use conventional bias. Two such methods are examined which diminish or eliminate bias current. In the typical bias control scheme it is found that for a harmonic control force command into a voltage limited transconductance amplifier, the desired force output is obtained only up to certain combinations of force amplitude and frequency. Above these values, the force amplitude is reduced and a phase lag occurs. The power saving alternative control schemes typically exhibit such deficiencies at even lower command frequencies and amplitudes. To assess the severity of these effects, a time history analysis of the force output is performed for the bias method and the alternative methods. Results of the analysis show that the alternative approaches may be viable. The various control methods examined were mathematically modeled using nondimensionalized variables to facilitate comparison of the various methods.
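To see why a bias current linearizes the force, consider the textbook opposed-pair electromagnet model (a simplification of the actuators discussed above, with actuator constant k and nominal gap g; gap variation and saturation are neglected):

```latex
F = \frac{k\,(i_b + i_c)^2}{g^2} - \frac{k\,(i_b - i_c)^2}{g^2}
  = \frac{4\,k\,i_b}{g^2}\; i_c
```

The force is then proportional to the control current i_c, at the cost of the quiescent power dissipated by the bias current i_b in both coils, which is precisely what the power-saving alternatives try to avoid.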
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with lower biases in all seasons than QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
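A minimal sketch of the standard empirical quantile mapping baseline (the QMα-type approach described above, not the new QMβ variants), assuming daily precipitation arrays for an observed and a modelled series over a common calibration period:

```python
import numpy as np

def quantile_map(model_cal, obs_cal, model_new):
    """Empirical quantile mapping for bias correction.

    model_cal -- model precipitation over the calibration period
    obs_cal   -- observed precipitation over the calibration period
    model_new -- model values to correct (e.g., a projection period)

    Each new model value is assigned its empirical quantile in the model
    calibration distribution and replaced by the observed value at the same
    quantile. Values beyond the calibrated range are clamped to the endpoint
    quantiles by np.interp, one reason empirical methods struggle with new
    extremes, as the abstract discusses.
    """
    model_cal = np.sort(np.asarray(model_cal, dtype=float))
    obs_cal = np.sort(np.asarray(obs_cal, dtype=float))
    q_model = np.linspace(0.0, 1.0, model_cal.size)
    q_obs = np.linspace(0.0, 1.0, obs_cal.size)
    q_new = np.interp(model_new, model_cal, q_model)   # quantile of each new value
    return np.interp(q_new, q_obs, obs_cal)            # mapped onto observations
```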
Padoan, Andrea; Antonelli, Giorgia; Aita, Ada; Sciacovelli, Laura; Plebani, Mario
2017-10-26
The present study was prompted by the ISO 15189 requirements that medical laboratories should estimate measurement uncertainty (MU). The method used to estimate MU included the: a) identification of quantitative tests, b) classification of tests in relation to their clinical purpose, and c) identification of criteria to estimate the different MU components. Imprecision was estimated using long-term internal quality control (IQC) results of the year 2016, while external quality assessment schemes (EQAs) results obtained in the period 2015-2016 were used to estimate bias and bias uncertainty. A total of 263 measurement procedures (MPs) were analyzed. On the basis of test purpose, in 51 MPs imprecision only was used to estimate MU; in the remaining MPs, the bias component was not estimable for 22 MPs because EQAs results did not provide reliable statistics. For a total of 28 MPs, two or more MU values were calculated on the basis of analyte concentration levels. Overall, results showed that uncertainty of bias is a minor factor contributing to MU, the bias component being the most relevant contributor to all the studied sample matrices. The model chosen for MU estimation allowed us to derive a standardized approach for bias calculation, with respect to the fitness-for-purpose of test results. Measurement uncertainty estimation could readily be implemented in medical laboratories as a useful tool in monitoring the analytical quality of test results since they are calculated using a combination of both the long-term imprecision IQC results and bias, on the basis of EQAs results.
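A hedged sketch of one typical top-down combination of long-term IQC imprecision with EQA-derived bias (in the spirit of the Nordtest approach; the exact model used in the study may differ, and the numbers below are hypothetical):

```python
import math

def expanded_uncertainty(cv_within_lab, biases, u_targets, k=2.0):
    """Top-down expanded measurement uncertainty (all inputs in %).

    cv_within_lab -- long-term within-laboratory imprecision from IQC
    biases        -- relative biases observed against EQA target values
    u_targets     -- standard uncertainties of those EQA target values
    k             -- coverage factor (2 for ~95% coverage)
    """
    rms_bias = math.sqrt(sum(b * b for b in biases) / len(biases))
    u_cref = sum(u_targets) / len(u_targets)
    u_bias = math.sqrt(rms_bias ** 2 + u_cref ** 2)          # bias component
    return k * math.sqrt(cv_within_lab ** 2 + u_bias ** 2)   # expanded MU

# Hypothetical example: IQC CV 2.1%, four EQA rounds, target uncertainties 0.8%
print(round(expanded_uncertainty(2.1, [1.0, -1.5, 0.8, 1.2], [0.8] * 4), 2))
```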
Rong, Xing; Du, Yong; Frey, Eric C
2012-06-21
Quantitative Yttrium-90 ((90)Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging has shown great potential to provide reliable estimates of the (90)Y activity distribution for targeted radionuclide therapy dosimetry applications. One factor that potentially affects the reliability of the activity estimates is the choice of the acquisition energy window. In contrast to imaging conventional gamma photon emitters, where the acquisition energy windows are usually placed around photopeaks, there has been great variation in the choice of the acquisition energy window for (90)Y imaging due to the continuous and broad energy distribution of the bremsstrahlung photons. In quantitative imaging of conventional gamma photon emitters, previous methods for optimizing the acquisition energy window assumed unbiased estimators and used the variance in the estimates as a figure of merit (FOM). However, for situations, such as (90)Y imaging, where there are errors in the modeling of the image formation process used in the reconstruction, there will be bias in the activity estimates. In (90)Y bremsstrahlung imaging this will be especially important due to the high levels of scatter, multiple scatter, and collimator septal penetration and scatter. Thus, variance alone will not be a complete measure of the reliability of the estimates and hence is not a complete FOM. To address this, we first aimed to develop a new method to optimize the energy window that accounts for both the bias due to model-mismatch and the variance of the activity estimates. We applied this method to optimize the acquisition energy window for quantitative (90)Y bremsstrahlung SPECT imaging in microsphere brachytherapy. Since absorbed dose is defined as the absorbed energy from the radiation per unit mass of tissue, in this new method we proposed a mass-weighted root mean squared error of the volume of interest (VOI) activity estimates as the FOM. To calculate this FOM, two analytical expressions were derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical (90)Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated (90)Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than those of a FOM taking into account only the variance of the activity estimates, thus demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in terms of limiting the reliability of the activity estimates.
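One plausible way to write a mass-weighted root-mean-squared-error figure of merit of the kind described above is the following (the paper's exact expression may differ), with m_i the mass of VOI i, bias_i the model-mismatch bias of its activity estimate, and var_i the variance of that estimate:

```latex
\mathrm{FOM} = \sqrt{\frac{\sum_{i} m_i \left(\mathrm{bias}_i^{2} + \mathrm{var}_i\right)}
                          {\sum_{i} m_i}}
```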
Gram-Schmidt algorithms for covariance propagation
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1977-01-01
This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization P = U D U^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and coloured process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
Gram-Schmidt algorithms for covariance propagation
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1975-01-01
This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization P = U D U^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
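For background on the factorization itself (not the Thornton-Bierman time-update algorithm of these papers), a minimal sketch of computing U and D such that P = U D U^T for a symmetric positive-definite P:

```python
import numpy as np

def udu_factorize(P):
    """Factor a symmetric positive-definite P as U @ diag(d) @ U.T,
    with U unit upper triangular; written for clarity, not for the
    in-place efficiency used in practical filter implementations."""
    P = np.array(P, dtype=float)              # work on a copy
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, 0, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # Remove column j's contribution from the remaining leading submatrix
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    d[0] = P[0, 0]
    return U, d

P = np.array([[4.0, 2.0, 0.6],
              [2.0, 5.0, 1.0],
              [0.6, 1.0, 3.0]])
U, d = udu_factorize(P)
print(np.allclose(U @ np.diag(d) @ U.T, P))   # -> True
```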
NASA Astrophysics Data System (ADS)
Mahala, Pramila; Patel, Malkeshkumar; Gupta, Navneet; Kim, Joondong; Lee, Byung Ha
2018-05-01
Studying the performance-limiting parameters of Schottky devices is an urgent issue, which is addressed herein with a thermally stable silver nanowire (AgNW)-embedded metal oxide/p-Si Schottky device. Temperature- and bias-dependent junction interfacial properties of the AgNW-ITO/Si Schottky photoelectric device are reported. Current-voltage-temperature (I-V-T), capacitance-voltage-temperature (C-V-T) and impedance analyses have been carried out in the high-temperature region. The ideality factor and barrier height of the Schottky junction are assessed using the I-V-T characteristics and thermionic emission theory, revealing a decrease of the ideality factor and an increase of the barrier height with increasing temperature. The extracted values of the laterally homogeneous Schottky barrier height (ϕb) and ideality factor (n) are approximately 0.73 eV and 1.58, respectively. The series resistance (Rs) was assessed using Cheung's method and found to decrease with increasing temperature. A linear response of Rs of the AgNW-ITO/Si Schottky junction is observed with respect to the change in forward bias; i.e., from 0 to 0.7 V, Rs lies in the range of 36.12-36.43 Ω, changing at a rate (dRs/dV) of 1.44 Ω/V. Impedance spectroscopy is used to study the effect of bias voltage and temperature on the intrinsic Schottky properties that are responsible for the photoconversion efficiency. These systematic analyses are useful for AgNW-embedded Si solar cells or photoelectrochemical cells.
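For reference, the standard thermionic emission expressions and the Cheung relation typically used to extract n, ϕb, and Rs from forward I-V data are (textbook forms, not parameters specific to this device; A is the diode area and A* the effective Richardson constant):

```latex
I = I_0\!\left[\exp\!\left(\frac{q\,(V - I R_s)}{n k T}\right) - 1\right],
\qquad
I_0 = A\,A^{*}\,T^{2}\exp\!\left(-\frac{q\,\phi_b}{k T}\right),
\qquad
\frac{dV}{d(\ln I)} = I R_s + n\,\frac{k T}{q}
```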
Rosner, Bernard; Colditz, Graham A.
2011-01-01
Purpose Age at menopause, a major marker in the reproductive life, may bias results for evaluation of breast cancer risk after menopause. Methods We follow 38,948 premenopausal women in 1980 and identify 2,586 who reported hysterectomy without bilateral oophorectomy, and 31,626 who reported natural menopause during 22 years of follow-up. We evaluate risk factors for natural menopause, impute age at natural menopause for women reporting hysterectomy without bilateral oophorectomy, and estimate the hazard of reaching natural menopause in the next 2 years. We apply this imputed age at menopause both to increase sample size and to evaluate the relation between postmenopausal exposures and risk of breast cancer. Results Age, cigarette smoking, age at menarche, pregnancy history, body mass index, history of benign breast disease, and history of breast cancer were each significantly related to age at natural menopause; duration of oral contraceptive use and family history of breast cancer were not. The imputation increased the sample size substantially, and although some risk factors after menopause were weaker in the expanded model (height and alcohol use), the estimate for use of hormone therapy is less biased. Conclusions Imputing age at menopause increases sample size, broadens generalizability by making the results applicable to women with hysterectomy, and reduces bias. PMID:21441037
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
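A minimal sketch of drawing a probability-proportional-to-size (PPS) sample with replacement, the simplest form of the size-biased sampling idea mentioned above (unequal-probability designs without replacement are more involved); the sizes and attribute values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical tree basal areas (the "size" variable); selection probability is
# proportional to size, so large trees are deliberately over-represented.
sizes = np.array([0.05, 0.12, 0.40, 0.08, 0.25, 0.60])
p = sizes / sizes.sum()
sample = rng.choice(len(sizes), size=4, replace=True, p=p)

# Hansen-Hurwitz estimator of a population total: weight each draw by 1/(n*p_i)
# to undo the size bias.
y = np.array([1.0, 2.5, 9.0, 1.8, 5.5, 14.0])   # hypothetical per-tree volumes
estimate = np.mean(y[sample] / p[sample])
print(estimate, y.sum())                         # estimator targets the true total
```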
Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15
ERIC Educational Resources Information Center
Zhang, Jinming
2005-01-01
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
Bias of averages in life-cycle footprinting of infrastructure: truck and bus case studies.
Taptich, Michael N; Horvath, Arpad
2014-11-18
The life-cycle output (e.g., level of service) of infrastructure systems heavily influences their normalized environmental footprint. Many studies and tools calculate emission factors based on average productivity; however, the performance of these systems varies over time and space. We evaluate the appropriate use of emission factors based on average levels of service by comparing them to those reflecting a distribution of system outputs. For the provision of truck and bus services where fuel economy is assumed constant over levels of service, emission factor estimation biases, described by Jensen's inequality, always result in larger-than-expected environmental impacts (3%-400%) and depend strongly on the variability and skew of truck payloads and bus ridership. Well-to-wheel greenhouse gas emission factors for diesel trucks in California range from 87 to 1,500 g of CO2 equivalents per ton-km, depending on the size and type of trucks and the services performed. Along a bus route in San Francisco, well-to-wheel emission factors ranged between 53 and 940 g of CO2 equivalents per passenger-km. The use of biased emission factors can have profound effects on various policy decisions. If average emission rates must be used, reflecting a distribution of productivity can reduce emission factor biases.
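A tiny numerical illustration of the Jensen's inequality effect described above, with a hypothetical constant fuel-cycle emission per km and a right-skewed payload distribution: the mean of the per-ton-km factors exceeds the factor computed at the mean payload, so using the average payload understates the impact.

```python
import numpy as np

rng = np.random.default_rng(0)

emissions_per_km = 1.0                                        # hypothetical g CO2e/km
payloads = rng.lognormal(mean=1.0, sigma=0.8, size=10_000)    # tons, right-skewed

per_tonkm = emissions_per_km / payloads          # emission factor of each trip
mean_of_factors = per_tonkm.mean()               # expectation over the distribution
factor_at_mean = emissions_per_km / payloads.mean()

print(f"mean of per-ton-km factors : {mean_of_factors:.3f}")
print(f"factor at the mean payload : {factor_at_mean:.3f}")
print(f"understatement when using the average: "
      f"{100 * (mean_of_factors / factor_at_mean - 1):.0f}%")
```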
Taguchi method of experimental design in materials education
NASA Technical Reports Server (NTRS)
Weiser, Martin W.
1993-01-01
Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
ERIC Educational Resources Information Center
Wood, John; Kiggins, Ryan; Kickham, Kenneth
2017-01-01
Within the broader literature concerned with potential bias in student measures of instructor effectiveness, two broad types of bias have been shown to operate in a course: internal and external. Missing is an assessment of the relative influence of each bias type in the classroom. Do internal or external types of bias matter more or less to…
How does bias correction of regional climate model precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Wang, B.; Vaze, J.; Evans, J. P.
2015-02-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the methods are small in the modelling experiments here (and as reported in the literature), mainly due to the substantial corrections required and inconsistent errors over time (non-stationarity). The errors in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome limitations of the RCM in simulating precipitation sequence, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2018-06-01
The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.
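For reference, realized volatility at sampling interval Δ is the sum of squared intraday returns; the bias discussed above arises because microstructure noise inflates this sum as Δ shrinks (a standard definition, not the specific model of the paper):

```latex
RV_t^{(\Delta)} = \sum_{i=1}^{1/\Delta} r_{t,i}^{2},
\qquad r_{t,i} = \log P_{t,i} - \log P_{t,i-1}
```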
An entropy correction method for unsteady full potential flows with strong shocks
NASA Technical Reports Server (NTRS)
Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.
1986-01-01
An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.
Information bias in health research: definition, pitfalls, and adjustment methods
Althubaiti, Alaa
2016-01-01
As with other fields, medical sciences are subject to different sources of bias. While understanding sources of bias is a key element for drawing valid conclusions, bias in health research continues to be a very sensitive issue that can affect the focus and outcome of investigations. Information bias, otherwise known as misclassification, is one of the most common sources of bias that affect the validity of health research. It originates from the approach that is utilized to obtain or confirm study measurements. This paper seeks to raise awareness of information bias in observational and experimental research study designs as well as to enrich discussions concerning bias problems. Specifying the types of bias can be essential to limit its effects, and the use of adjustment methods might serve to improve clinical evaluation and health care practice. PMID:27217764
Coleman, Brandon G; Johnson, Thomas M; Erley, Kenneth J; Topolski, Richard; Rethman, Michael; Lancaster, Douglas D
2016-10-01
In recent years, evidence-based dentistry has become the ideal for research, academia, and clinical practice. However, barriers to implementation are many, including the complexity of interpreting conflicting evidence as well as difficulties in accessing it. Furthermore, many proponents of evidence-based care seem to assume that good evidence consistently exists and that clinicians can and will objectively evaluate data so as to apply the best evidence to individual patients' needs. The authors argue that these shortcomings may mislead many clinicians and that students should be adequately prepared to cope with some of the more complex issues surrounding evidence-based practice. Cognitive biases and heuristics shape every aspect of our lives, including our professional behavior. This article reviews literature from medicine, psychology, and behavioral economics to explore the barriers to implementing evidence-based dentistry. Internal factors include biases that affect clinical decision making: hindsight bias, optimism bias, survivor bias, and blind-spot bias. External factors include publication bias, corporate bias, and lack of transparency that may skew the available evidence in the peer-reviewed literature. Raising awareness of how these biases exert subtle influence on decision making and patient care can lead to a more nuanced discussion of addressing and overcoming barriers to evidence-based practice.
Prochwicz, Katarzyna; Kłosowska, Joanna
2018-04-13
Negative emotions and cognitive biases are important factors underlying psychotic symptoms and psychotic-like experiences (PLEs); however, it is not clear whether these factors interact when they influence psychotic phenomena. The aim of our study was to investigate whether psychosis-related cognitive biases moderate the relationship between negative affective states, i.e. anxiety and depression, and psychotic-like experiences. The study sample contains 251 participants who have never been diagnosed with psychiatric disorders. Anxiety, depression, cognitive biases, and psychotic-like experiences were assessed with self-report questionnaires. A moderation analysis was performed to examine the relationship between the study variables. The analyses revealed that the link between anxiety and positive PLEs is moderated by External Attribution bias, whereas the relationship between depression and positive PLEs is moderated by Attention to Threat bias. Attributional bias was also found to moderate the association between depression and negative subclinical symptoms; Jumping to Conclusions bias served as a moderator in the link between anxiety and depression and negative PLEs. Further studies in clinical samples are required to verify the moderating role of individual cognitive biases on the relationship between negative emotional states and full-blown psychotic symptoms. Copyright © 2018 Elsevier B.V. All rights reserved.
Pendleton, G.W.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation in detection probabilities and lack of independence among sample points can bias estimates and measures of precision. All of these factors should be considered when using point count methods.
Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying
2011-01-01
Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular tissue (dense tissue). A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and the generated bias field is then smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to those based on the N3- and FCM-alone corrected images and on images corrected with another method, coherent local intensity clustering (CLIC). The segmentation quality based on the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them. In the first reading session, the radiologist found an (N3+FCM > N3 > FCM) ranking in 17 breasts, an (N3+FCM > N3 = FCM) ranking in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3+FCM > FCM) in 2 breasts. The results of the second reading session were similar. The performance in each pairwise Wilcoxon signed-rank test is significant, showing N3+FCM superior to both N3 and FCM, and N3 superior to FCM. The performance of the new N3+FCM algorithm was comparable to that of CLIC, showing equivalent quality in 57/60 breasts. Choosing an appropriate bias field correction method is a very important preprocessing step to allow an accurate segmentation of fibroglandular tissues based on breast MRI for quantitative measurement of breast density. Both the proposed N3+FCM algorithm and CLIC yield satisfactory results.
College students' perceived risk of sexual victimization and the role of optimistic bias.
Saling Untied, Amy; Dulaney, Cynthia L
2015-05-01
Many college women believe that their chances of experiencing a sexual assault are less than their peers. This phenomenon, called optimistic bias, has been hypothesized to be one important element to address in sexual assault risk reduction and awareness programs aimed at reducing women's chances of experiencing a sexual assault. The present study examined the role that participants' (N = 89) perceived similarity to a narrator (portraying a sexual assault survivor) describing an assault plays in reducing this bias. The age of the narrator was manipulated (similar or dissimilar to age of participants) with the aim of assessing whether the program could produce reductions in optimistic bias for those participants who watched a video of someone similar to them in age. A significant interaction between pre- and post-program and age similarity indicated a significant decrease in optimistic bias from pre- to posttest for the similar group. Furthermore, an exploratory analysis indicated optimistic bias for White participants decreased from pre- to posttest, whereas optimistic bias for the Black participants increased. These results suggest that some factors such as age similarity may reduce optimistic bias in sexual assault risk reduction and awareness programs. However, a race dissimilarity may increase optimistic bias. Thus, more research is needed to understand the factors that affect optimistic bias with regard to sexual assault awareness. © The Author(s) 2014.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bieler, Noah S.; Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch
2014-11-28
In a recent article [Bieler et al., J. Chem. Theory Comput. 10, 3006–3022 (2014)], we introduced a combination of the λ-dynamics (λD) approach for calculating alchemical free-energy differences and of the local-elevation umbrella-sampling (LEUS) memory-based biasing method to enhance the sampling along the alchemical coordinate. The combined scheme, referred to as λ-LEUS, was applied to the perturbation of hydroquinone to benzene in water as a test system, and found to represent an improvement over thermodynamic integration (TI) in terms of sampling efficiency at equivalent accuracy. However, the preoptimization of the biasing potential required in the λ-LEUS method requires "filling up" all the basins in the potential of mean force. This introduces a non-productive pre-sampling time that is system-dependent, and generally exceeds the corresponding equilibration time in a TI calculation. In this letter, a remedy is proposed to this problem, termed the slow growth memory guessing (SGMG) approach. Instead of initializing the biasing potential to zero at the start of the preoptimization, an approximate potential of mean force is estimated from a short slow growth calculation, and its negative used to construct the initial memory. Considering the same test system as in the preceding article, it is shown that the application of SGMG in λ-LEUS permits a reduction of the preoptimization time by about a factor of four.
NASA Astrophysics Data System (ADS)
Bieler, Noah S.; Hünenberger, Philippe H.
2014-11-01
In a recent article [Bieler et al., J. Chem. Theory Comput. 10, 3006-3022 (2014)], we introduced a combination of the λ-dynamics (λD) approach for calculating alchemical free-energy differences and of the local-elevation umbrella-sampling (LEUS) memory-based biasing method to enhance the sampling along the alchemical coordinate. The combined scheme, referred to as λ-LEUS, was applied to the perturbation of hydroquinone to benzene in water as a test system, and found to represent an improvement over thermodynamic integration (TI) in terms of sampling efficiency at equivalent accuracy. However, the preoptimization of the biasing potential required in the λ-LEUS method requires "filling up" all the basins in the potential of mean force. This introduces a non-productive pre-sampling time that is system-dependent, and generally exceeds the corresponding equilibration time in a TI calculation. In this letter, a remedy is proposed to this problem, termed the slow growth memory guessing (SGMG) approach. Instead of initializing the biasing potential to zero at the start of the preoptimization, an approximate potential of mean force is estimated from a short slow growth calculation, and its negative used to construct the initial memory. Considering the same test system as in the preceding article, it is shown that the application of SGMG in λ-LEUS permits a reduction of the preoptimization time by about a factor of four.
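A hedged sketch of the SGMG initialization idea for a one-dimensional alchemical coordinate: integrate dU/dλ recorded along a single slow-growth sweep to obtain an approximate potential of mean force, and use its negative as the initial biasing memory. The grid size and input arrays are illustrative assumptions, not the authors' implementation; the sweep is assumed to move monotonically in λ.

```python
# Hedged sketch of slow growth memory guessing (SGMG): initial bias = -PMF.
import numpy as np

def initial_memory_from_slow_growth(lambdas, dU_dlambda, n_bins=41):
    """lambdas, dU_dlambda: values recorded along one monotonic slow-growth sweep."""
    # Trapezoidal (TI-style) estimate of the PMF A(lambda) along the sweep
    pmf = np.concatenate(([0.0], np.cumsum(
        np.diff(lambdas) * 0.5 * (dU_dlambda[1:] + dU_dlambda[:-1]))))
    # Re-grid the PMF onto the memory grid used for biasing
    grid = np.linspace(lambdas.min(), lambdas.max(), n_bins)
    pmf_on_grid = np.interp(grid, lambdas, pmf)
    # The initial biasing memory is approximately the negative of the PMF
    return grid, -(pmf_on_grid - pmf_on_grid.min())
```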
Woestehoff, Skye A; Meissner, Christian A
2016-10-01
Research on jurors' perceptions of confession evidence suggests that jurors may not be sensitive to factors that can influence the reliability of a confession. Jurors' decisions tend not to be influenced by situational pressures to confess, which suggests that jurors commit the correspondence bias when evaluating a confession. One method to potentially increase sensitivity and counteract the correspondence bias is by highlighting a motivation other than guilt for the defendant's confession. We conducted 3 experiments to evaluate jurors' sensitivity to false confession risk factors. Participants read a trial transcript that varied the presence of false confession risk factors within an interrogation. Some participants also read testimony that presented an alternative motivation for the confession (expert testimony, Experiments 1 and 3; defendant testimony, Experiment 2). Across 3 experiments, participants were generally able to distinguish between interrogation practices that can produce a false confession, regardless of the presence or absence of expert or defendant testimony. Experiment 3 explored whether participants' attributions for the confessor's motivation were affected by interrogative pressure and expert testimony, and whether these attributions affected verdicts. Participants' reluctance to convict when false confession risk factors were present was associated with situational, rather than dispositional, attributions regarding the defendant's motivation to confess. It is possible that increased knowledge is responsible for participants' improved sensitivity to false confession risk factors. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W
2015-06-01
Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods (quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards) to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction, indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods, and sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.
2018-05-01
Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.
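For contrast with the joint method discussed above, the following is a minimal sketch of an independent, per-variable bias correction via empirical quantile mapping; it corrects each marginal distribution but, unlike JBC, leaves the precipitation-temperature dependence untouched. Function and variable names are illustrative.

```python
# Hedged sketch of univariate (independent) quantile-mapping bias correction.
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Map model values onto the observed distribution, quantile by quantile."""
    ranks = np.searchsorted(np.sort(model_hist), model_future, side="right")
    quantiles = np.clip(ranks / float(len(model_hist) + 1), 0.0, 1.0)
    return np.quantile(obs_hist, quantiles)

# Applied separately to precipitation and temperature, this corrects each
# marginal distribution but not their joint (P-T) correlation.
```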
Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model
NASA Astrophysics Data System (ADS)
Ahlgren, K.; Li, X.
2017-12-01
Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. The accuracy of each geoid model provides an additional metric to assess the performance of each bias estimation method. The geoid model accuracies are assessed using the two GSVS lines and GPS-leveling data across the United States.
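A deliberately simplified sketch of the survey-level correction step: compare observed gravity values with reference values assembled from satellite, airborne, and topographic information, and remove one robust offset per survey. The wavelet-level decomposition used in the study is omitted, and the choice of a median offset is an assumption made here purely for illustration.

```python
# Hedged sketch: one additive bias estimate per historical gravity survey.
import numpy as np

def survey_bias(observed_mgal, reference_mgal):
    """Estimate a single additive bias (mGal) for an entire survey."""
    residuals = np.asarray(observed_mgal) - np.asarray(reference_mgal)
    return np.median(residuals)          # robust to individual outliers

def apply_correction(observed_mgal, bias):
    return np.asarray(observed_mgal) - bias
```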
Hall, William J.; Lee, Kent M.; Merino, Yesenia M.; Thomas, Tainayah W.; Payne, B. Keith; Eng, Eugenia; Day, Steven H.; Coyne-Beasley, Tamera
2015-01-01
Background. In the United States, people of color face disparities in access to health care, the quality of care received, and health outcomes. The attitudes and behaviors of health care providers have been identified as one of many factors that contribute to health disparities. Implicit attitudes are thoughts and feelings that often exist outside of conscious awareness, and thus are difficult to consciously acknowledge and control. These attitudes are often automatically activated and can influence human behavior without conscious volition. Objectives. We investigated the extent to which implicit racial/ethnic bias exists among health care professionals and examined the relationships between health care professionals’ implicit attitudes about racial/ethnic groups and health care outcomes. Search Methods. To identify relevant studies, we searched 10 computerized bibliographic databases and used a reference harvesting technique. Selection Criteria. We assessed eligibility using double independent screening based on a priori inclusion criteria. We included studies if they sampled existing health care providers or those in training to become health care providers, measured and reported results on implicit racial/ethnic bias, and were written in English. Data Collection and Analysis. We included a total of 15 studies for review and then subjected them to double independent data extraction. Information extracted included the citation, purpose of the study, use of theory, study design, study site and location, sampling strategy, response rate, sample size and characteristics, measurement of relevant variables, analyses performed, and results and findings. We summarized study design characteristics, and categorized and then synthesized substantive findings. Main Results. Almost all studies used cross-sectional designs, convenience sampling, US participants, and the Implicit Association Test to assess implicit bias. Low to moderate levels of implicit racial/ethnic bias were found among health care professionals in all but 1 study. These implicit bias scores are similar to those in the general population. Levels of implicit bias against Black, Hispanic/Latino/Latina, and dark-skinned people were relatively similar across these groups. Although some associations between implicit bias and health care outcomes were nonsignificant, results also showed that implicit bias was significantly related to patient–provider interactions, treatment decisions, treatment adherence, and patient health outcomes. Implicit attitudes were more often significantly related to patient–provider interactions and health outcomes than treatment processes. Conclusions. Most health care providers appear to have implicit bias in terms of positive attitudes toward Whites and negative attitudes toward people of color. Future studies need to employ more rigorous methods to examine the relationships between implicit bias and health care outcomes. Interventions targeting implicit attitudes among health care professionals are needed because implicit bias may contribute to health disparities for people of color. PMID:26469668
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Lary, D. J.; Gencaga, D.; Albayrak, A.; Wei, J.
2013-08-01
Measurements made by the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and the globally distributed Aerosol Robotic Network (AERONET) are compared. Comparison of the aerosol optical depth measurements from the two datasets shows that there are biases between the two data products. In this paper, we present a general framework for identifying the set of variables most responsible for the observed bias, which might be associated with measurement conditions such as the solar and sensor zenith angles, the solar and sensor azimuths, the scattering angle, and the surface reflectivity at the various measured wavelengths. Specifically, we performed the analysis for the remote sensing Aqua-Land data set and used a machine learning technique, a neural network in this case, to perform multivariate regression between the ground-truth and training data sets. Finally, we used the mutual information between the observed and predicted values as the measure of similarity to identify the most relevant set of variables. The search is brute force, as all possible variable combinations must be considered. The computation involves a large amount of number crunching, which we implemented as a job-parallel program.
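A hedged sketch of the search strategy described above: every small subset of candidate predictors is used to train a neural network, and each subset is scored by the mutual information between observed and predicted values. The network size, subset-size limit, and scoring on the training data itself are illustrative simplifications.

```python
# Hedged sketch: brute-force predictor-subset search scored by mutual information.
from itertools import combinations
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.feature_selection import mutual_info_regression

def best_subset(X, y, feature_names, max_size=3):
    """X: (n_samples, n_features) candidate predictors; y: observed bias."""
    best = (None, -np.inf)
    for k in range(1, max_size + 1):
        for subset in combinations(range(X.shape[1]), k):
            cols = list(subset)
            model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                 random_state=0).fit(X[:, cols], y)
            pred = model.predict(X[:, cols])
            # Mutual information between predicted and observed bias
            score = mutual_info_regression(pred.reshape(-1, 1), y)[0]
            if score > best[1]:
                best = ([feature_names[i] for i in cols], score)
    return best   # in practice a held-out set would be used for scoring
```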
A geostatistical approach to data harmonization - Application to radioactivity exposure data
NASA Astrophysics Data System (ADS)
Baume, O.; Skøien, J. O.; Heuvelink, G. B. M.; Pebesma, E. J.; Melles, S. J.
2011-06-01
Environmental issues such as air, groundwater pollution and climate change are frequently studied at spatial scales that cross boundaries between political and administrative regions. It is common for different administrations to employ different data collection methods. If these differences are not taken into account in spatial interpolation procedures then biases may appear and cause unrealistic results. The resulting maps may show misleading patterns and lead to wrong interpretations. Also, errors will propagate when these maps are used as input to environmental process models. In this paper we present and apply a geostatistical model that generalizes the universal kriging model such that it can handle heterogeneous data sources. The associated best linear unbiased estimation and prediction (BLUE and BLUP) equations are presented and it is shown that these lead to harmonized maps from which estimated biases are removed. The methodology is illustrated with an example of country bias removal in a radioactivity exposure assessment for four European countries. The application also addresses multicollinearity problems in data harmonization, which arise when both artificial bias factors and natural drifts are present and cannot easily be distinguished. Solutions for handling multicollinearity are suggested and directions for further investigations proposed.
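An illustrative sketch, not the universal-kriging formulation of the paper: per-country additive biases are estimated jointly with a simple linear spatial drift by ordinary least squares, after which the estimated biases can be removed before mapping. The multicollinearity issue raised above (bias factors versus natural drift) is not addressed by this naive version.

```python
# Hedged sketch: joint estimation of a linear spatial drift and per-country offsets.
import numpy as np

def estimate_country_biases(values, x, y, country_idx, n_countries):
    """values: measurements; x, y: coordinates; country_idx: integers 0..n_countries-1."""
    trend = np.column_stack([np.ones_like(x), x, y])       # linear spatial drift
    dummies = np.eye(n_countries)[country_idx][:, 1:]      # country 0 = reference
    design = np.hstack([trend, dummies])
    coef, *_ = np.linalg.lstsq(design, values, rcond=None)
    # Offsets relative to the reference country (bias of country 0 fixed at 0)
    return np.concatenate([[0.0], coef[3:]])
```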
NASA Astrophysics Data System (ADS)
Zhang, Wenlei; Hirai, Yoshikazu; Tsuchiya, Toshiyuki; Tabata, Osamu
2018-06-01
Tensile strength and strength distribution in a microstructure of single crystal silicon (SCS) were improved significantly by coating the surface with a diamond-like carbon (DLC) film. To explore the influence of coating parameters and the mechanism of film fracture, SCS microstructure surfaces (120 × 4 × 5 μm3) were fully coated by plasma enhanced chemical vapor deposition (PECVD) of a DLC at five different bias voltages. After the depositions, Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), thermal desorption spectrometry (TDS), surface profilometry, atomic force microscope (AFM) measurement, and nanoindentation methods were used to study the chemical and mechanical properties of the deposited DLC films. Tensile test indicated that the average strength of coated samples was 13.2-29.6% higher than that of the SCS sample, and samples fabricated with a -400 V bias voltage were strongest. The fracture toughness of the DLC film was the dominant factor in the observed tensile strength. Deviations in strength were reduced with increasingly negative bias voltage. The effect of residual stress on the tensile properties is discussed in detail.
Müller, Jörg M; Furniss, Tilman
2013-11-30
The often-reported low agreement about child psychopathology between multiple informants has led to various suggestions about how to address discrepant ratings. Among the factors discussed as potentially lowering agreement are informant credibility, reliability, and psychopathology, the last of which is of interest in this paper. We tested three different models, namely the accuracy model, the distortion model, and an integrated so-called combined model, that conceptualize parental ratings used to assess child psychopathology. The data comprise ratings of child psychopathology from multiple informants (mother, therapist and kindergarten teacher) and ratings of maternal psychopathology. The children were patients in a preschool psychiatry unit (N=247). The results from structural equation modeling show that maternal ratings of child psychopathology were biased by maternal psychopathology (distortion model). Based on this statistical background, we suggest a method to adjust biased maternal ratings. We illustrate the maternal bias by comparing the mothers' ratings to expert ratings (combined kindergarten teacher and therapist ratings) and show that the correction equation increases the agreement between maternal and expert ratings. We conclude that this approach may help to reduce misclassification of preschool children as 'clinical' on the basis of biased maternal ratings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Hui; Blencowe, M. P.; Armour, A. D.; Rimberg, A. J.
2017-09-01
We give a semiclassical analysis of the average photon number as well as the photon number variance (Fano factor F) for a Josephson junction (JJ) embedded microwave cavity system, where the JJ is subject to a fluctuating (i.e., noisy) bias voltage with finite dc average. Through the ac Josephson effect, the dc voltage bias drives the effectively nonlinear microwave cavity mode into an amplitude squeezed state (F < 1), as has been established previously [Armour et al., Phys. Rev. Lett. 111, 247001 (2013), 10.1103/PhysRevLett.111.247001], but bias noise acts to degrade this squeezing. We find that the sensitivity of the Fano factor to bias voltage noise depends qualitatively on which stable fixed point regime the system is in for the corresponding classical nonlinear steady-state dynamics. Furthermore, we show that the impact of voltage bias noise is most significant when the cavity is excited to states with large average photon number.
Systematic error of diode thermometer.
Iskrenovic, Predrag S
2009-08-01
Semiconductor diodes are often used for measuring temperature. The forward voltage across a diode decreases approximately linearly with increasing temperature. The method applied is usually the simplest one: a constant direct current flows through the diode, and the voltage is measured at the diode terminals. The direct current that flows through the diode, putting it into operating mode, also heats it up. The increase in temperature of the diode sensor, i.e., the systematic error due to self-heating, depends predominantly on the intensity of the current, and also on other factors. This paper presents measurements of the systematic error due to heating by the forward-bias current. The measurements were made on several diodes over a wide range of bias current intensities.
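A back-of-the-envelope sketch of the self-heating error discussed above: the temperature rise of the sensor is roughly the dissipated electrical power times the junction-to-ambient thermal resistance. The thermal resistance value used below is a placeholder, not a figure from the paper.

```python
# Hedged sketch: estimate the self-heating systematic error of a diode sensor.
def self_heating_error(forward_current_a, forward_voltage_v,
                       thermal_resistance_k_per_w=300.0):
    power_w = forward_current_a * forward_voltage_v
    return power_w * thermal_resistance_k_per_w   # temperature rise in kelvin

# Example: 1 mA at 0.6 V with an assumed 300 K/W gives about 0.18 K of error.
print(self_heating_error(1e-3, 0.6))
```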
Gender differences in autoimmunity associated with exposure to environmental factors
Pollard, K. Michael
2011-01-01
Autoimmunity is thought to result from a combination of genetics, environmental triggers, and stochastic events. Gender is also a significant risk factor with many diseases exhibiting a female bias. Although the role of environmental triggers, especially medications, in eliciting autoimmunity is well established less is known about the interplay between gender, the environment and autoimmunity. This review examines the contribution of gender in autoimmunity induced by selected chemical, physical and biological agents in humans and animal models. Epidemiological studies reveal that environmental factors can be associated with a gender bias in human autoimmunity. However many studies show that the increased risk of autoimmunity is often influenced by occupational exposure or other gender biased activities. Animal studies, although often prejudiced by the exclusive use of female animals, reveal that gender bias can be strain specific suggesting an interaction between sex chromosome complement and background genes. This observation has important implications because it argues that within a gender biased disease there may be individuals in which gender does not contribute to autoimmunity. Exposure to environmental factors, which encompasses everything around us, adds an additional layer of complexity. Understanding how the environment influences the relationship between sex chromosome complement and innate and adaptive immune responses will be essential in determining the role of gender in environmentally-induced autoimmunity. PMID:22137891
Evaluation of cancer mortality in a cohort of workers exposed to low-level radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lea, C.S.
1995-12-01
The purpose of this dissertation was to re-analyze existing data to explore methodologic approaches that may determine whether excess cancer mortality in the ORNL cohort can be explained by time-related factors not previously considered; grouping of cancer outcomes; selection bias due to choice of method selected to incorporate an empirical induction period; or the type of statistical model chosen.
Cognitive debiasing 1: origins of bias and theory of debiasing.
Croskerry, Pat; Singhal, Geeta; Mamede, Sílvia
2013-10-01
Numerous studies have shown that diagnostic failure depends upon a variety of factors. Psychological factors are fundamental in influencing the cognitive performance of the decision maker. In this first of two papers, we discuss the basics of reasoning and the Dual Process Theory (DPT) of decision making. The general properties of the DPT model, as it applies to diagnostic reasoning, are reviewed. A variety of cognitive and affective biases are known to compromise the decision-making process. They mostly appear to originate in the fast intuitive processes of Type 1 that dominate (or drive) decision making. Type 1 processes work well most of the time but they may open the door for biases. Removing or at least mitigating these biases would appear to be an important goal. We will also review the origins of biases. The consensus is that there are two major sources: innate, hard-wired biases that developed in our evolutionary past, and acquired biases established in the course of development and within our working environments. Both are associated with abbreviated decision making in the form of heuristics. Other work suggests that ambient and contextual factors may create high risk situations that dispose decision makers to particular biases. Fatigue, sleep deprivation and cognitive overload appear to be important determinants. The theoretical basis of several approaches towards debiasing is then discussed. All share a common feature that involves a deliberate decoupling from Type 1 intuitive processing and moving to Type 2 analytical processing so that eventually unexamined intuitive judgments can be submitted to verification. This decoupling step appears to be the critical feature of cognitive and affective debiasing.
Meta-assessment of bias in science
Fanelli, Daniele; Costas, Rodrigo; Ioannidis, John P. A.
2017-01-01
Numerous biases are believed to affect the scientific literature, but their actual prevalence across disciplines is unknown. To gain a comprehensive picture of the potential imprint of bias in science, we probed for the most commonly postulated bias-related patterns and risk factors, in a large random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied widely across fields and was overall relatively small. However, we consistently observed a significant risk of small, early, and highly cited studies to overestimate effects and of studies not published in peer-reviewed journals to underestimate them. We also found at least partial confirmation of previous evidence suggesting that US studies and early studies might report more extreme effects, although these effects were smaller and more heterogeneously distributed across meta-analyses and disciplines. Authors publishing at high rates and receiving many citations were, overall, not at greater risk of bias. However, effect sizes were likely to be overestimated by early-career researchers, those working in small or long-distance collaborations, and those responsible for scientific misconduct, supporting hypotheses that connect bias to situational factors, lack of mutual control, and individual integrity. Some of these patterns and risk factors might have modestly increased in intensity over time, particularly in the social sciences. Our findings suggest that, besides one being routinely cautious that published small, highly-cited, and earlier studies may yield inflated results, the feasibility and costs of interventions to attenuate biases in the literature might need to be discussed on a discipline-specific and topic-specific basis. PMID:28320937
Capiau, Sara; Wilk, Leah S; De Kesel, Pieter M M; Aalders, Maurice C G; Stove, Christophe P
2018-02-06
The hematocrit (Hct) effect is one of the most important hurdles currently preventing more widespread implementation of quantitative dried blood spot (DBS) analysis in a routine context. Indeed, the Hct may affect both the accuracy of DBS methods as well as the interpretation of DBS-based results. We previously developed a method to determine the Hct of a DBS based on its hemoglobin content using noncontact diffuse reflectance spectroscopy. Despite the ease with which the analysis can be performed (i.e., mere scanning of the DBS) and the good results that were obtained, the method did require a complicated algorithm to derive the total hemoglobin content from the DBS's reflectance spectrum. As the total hemoglobin was calculated as the sum of oxyhemoglobin, methemoglobin, and hemichrome, the three main hemoglobin derivatives formed in DBS upon aging, the reflectance spectrum needed to be unmixed to determine the quantity of each of these derivatives. We now simplified the method by only using the reflectance at a single wavelength, located at a quasi-isosbestic point in the reflectance curve. At this wavelength, assuming 1-to-1 stoichiometry of the aging reaction, the reflectance is insensitive to the hemoglobin degradation and only scales with the total amount of hemoglobin and, hence, the Hct. This simplified method was successfully validated. At each quality control level as well as at the limits of quantitation (i.e., 0.20 and 0.67) bias, intra- and interday imprecision were within 10%. Method reproducibility was excellent based on incurred sample reanalysis and surpassed the reproducibility of the original method. Furthermore, the influence of the volume spotted, the measurement location within the spot, as well as storage time and temperature were evaluated, showing no relevant impact of these parameters. Application to 233 patient samples revealed a good correlation between the Hct determined on whole blood and the predicted Hct determined on venous DBS. The bias obtained with Bland and Altman analysis was -0.015 and the limits of agreement were -0.061 and 0.031, indicating that the simplified, noncontact Hct prediction method even outperforms the original method. In addition, using caffeine as a model compound, it was demonstrated that this simplified Hct prediction method can effectively be used to implement a Hct-dependent correction factor to DBS-based results to alleviate the Hct bias.
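Two small hedged sketches related to the workflow above: a single-wavelength calibration mapping the reflectance at the quasi-isosbestic point to hematocrit, and the Bland-Altman summary used to assess agreement. The linear calibration form is an assumption made here for illustration, not the authors' published calibration function.

```python
# Hedged sketches: single-wavelength Hct calibration and Bland-Altman agreement.
import numpy as np

def fit_hct_calibration(reflectance, hct_reference):
    """Least-squares linear map from isosbestic-point reflectance to Hct (assumed form)."""
    slope, intercept = np.polyfit(reflectance, hct_reference, deg=1)
    return lambda r: slope * np.asarray(r) + intercept

def bland_altman(predicted_hct, reference_hct):
    diff = np.asarray(predicted_hct) - np.asarray(reference_hct)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)   # bias and limits of agreement
```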
NASA Astrophysics Data System (ADS)
Uranishi, Katsushige; Ikemori, Fumikazu; Nakatsubo, Ryohei; Shimadera, Hikari; Kondo, Akira; Kikutani, Yuki; Asano, Katsuyoshi; Sugata, Seiji
2017-10-01
This study presented a comparison approach with multiple source apportionment methods to identify which sectors of emission data have large biases. The source apportionment methods for the comparison approach included both receptor and chemical transport models, which are widely used to quantify the impacts of emission sources on fine particulate matter of less than 2.5 μm in diameter (PM2.5). We used daily chemical component concentration data in the year 2013, including data for water-soluble ions, elements, and carbonaceous species of PM2.5 at 11 sites in the Kinki-Tokai district in Japan in order to apply the Positive Matrix Factorization (PMF) model for the source apportionment. Seven PMF factors of PM2.5 were identified with the temporal and spatial variation patterns and also retained features of the sites. These factors comprised two types of secondary sulfate, road transportation, heavy oil combustion by ships, biomass burning, secondary nitrate, and soil and industrial dust, accounting for 46%, 17%, 7%, 14%, 13%, and 3% of the PM2.5, respectively. The multiple-site data enabled a comprehensive identification of the PM2.5 sources. For the same period, source contributions were estimated by air quality simulations using the Community Multiscale Air Quality model (CMAQ) with the brute-force method (BFM) for four source categories. Both models provided consistent results for the following three of the four source categories: secondary sulfates, road transportation, and heavy oil combustion sources. For these three target categories, the models' agreement was supported by the small differences and high correlations between the CMAQ/BFM- and PMF-estimated source contributions to the concentrations of PM2.5, SO42-, and EC. In contrast, contributions of the biomass burning sources apportioned by CMAQ/BFM were much lower than and little correlated with those captured by the PMF model, indicating large uncertainties in the biomass burning emissions used in the CMAQ simulations. Thus, this comparison approach using the two antithetical models enables us to identify which sectors of emission data have large biases for improvement of future air quality simulations.
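A hedged sketch using scikit-learn's non-negative matrix factorization as a stand-in for PMF: a (samples x species) concentration matrix is factored into factor contributions and factor profiles. True PMF additionally weights each matrix entry by its measurement uncertainty, which this stand-in ignores.

```python
# Hedged sketch: NMF as a simplified stand-in for Positive Matrix Factorization.
from sklearn.decomposition import NMF

def apportion(concentrations, n_factors=7):
    """concentrations: (n_samples, n_species) array of PM2.5 component data."""
    model = NMF(n_components=n_factors, init="nndsvda", max_iter=1000,
                random_state=0)
    contributions = model.fit_transform(concentrations)   # (samples, factors)
    profiles = model.components_                           # (factors, species)
    return contributions, profiles
```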
McConnell, Mark D; Monroe, Adrian P; Burger, Loren Wes; Martin, James A
2017-02-01
Advances in understanding avian nesting ecology are hindered by a prevalent lack of agreement between nest-site characteristics and fitness metrics such as nest success. We posit this is a result of inconsistent and improper timing of nest-site vegetation measurements. Therefore, we evaluated how the timing of nest vegetation measurement influences the estimated effects of vegetation structure on nest survival. We simulated phenological changes in nest-site vegetation growth over a typical nesting season and modeled how the timing of measuring that vegetation, relative to nest fate, creates bias in conclusions regarding its influence on nest survival. We modeled the bias associated with four methods of measuring nest-site vegetation: Method 1-measuring at nest initiation, Method 2-measuring at nest termination regardless of fate, Method 3-measuring at nest termination for successful nests and at estimated completion for unsuccessful nests, and Method 4-measuring at nest termination regardless of fate while also accounting for initiation date. We quantified and compared bias for each method for varying simulated effects, ranked models for each method using AIC, and calculated the proportion of simulations in which each model (measurement method) was selected as the best model. Our results indicate that the risk of drawing an erroneous or spurious conclusion was present in all methods but greater with Method 2 which is the most common method reported in the literature. Methods 1 and 3 were similarly less biased. Method 4 provided no additional value as bias was similar to Method 2 for all scenarios. While Method 1 is seldom practical to collect in the field, Method 3 is logistically practical and minimizes inherent bias. Implementation of Method 3 will facilitate estimating the effect of nest-site vegetation on survival, in the least biased way, and allow reliable conclusions to be drawn.
Sucunza, Federico; Danilewicz, Daniel; Cremer, Marta; Andriolo, Artur; Zerbini, Alexandre N
2018-01-01
Estimation of visibility bias is critical to accurately compute abundance of wild populations. The franciscana, Pontoporia blainvillei, is considered the most threatened small cetacean in the southwestern Atlantic Ocean. Aerial surveys are considered the most effective method to estimate abundance of this species, but many existing estimates have been considered unreliable because they lack proper estimation of correction factors for visibility bias. In this study, helicopter surveys were conducted to determine surfacing-diving intervals of franciscanas and to estimate availability for aerial platforms. Fifteen hours were flown and 101 groups of 1 to 7 franciscanas were monitored, resulting in a sample of 248 surface-dive cycles. The mean surfacing interval and diving interval times were 16.10 seconds (SE = 9.74) and 39.77 seconds (SE = 29.06), respectively. Availability was estimated at 0.39 (SE = 0.01), a value 16-46% greater than estimates computed from diving parameters obtained from boats or from land. Generalized mixed-effects models were used to investigate the influence of biological and environmental predictors on the proportion of time franciscana groups are visually available to be seen from an aerial platform. These models revealed that group size was the main factor influencing the proportion at surface. The use of negatively biased estimates of availability results in overestimation of abundance, leads to overly optimistic assessments of extinction probabilities and to potentially ineffective management actions. This study demonstrates that estimates of availability must be computed from suitable platforms to ensure proper conservation decisions are implemented to protect threatened species such as the franciscana.
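A small illustration of the availability idea using the interval means reported above. The naive surface/(surface + dive) ratio is shown alongside a window-corrected form of the kind used for aerial surveys, in which a group also counts as available while it remains detectable during the aircraft's passage; the 10 s viewing window is purely an assumed value, and the calculation is not intended to reproduce the study's estimate of 0.39.

```python
# Illustrative availability calculation from reported surface/dive interval means.
surface_s, dive_s = 16.10, 39.77

naive_availability = surface_s / (surface_s + dive_s)                    # ~0.29
window_s = 10.0                                                          # assumed value
window_corrected = (surface_s + window_s) / (surface_s + dive_s)         # ~0.47

print(naive_availability, window_corrected)
```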
Self-referent information processing in individuals with bipolar spectrum disorders
Molz Adams, Ashleigh; Shapero, Benjamin G.; Pendergast, Laura H.; Alloy, Lauren B.; Abramson, Lyn Y.
2014-01-01
Background Bipolar spectrum disorders (BSDs) are common and impairing, which has led to an examination of risk factors for their development and maintenance. Historically, research has examined cognitive vulnerabilities to BSDs derived largely from the unipolar depression literature. Specifically, theorists propose that dysfunctional information processing guided by negative self-schemata may be a risk factor for depression. However, few studies have examined whether BSD individuals also show self-referent processing biases. Methods This study examined self-referent information processing differences between 66 individuals with and 58 individuals without a BSD in a young adult sample (age M = 19.65, SD = 1.74; 62% female; 47% Caucasian). Repeated measures multivariate analysis of variance (MANOVA) was conducted to examine multivariate effects of BSD diagnosis on 4 self-referent processing variables (self-referent judgments, response latency, behavioral predictions, and recall) in response to depression-related and nondepression-related stimuli. Results Bipolar individuals endorsed and recalled more negative and fewer positive self-referent adjectives, as well as made more negative and fewer positive behavioral predictions. Many of these information-processing biases were partially, but not fully, mediated by depressive symptoms. Limitations Our sample was not a clinical or treatment-seeking sample, so we cannot generalize our results to clinical BSD samples. No participants had a bipolar I disorder at baseline. Conclusions This study provides further evidence that individuals with BSDs exhibit a negative self-referent information processing bias. This may mean that those with BSDs have selective attention and recall of negative information about themselves, highlighting the need for attention to cognitive biases in therapy. PMID:24074480
Danilewicz, Daniel; Cremer, Marta; Andriolo, Artur; Zerbini, Alexandre N.
2018-01-01
Estimation of visibility bias is critical to accurately compute abundance of wild populations. The franciscana, Pontoporia blainvillei, is considered the most threatened small cetacean in the southwestern Atlantic Ocean. Aerial surveys are considered the most effective method to estimate abundance of this species, but many existing estimates have been considered unreliable because they lack proper estimation of correction factors for visibility bias. In this study, helicopter surveys were conducted to determine surfacing-diving intervals of franciscanas and to estimate availability for aerial platforms. Fifteen hours were flown and 101 groups of 1 to 7 franciscanas were monitored, resulting in a sample of 248 surface-dive cycles. The mean surfacing interval and diving interval times were 16.10 seconds (SE = 9.74) and 39.77 seconds (SE = 29.06), respectively. Availability was estimated at 0.39 (SE = 0.01), a value 16–46% greater than estimates computed from diving parameters obtained from boats or from land. Generalized mixed-effects models were used to investigate the influence of biological and environmental predictors on the proportion of time franciscana groups are visually available to be seen from an aerial platform. These models revealed that group size was the main factor influencing the proportion at surface. The use of negatively biased estimates of availability results in overestimation of abundance, leads to overly optimistic assessments of extinction probabilities and to potentially ineffective management actions. This study demonstrates that estimates of availability must be computed from suitable platforms to ensure proper conservation decisions are implemented to protect threatened species such as the franciscana. PMID:29534086
Bagher, Amina M; Laprairie, Robert B; Kelly, Melanie E M; Denovan-Wright, Eileen M
2018-01-01
G protein-coupled receptors (GPCRs) interact with multiple intracellular effector proteins such that different ligands may preferentially activate one signal pathway over others, a phenomenon known as signaling bias. Signaling bias can be quantified to optimize drug selection for preclinical research. Here, we describe moderate-throughput methods to quantify signaling bias of known and novel compounds. In the example provided, we describe a method to define cannabinoid-signaling bias in a cell culture model of Huntington's disease (HD). Decreasing type 1 cannabinoid receptor (CB1) levels is correlated with chorea and cognitive deficits in HD. There is evidence that elevating CB1 levels and/or signaling may be beneficial for HD patients while decreasing CB1 levels and/or signaling may be detrimental. Recent studies have found that Gαi/o-biased CB1 agonists activate extracellular signal-regulated kinase (ERK), increase CB1 protein levels, and improve viability of cells expressing mutant huntingtin. In contrast, CB1 agonists that are β-arrestin1-biased were found to reduce CB1 protein levels and cell viability. Measuring agonist bias of known and novel CB1 agonists will provide important data that predict CB1-specific agonists that might be beneficial in animal models of HD and, following animal testing, in HD patients. This method can also be applied to study signaling bias for other GPCRs.
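A hedged sketch of one common, generic way to quantify signaling bias (the delta-delta-log(Emax/EC50) approach relative to a reference agonist); it is shown only as an illustration and is not necessarily the exact protocol described in the chapter above.

```python
# Hedged sketch: generic delta-delta-log(Emax/EC50) bias factor between two pathways.
import math

def log_emax_over_ec50(emax, ec50):
    return math.log10(emax / ec50)

def bias_factor(test_pathway1, test_pathway2, ref_pathway1, ref_pathway2):
    """Each argument is an (emax, ec50) tuple for one agonist in one pathway,
    e.g. pathway 1 = G-protein/ERK signaling, pathway 2 = beta-arrestin1 recruitment."""
    d_test = log_emax_over_ec50(*test_pathway1) - log_emax_over_ec50(*test_pathway2)
    d_ref = log_emax_over_ec50(*ref_pathway1) - log_emax_over_ec50(*ref_pathway2)
    return 10 ** (d_test - d_ref)   # >1: biased toward pathway 1 vs. the reference agonist
```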
Identification method of laser gyro error model under changing physical field
NASA Astrophysics Data System (ADS)
Wang, Qingqing; Niu, Zhenzhong
2018-04-01
In this paper, the mechanisms by which temperature, temperature change rate, and temperature gradient influence inertial devices are studied. A second-order model of the zero bias and a third-order model of the calibration factor of a laser gyro under temperature variation are derived. A calibration scheme for the temperature error is designed, and the experiment is carried out. Two methods, stepwise regression analysis and a BP neural network, are used to identify the parameters of the temperature error model, and the effectiveness of both methods is demonstrated by the temperature error compensation.
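A minimal sketch of the regression-style identification: a second-order polynomial for the zero bias and a third-order polynomial for the calibration (scale) factor are fitted as functions of temperature. Plain least squares stands in here for the stepwise-regression and BP-network identification described above, and the temperature-rate and gradient terms are omitted for brevity.

```python
# Hedged sketch: polynomial temperature-error models for a laser gyro.
import numpy as np

def fit_bias_model(temperature, zero_bias):
    return np.polyfit(temperature, zero_bias, deg=2)      # second-order bias model

def fit_scale_factor_model(temperature, scale_factor):
    return np.polyfit(temperature, scale_factor, deg=3)   # third-order scale-factor model

def compensate(raw_rate, temperature, bias_coeffs, scale_coeffs):
    bias = np.polyval(bias_coeffs, temperature)
    scale = np.polyval(scale_coeffs, temperature)
    return (raw_rate - bias) / scale
```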
Old, L.; Wojtak, R.; Mamon, G. A.; ...
2015-03-26
Our paper is the second in a series in which we perform an extensive comparison of various galaxy-based cluster mass estimation techniques that utilize the positions, velocities and colours of galaxies. Our aim is to quantify the scatter, systematic bias and completeness of cluster masses derived from a diverse set of 25 galaxy-based methods using two contrasting mock galaxy catalogues based on a sophisticated halo occupation model and a semi-analytic model. Analysing 968 clusters, we find a wide range in the rms errors in log M200c delivered by the different methods (0.18–1.08 dex, i.e. a factor of ~1.5–12), with abundance-matching and richness methods providing the best results, irrespective of the input model assumptions. In addition, certain methods produce a significant number of catastrophic cases where the mass is under- or overestimated by a factor greater than 10. Given the steeply falling high-mass end of the cluster mass function, we recommend that richness- or abundance-matching-based methods are used in conjunction with these methods as a sanity check for studies selecting high-mass clusters. We also see a stronger correlation of the recovered to input number of galaxies for both catalogues in comparison with the group/cluster mass; however, this does not guarantee that the correct member galaxies are being selected. Finally, we did not observe significantly higher scatter for either mock galaxy catalogue. These results have implications for cosmological analyses that utilize the masses, richnesses, or abundances of clusters, which have different uncertainties when different methods are used.
Neural Network and Nearest Neighbor Algorithms for Enhancing Sampling of Molecular Dynamics.
Galvelis, Raimondas; Sugita, Yuji
2017-06-13
The free energy calculations of complex chemical and biological systems with molecular dynamics (MD) are inefficient due to multiple local minima separated by high-energy barriers. The minima can be escaped using an enhanced sampling method such as metadynamics, which applies a bias (i.e., importance sampling) along a set of collective variables (CVs), but the maximum number of CVs (or dimensions) is severely limited. We propose a high-dimensional bias potential method (NN2B) based on two machine learning algorithms: the nearest neighbor density estimator (NNDE) and the artificial neural network (ANN) for the bias potential approximation. The bias potential is constructed iteratively from short biased MD simulations, accounting for correlation among CVs. Our method is capable of achieving ergodic sampling and calculating the free energy of polypeptides with bias potentials of up to eight dimensions.
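A hedged sketch of the two ingredients named above: a k-nearest-neighbor density estimate along the collective variables, converted to a bias that fills the visited minima, and a small neural network fitted to that bias so it generalizes across the high-dimensional CV space. The network size, k, and kT are illustrative choices, and the iterative update over successive short simulations is not shown.

```python
# Hedged sketch: k-NN density estimate -> bias values -> neural-network fit.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPRegressor

def fit_bias_potential(cv_samples, k=10, kT=2.5):
    """cv_samples: (n_samples, n_cvs) collective-variable values from short biased MD."""
    n, d = cv_samples.shape
    nn = NearestNeighbors(n_neighbors=k + 1).fit(cv_samples)
    dist, _ = nn.kneighbors(cv_samples)
    r_k = dist[:, -1]                                        # distance to k-th neighbor
    log_density = np.log(k / n) - d * np.log(r_k + 1e-12)    # up to an additive constant
    bias_values = kT * (log_density - log_density.max())     # highest where sampling is densest
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(cv_samples, bias_values)
    return model   # model.predict(new_cvs) returns the approximate bias potential
```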
SU-E-T-525: Ionization Chamber Perturbation in Flattening Filter Free Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czarnecki, D; Voigts-Rhetz, P von; Zink, K
2015-06-15
Purpose: Changing the characteristics of a photon beam by mechanically removing the flattening filter may impact the dose response of ionization chambers. Thus, perturbation factors of cylindrical ionization chambers in conventional and flattening filter free photon beams were calculated by Monte Carlo simulations. Methods: The EGSnrc/BEAMnrc code system was used for all Monte Carlo calculations. BEAMnrc models of nine different linear accelerators with and without flattening filter were used to create realistic photon sources. Monte Carlo based calculations to determine the fluence perturbations due to the presence of the chamber's components, the different material of the sensitive volume (air instead of water), as well as the volume effect were performed with the user code egs-chamber. Results: Stem, central electrode, wall, density and volume perturbation factors for linear accelerators with and without flattening filter were calculated as a function of the beam quality specifier TPR20/10. A bias between the perturbation factors as a function of TPR20/10 for flattening filter free beams and conventional linear accelerators could not be observed for the perturbations caused by the components of the ionization chamber and the sensitive volume. Conclusion: The results indicate that the well-known small bias between the beam quality correction factor as a function of TPR20/10 for the flattening filter free and conventional linear accelerators is not caused by the geometry of the detector but rather by the material of the sensitive volume. This suggests that the bias for flattening filter free photon fields is only caused by the different material of the sensitive volume (air instead of water).
Smoking and mortality in stroke survivors: can we eliminate the paradox?
Levine, Deborah A; Walter, James M; Karve, Sudeep J; Skolarus, Lesli E; Levine, Steven R; Mulhorn, Kristine A
2014-07-01
Many studies have suggested that smoking does not increase mortality in stroke survivors. Index event bias, a sample selection bias, potentially explains this paradoxical finding. Therefore, we compared all-cause, cardiovascular disease (CVD), and cancer mortality by cigarette smoking status among stroke survivors using methods to account for index event bias. Among 5797 stroke survivors of 45 years or older who responded to the National Health Interview Survey years 1997-2004, an annual, population-based survey of community-dwelling US adults, linked to the National Death Index, we estimated all-cause, CVD, and cancer mortality by smoking status using Cox proportional regression and propensity score analysis to account for demographic, socioeconomic, and clinical factors. Mean follow-up was 4.5 years. From 1997 to 2004, 18.7% of stroke survivors smoked. There were 1988 deaths in this stroke survivor cohort, with 50% of deaths because of CVD and 15% because of cancer. Current smokers had an increased risk of all-cause mortality (hazard ratio [HR], 1.36; 95% confidence interval [CI], 1.14-1.63) and cancer mortality (HR, 3.83; 95% CI, 2.48-5.91) compared with never smokers, after controlling for demographic, socioeconomic, and clinical factors. Current smokers had an increased risk of CVD mortality controlling for age and sex (HR, 1.29; 95% CI, 1.01-1.64), but this risk did not persist after controlling for socioeconomic and clinical factors (HR, 1.15; 95% CI, .88-1.50). Stroke survivors who smoke have an increased risk of all-cause mortality, which is largely because of cancer mortality. Socioeconomic and clinical factors explain stroke survivors' higher risk of CVD mortality associated with smoking. Published by Elsevier Inc.
Hendrick, Elizabeth M; Tino, Vincent R; Hanna, Steven R; Egan, Bruce A
2013-07-01
The U.S. Environmental Protection Agency (EPA) plume volume molar ratio method (PVMRM) and the ozone limiting method (OLM) are in the AERMOD model to predict the 1-hr average NO2/NO(x) concentration ratio. These ratios are multiplied by the AERMOD predicted NO(x) concentration to predict the 1-hr average NO2 concentration. This paper first briefly reviews PVMRM and OLM and points out some scientific parameterizations that could be improved (such as specification of relative dispersion coefficients) and then discusses an evaluation of the PVMRM and OLM methods as implemented in AERMOD using a new data set. While AERMOD has undergone many model evaluation studies in its default mode, PVMRM and OLM are nondefault options, and to date only three NO2 field data sets have been used in their evaluations. Here AERMOD/PVMRM and AERMOD/OLM codes are evaluated with a new data set from a northern Alaskan village with a small power plant. Hourly pollutant concentrations (NO, NO2, ozone) as well as meteorological variables were measured at a single monitor 500 m from the plant. Power plant operating parameters and emissions were calculated based on hourly operator logs. Hourly observations covering 1 yr were considered, but the evaluations only used hours when the wind was in a 60 degrees sector including the monitor and when concentrations were above a threshold. PVMRM is found to have little bias in predictions of the C(NO2)/C(NO(x)) ratio, which mostly ranged from 0.2 to 0.4 at this site. OLM overpredicted the ratio. AERMOD overpredicts the maximum NO(x) concentration but has an underprediction bias for lower concentrations. AERMOD/PVMRM overpredicts the maximum C(NO2) by about 50%, while AERMOD/OLM overpredicts by a factor of 2. For 381 hours evaluated, there is a relative mean bias in C(NO2) predictions of near zero for AERMOD/PVMRM, while the relative mean bias reflects a factor of 2 overprediction for AERMOD/OLM. This study was initiated because the new stringent 1-hr NO2 NAAQS has prompted modelers to more widely use the PVMRM and OLM methods for conversion of NO(x) to NO2 in the AERMOD regulatory model. To date these methods have been evaluated with a limited number of data sets. This study identified a new data set of ambient pollutant and meteorological monitoring near an isolated power plant in Wainwright, Alaska. To supplement the existing evaluations, this new data were used to evaluate PVMRM and OLM. This new data set has been and will be made available to other scientists for future investigations.
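For reference, a hedged sketch of the OLM screening formula as it is commonly stated for concentrations in ppb: the NO2 formed is limited either by the NOx available (with a default in-stack NO2/NOx ratio of 0.1) or by the ambient ozone available to oxidize NO. The exact regulatory form and unit conversions should be taken from the EPA guidance rather than from this illustration.

```python
# Hedged sketch of the ozone limiting method (OLM), concentrations in ppb.
def olm_no2_ppb(nox_ppb, ozone_ppb, in_stack_no2_ratio=0.1):
    direct_no2 = in_stack_no2_ratio * nox_ppb
    convertible_no = (1.0 - in_stack_no2_ratio) * nox_ppb
    return direct_no2 + min(convertible_no, ozone_ppb)

# Example: 100 ppb NOx with 30 ppb background ozone -> 10 + 30 = 40 ppb NO2.
print(olm_no2_ppb(100.0, 30.0))
```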
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Ji; Ishak, Mustapha; Lin, Weikang
Intrinsic alignments (IA) of galaxies have been recognized as one of the most serious contaminants to weak lensing. These systematics need to be isolated and mitigated in order for ongoing and future lensing surveys to reach their full potential. The IA self-calibration (SC) method was shown in previous studies to be able to reduce the GI contamination by up to a factor of 10 for the 2-point and 3-point correlations. The SC method does not require the assumption of an IA model in its working and can extract the GI signal from the same photo-z survey, offering the possibility to test and understand structure formation scenarios and their relationship to IA models. In this paper, we study the effects of the IA SC mitigation method on the precision and accuracy of cosmological parameter constraints from the future cosmic shear surveys LSST, WFIRST and Euclid. We perform analytical and numerical calculations to estimate the loss of precision and the residual bias in the best-fit cosmological parameters after the self-calibration is performed. We take into account uncertainties from photometric redshifts and the galaxy bias. We find that the confidence contours are slightly inflated from applying the SC method itself, while a significant increase is due to the inclusion of the photo-z uncertainties. The bias of cosmological parameters is reduced from several-σ, when IA is not corrected for, to below 1-σ after SC is applied. These numbers are comparable to those resulting from applying the method of marginalizing over IA model parameters, despite the fact that the two methods operate very differently. We conclude that implementing the SC for these future cosmic-shear surveys will not only allow one to efficiently mitigate the GI contaminant but also help to understand their modeling and link to structure formation.
NASA Astrophysics Data System (ADS)
Moghim, S.; Hsu, K.; Bras, R. L.
2013-12-01
General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results that will have an impact on their uses. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations during a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF, Li et al. 2010). The proposed method is tested with Community Climate System Model (CCSM3) outputs using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled and observed temperature. The results indicate that EDCDFANN has the potential to remove the biases of the model outputs.
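A hedged sketch of the equidistant CDF matching (EDCDF) step referenced above (Li et al. 2010): each future model value is shifted by the difference between the observed and historical-model quantiles evaluated at that value's quantile in the future-model distribution. The ANN surrogate part of EDCDFANN is not shown.

```python
# Hedged sketch of equidistant CDF matching (EDCDF) bias correction.
import numpy as np

def edcdf_correct(model_future, model_hist, obs_hist):
    q = (np.searchsorted(np.sort(model_future), model_future, side="right")
         / float(len(model_future) + 1))
    return (model_future
            + np.quantile(obs_hist, q)       # observed quantile at the same probability
            - np.quantile(model_hist, q))    # minus the historical-model quantile
```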
Wang, Yinan; Kong, Feng; Huang, Lijie; Liu, Jia
2016-10-01
Self-esteem is a widely studied construct in psychology that is typically measured by the Rosenberg Self-Esteem Scale (RSES). However, a series of cross-sectional and longitudinal studies have suggested that a simple and widely used unidimensional factor model does not provide an adequate explanation of RSES responses due to method effects. To identify the neural correlates of the method effect, we sought to determine whether and how method effects were associated with the RSES and investigate the neural basis of these effects. Two hundred and eighty Chinese college students (130 males; mean age = 22.64 years) completed the RSES and underwent magnetic resonance imaging (MRI). Behaviorally, method effects were linked to both positively and negatively worded items in the RSES. Neurally, the right amygdala volume negatively correlated with the negative method factor, while the hippocampal volume positively correlated with the general self-esteem factor in the RSES. The neural dissociation between the general self-esteem factor and negative method factor suggests that there are different neural mechanisms underlying them. The amygdala is involved in modulating negative affectivity; therefore, the current study sheds light on the nature of method effects that are related to self-report with a mix of positively and negatively worded items. © 2015 Wiley Periodicals, Inc.
The Driver Behaviour Questionnaire as accident predictor; A methodological re-meta-analysis.
Af Wåhlberg, A E; Barraclough, P; Freeman, J
2015-12-01
The Manchester Driver Behaviour Questionnaire (DBQ) is the most commonly used self-report tool in traffic safety research and applied settings. It has been claimed that the violation factor of this instrument predicts accident involvement, which was supported by a previous meta-analysis. However, that analysis did not test for methodological effects, or include unpublished results. The present study re-analysed studies on prediction of accident involvement from DBQ factors, including lapses, and many unpublished effects. Tests of various types of dissemination bias and common method variance were undertaken. Outlier analysis showed that some effects were probably not reliable data, but excluding them did not change the results. For correlations between violations and crashes, tendencies for published effects to be larger than unpublished ones and for effects to decrease over time were observed, but were not significant. Also, using the mean of accidents as proxy for effect indicated that studies where effects for violations are not reported have smaller effect sizes. These differences indicate dissemination bias. Studies using self-reported accidents as dependent variables had much larger effects than those using recorded accident data. Also, zero-order correlations were larger than partial correlations controlled for exposure. Similarly, violations/accidents effects were strong only when there was also a strong correlation between accidents and exposure. Overall, the true effect is probably very close to zero (r<.07) for violations versus traffic accident involvement, depending upon which tendencies are controlled for. Methodological factors and dissemination bias have inflated the published effect sizes of the DBQ. Strong evidence of various artefactual effects is apparent. A greater level of care should be taken if the DBQ continues to be used in traffic safety research. Also, validation of self-reports should be more comprehensive in the future, taking into account the possibility of common method variance. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
Bias against research on gender bias.
Cislak, Aleksandra; Formanowicz, Magdalena; Saguy, Tamar
2018-01-01
The bias against women in academia is a documented phenomenon that has had detrimental consequences, not only for women, but also for the quality of science. First, gender bias in academia affects female scientists, resulting in their underrepresentation in academic institutions, particularly in higher ranks. The second type of gender bias in science relates to some findings applying only to male participants, which produces biased knowledge. Here, we identify a third potentially powerful source of gender bias in academia: the bias against research on gender bias. In a bibliometric investigation covering a broad range of social sciences, we analyzed published articles on gender bias and race bias and established that articles on gender bias are funded less often and published in journals with a lower Impact Factor than articles on comparable instances of social discrimination. This result suggests the possibility of an underappreciation of the phenomenon of gender bias and related research within the academic community. Addressing this meta-bias is crucial for the further examination of gender inequality, which severely affects many women across the world.
NASA Technical Reports Server (NTRS)
Zhang, Zhibo; Meyer, Kerry G.; Platnick, Steven; Oreopoulos, Lazaros; Lee, Dongmin; Yu, Hongbin
2014-01-01
This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It addresses the overlap of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure while also accounting for subgrid-scale variations of aerosols. The method is computationally efficient because of its use of grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table based on radiative transfer calculations. We verify that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous (approximately 1:30 PM local time) shortwave DRE of above-cloud aerosol (ACA) that generally agrees with more rigorous pixel-level computation within 4 percent. We also estimate the impact of potential CALIOP aerosol optical depth (AOD) retrieval bias of ACA on DRE. We find that the regional and seasonal mean instantaneous DRE of ACA over the southeast Atlantic Ocean would increase, from the original value of 6.4 W m(-2) based on operational CALIOP AOD, to 9.6 W m(-2) if CALIOP AOD retrievals are biased low by a factor of 1.5 (Meyer et al., 2013) and further to 30.9 W m(-2) if CALIOP AOD retrievals are biased low by a factor of 5 as suggested by Jethva et al. (2014). In contrast, the instantaneous ACA radiative forcing efficiency (RFE) remains relatively invariant in all cases at about 53 W m(-2) AOD(-1), suggesting a near-linear relation between the instantaneous DRE and AOD. We also compute the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global oceans based on 4 years of CALIOP and MODIS data. We find that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds. While we demonstrate our method using CALIOP and MODIS data, it can also be extended to other satellite data sets, as well as climate model outputs.
An Approach to Addressing Selection Bias in Survival Analysis
Carlin, Caroline S.; Solid, Craig A.
2014-01-01
This work proposes a frailty model that accounts for non-random treatment assignment in survival analysis. Using Monte Carlo simulation, we found that estimated treatment parameters from our proposed endogenous selection survival model (esSurv) closely parallel the consistent two-stage residual inclusion (2SRI) results, while offering computational and interpretive advantages. The esSurv method greatly enhances computational speed relative to 2SRI by eliminating the need for bootstrapped standard errors, and generally results in smaller standard errors than those estimated by 2SRI. In addition, esSurv explicitly estimates the correlation of unobservable factors contributing to both treatment assignment and the outcome of interest, providing an interpretive advantage over the residual parameter estimate in the 2SRI method. Comparisons with commonly used propensity score methods and with a model that does not account for non-random treatment assignment show clear bias in these methods that is not mitigated by increased sample size. We illustrate using actual dialysis patient data comparing mortality of patients with mature arteriovenous grafts for venous access to mortality of patients with grafts placed but not yet ready for use at the initiation of dialysis. We find strong evidence of endogeneity (with estimate of correlation in unobserved factors ρ̂ = 0.55), and estimate a mature-graft hazard ratio of 0.197 in our proposed method, with a similar 0.173 hazard ratio using 2SRI. The 0.630 hazard ratio from a frailty model without a correction for the non-random nature of treatment assignment illustrates the importance of accounting for endogeneity. PMID:24845211
NASA Astrophysics Data System (ADS)
Alharbi, Raied; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan
2018-01-01
Precipitation is a key input variable for hydrological and climate studies. Rain gauges are capable of providing reliable precipitation measurements at point scale. However, the uncertainty of rain measurements increases when the rain gauge network is sparse. Satellite-based precipitation estimations appear to be an alternative source of precipitation measurements, but they are influenced by systematic bias. In this study, a method for removing the bias from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) over a region where the rain gauge network is sparse is investigated. The method consists of monthly empirical quantile mapping, climate classification, and an inverse distance weighting method. Daily PERSIANN-CCS is selected to test the capability of the method for removing the bias over Saudi Arabia during the period of 2010 to 2016. The first six years (2010-2015) are used for calibration and 2016 is used for validation. The results show that, during the validation year, the yearly correlation coefficient was enhanced by 12% and the yearly mean bias was reduced by 93%. The root mean square error was reduced by 73% during the validation year. The correlation coefficient, the mean bias, and the root mean square error show that the proposed method removes the bias in PERSIANN-CCS effectively, and that it can be applied to other regions where the rain gauge network is sparse.
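As one small piece of this chain, the sketch below shows inverse distance weighting of hypothetical monthly gauge/satellite correction factors onto ungauged grid cells; the quantile-mapping and climate-classification steps are not reproduced, and all coordinates and factors are invented.

```python
import numpy as np

def idw(xy_gauges, values, xy_targets, power=2.0):
    """Inverse distance weighting of gauge-based correction factors to grid cells."""
    xy_gauges = np.asarray(xy_gauges, float)
    xy_targets = np.asarray(xy_targets, float)
    values = np.asarray(values, float)
    out = np.empty(len(xy_targets))
    for k, pt in enumerate(xy_targets):
        d = np.hypot(*(xy_gauges - pt).T)
        if np.any(d < 1e-9):                      # target coincides with a gauge
            out[k] = values[np.argmin(d)]
        else:
            w = 1.0 / d**power
            out[k] = np.sum(w * values) / np.sum(w)
    return out

# monthly multiplicative bias factors (gauge / satellite) at four gauges (illustrative)
gauges = [(46.7, 24.6), (21.5, 39.2), (26.3, 50.2), (18.3, 42.7)]   # (lat, lon)
factors = [1.3, 0.8, 1.1, 0.9]
grid_cells = [(30.0, 40.0), (25.0, 45.0)]
print(idw(gauges, factors, grid_cells))
```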
Model-Based Control of Observer Bias for the Analysis of Presence-Only Data in Ecology
Warton, David I.; Renner, Ian W.; Ramp, Daniel
2013-01-01
Presence-only data, where information is available concerning species presence but not species absence, are subject to bias due to observers being more likely to visit and record sightings at some locations than others (hereafter “observer bias”). In this paper, we describe and evaluate a model-based approach to accounting for observer bias directly – by modelling presence locations as a function of known observer bias variables (such as accessibility variables) in addition to environmental variables, then conditioning on a common level of bias to make predictions of species occurrence free of such observer bias. We implement this idea using point process models with a LASSO penalty, a new presence-only method related to maximum entropy modelling, that implicitly addresses the “pseudo-absence problem” of where to locate pseudo-absences (and how many). The proposed method of bias-correction is evaluated using systematically collected presence/absence data for 62 plant species endemic to the Blue Mountains near Sydney, Australia. It is shown that modelling and controlling for observer bias significantly improves the accuracy of predictions made using presence-only data, and usually improves predictions as compared to pseudo-absence or “inventory” methods of bias correction based on absences from non-target species. Future research will consider the potential for improving the proposed bias-correction approach by estimating the observer bias simultaneously across multiple species. PMID:24260167
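The published approach uses point process models with a LASSO penalty; as a hedged stand-in, the snippet below fits an ordinary logistic model with an environmental covariate plus an accessibility (observer-bias) covariate to simulated records, then predicts with accessibility held at a common level so that the mapped probabilities reflect environment rather than observer effort. All names and values are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
elevation = rng.normal(size=n)               # environmental covariate
dist_to_road = rng.exponential(1.0, size=n)  # observer-bias (accessibility) covariate
# simulated presence records: true habitat effect plus strong accessibility bias
logit = 0.8 * elevation - 1.5 * dist_to_road
presence = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([elevation, dist_to_road])
model = LogisticRegression().fit(X, presence)

# Predict occurrence with accessibility fixed at a common level (here its mean),
# so spatial variation in the prediction reflects environment, not observer effort.
X_common = np.column_stack([elevation, np.full(n, dist_to_road.mean())])
p_debiased = model.predict_proba(X_common)[:, 1]
print(p_debiased[:5])
```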
NASA Astrophysics Data System (ADS)
Passow, Christian; Donner, Reik
2017-04-01
Quantile mapping (QM) is an established concept that makes it possible to correct systematic biases in multiple quantiles of the distribution of a climatic observable. It shows remarkable results in correcting biases in historical simulations against observational data and outperforms simpler correction methods that adjust only the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend-preserving versions of QM were introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM-based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend-preserving approach to bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles at the same resolution as the given data and to include trends or other dependencies. We demonstrate the performance of the new method of linear regression quantile mapping (RQM) in correcting biases of temperature and precipitation products from historical runs (1959 - 2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: A.J. Cannon, S.R. Sobie, T.Q. Murdock: Bias Correction of GCM Precipitation by Quantile Mapping - How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6038, 2015 A.J. Cannon: Multivariate Bias Correction of Climate Model Outputs - Matching Marginal Distributions and Inter-variable Dependence Structure. Journal of Climate, 29, 7045, 2016
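The quantile-regression building block can be sketched as below with statsmodels, fitting time-dependent quantiles to a synthetic temperature series; the subsequent mapping of model quantiles onto observed quantiles, which completes the RQM approach, is not shown, and all data are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
years = np.arange(1959, 2006, dtype=float)
t = years - years[0]
# synthetic summer temperatures with a warming trend and growing spread
temp = 20.0 + 0.03 * t + rng.normal(0, 1.5 + 0.01 * t)

X = sm.add_constant(t)                      # intercept + linear time trend
quantiles = {}
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(temp, X).fit(q=q)
    quantiles[q] = res.params               # time-dependent quantile: a_q + b_q * t
    print(q, res.params)

# evaluate the modelled 90th percentile in a given year
a, b = quantiles[0.9]
print("90th percentile in 2000:", a + b * (2000 - years[0]))
```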
Is AIDS a Biasing Factor in Teacher Judgment?
ERIC Educational Resources Information Center
Walker, David W.; Hulecki, Mary B.
1989-01-01
Regular-education, third-grade teachers (n=91) in Indiana reviewed one of two psychological reports, identical except that one reported a diagnosis of Acquired Immune Deficiency Syndrome (AIDS) and one reported a diagnosis of rheumatic fever. AIDS was not found to be a biasing factor in teachers' judgments regarding special education placement.…
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, J.; Madsen, H.; Jensen, K. H.; Refsgaard, J. C.
2015-08-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both stream flow and groundwater modeling. The Colored Noise Kalman filter (ColKF) and the Separate bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using any of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman Filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved stream flow modeling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modeling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behavior and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.
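The ColKF and SepKF implementations differ in how the bias term is propagated and updated; the fragment below is only a generic augmented-state sketch, assuming a scalar head state, a constant observation bias appended to the state vector, and synthetic data, so all values and names are illustrative rather than taken from the study.

```python
import numpy as np

# Augmented-state Kalman filter: scalar groundwater head h with a constant
# observation bias b appended to the state, x = [h, b].
rng = np.random.default_rng(3)
n_steps, true_bias = 200, 0.5
h_true = np.cumsum(rng.normal(0, 0.05, n_steps)) + 10.0
obs = h_true + true_bias + rng.normal(0, 0.1, n_steps)

F = np.eye(2)                        # random-walk head, constant bias
H = np.array([[1.0, 1.0]])           # observation = head + bias
Q = np.diag([0.05**2, 1e-6])         # tiny process noise keeps the bias nearly constant
R = np.array([[0.1**2]])

x = np.array([obs[0], 0.0])          # initial state: first observation, zero bias
P = np.diag([1.0, 1.0])
for z in obs:
    # forecast step
    x = F @ x
    P = F @ P @ F.T + Q
    # update step
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("estimated bias:", x[1], "true bias:", true_bias)
```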
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, Jørn; Madsen, Henrik; Høgh Jensen, Karsten; Refsgaard, Jens Christian
2016-05-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment-scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both streamflow and groundwater modelling. The coloured noise Kalman filter (ColKF) and the separate-bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using any of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved streamflow modelling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modelling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behaviour and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.
Method for revealing biases in precision mass measurements
NASA Astrophysics Data System (ADS)
Vabson, V.; Vendt, R.; Kübarsepp, T.; Noorma, M.
2013-02-01
A practical method for the quantification of systematic errors of large-scale automatic comparators is presented. This method is based on a comparison of the performance of two different comparators. First, the differences of 16 equal partial loads of 1 kg are measured with a high-resolution mass comparator featuring insignificant bias and 1 kg maximum load. At the second stage, a large-scale comparator is tested by using combined loads with known mass differences. Comparing the different results, the biases of any comparator can be easily revealed. These large-scale comparator biases are determined over a 16-month period, and for the 1 kg loads, a typical pattern of biases in the range of ±0.4 mg is observed. The temperature differences recorded inside the comparator concurrently with mass measurements are found to remain within a range of ±30 mK, which obviously has a minor effect on the detected biases. Seasonal variations imply that the biases likely arise mainly due to the functioning of the environmental control at the measurement location.
Participation in an Intergenerational Service Learning Course and Implicit Biases
ERIC Educational Resources Information Center
Kogan, Lori R.; Schoenfeld-Tacher, Regina M.
2018-01-01
Biases against the elderly and people with disabilities can lead to discriminatory behaviors. One way to conceptualize attitudes toward the elderly and people with disabilities is through the differentiation of explicit (conscious) and implicit (unconscious) factors. Although both explicit and implicit attitudes and biases contribute to the full…
Non-Gaussian bias: insights from discrete density peaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch
2013-09-01
Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case of standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
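A plain-numpy stand-in for this kind of simulation (random rather than clumped items, circular cores in a square plot, invented parameter values) is sketched below; it returns the bias and spread of the density estimate and the detection probability for a given sample size and core area.

```python
import numpy as np

def simulate_core_sampling(density_per_m2, core_area_cm2, n_cores, plot_m=10.0,
                           n_reps=500, seed=0):
    """Monte Carlo of benthic core sampling with randomly distributed items."""
    rng = np.random.default_rng(seed)
    core_radius = np.sqrt(core_area_cm2 / 1e4 / np.pi)      # core radius in metres
    n_items = int(density_per_m2 * plot_m**2)
    est_density, detected = [], []
    for _ in range(n_reps):
        items = rng.uniform(0, plot_m, size=(n_items, 2))
        centres = rng.uniform(core_radius, plot_m - core_radius, size=(n_cores, 2))
        counts = np.array([np.sum(np.hypot(*(items - c).T) <= core_radius)
                           for c in centres])
        est_density.append(counts.sum() / (n_cores * core_area_cm2 / 1e4))
        detected.append(np.mean(counts > 0))                 # detection probability
    bias = np.mean(est_density) - density_per_m2
    return bias, np.std(est_density), np.mean(detected)

for n_cores in (5, 20, 50):
    print(n_cores, simulate_core_sampling(800, core_area_cm2=50, n_cores=n_cores))
```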
Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe
2003-11-06
We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
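A simplified sketch of entropy-driven chunk elimination is given below using scikit-learn's linear SVM; the rule mapping weight entropy to chunk size is an illustrative stand-in for the published E-RFE rule, and the two-strata resampling scheme and Zipf's-law stopping criterion are not shown.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification

def entropy_rfe(X, y, n_keep=20, max_frac=0.5):
    """Chunk-wise recursive feature elimination driven by SVM weight entropy.

    Low entropy of the weight distribution means a few genes dominate, so a
    large chunk of near-zero-weight genes can be dropped at once; high entropy
    means weights are spread out and elimination should be conservative.
    """
    active = np.arange(X.shape[1])
    while active.size > n_keep:
        w = LinearSVC(dual=False).fit(X[:, active], y).coef_.ravel()
        p = w**2 / np.sum(w**2)
        h = -np.sum(p * np.log(p + 1e-12)) / np.log(p.size)   # normalised entropy in [0, 1]
        chunk = max(1, int((1.0 - h) * max_frac * active.size))
        chunk = min(chunk, active.size - n_keep)
        drop = np.argsort(np.abs(w))[:chunk]                  # least informative genes
        active = np.delete(active, drop)
    return active

X, y = make_classification(n_samples=60, n_features=2000, n_informative=15,
                           random_state=0)
print("selected features:", entropy_rfe(X, y, n_keep=20))
```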
Estimating Bias Error Distributions
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Finley, Tom D.
2001-01-01
This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution arc developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
Briel, Matthias; Lane, Melanie; Montori, Victor M; Bassler, Dirk; Glasziou, Paul; Malaga, German; Akl, Elie A; Ferreira-Gonzalez, Ignacio; Alonso-Coello, Pablo; Urrutia, Gerard; Kunz, Regina; Culebro, Carolina Ruiz; da Silva, Suzana Alves; Flynn, David N; Elamin, Mohamed B; Strahm, Brigitte; Murad, M Hassan; Djulbegovic, Benjamin; Adhikari, Neill KJ; Mills, Edward J; Gwadry-Sridhar, Femida; Kirpalani, Haresh; Soares, Heloisa P; Elnour, Nisrin O Abu; You, John J; Karanicolas, Paul J; Bucher, Heiner C; Lampropulos, Julianna F; Nordmann, Alain J; Burns, Karen EA; Mulla, Sohail M; Raatz, Heike; Sood, Amit; Kaur, Jagdeep; Bankhead, Clare R; Mullan, Rebecca J; Nerenberg, Kara A; Vandvik, Per Olav; Coto-Yglesias, Fernando; Schünemann, Holger; Tuche, Fabio; Chrispim, Pedro Paulo M; Cook, Deborah J; Lutz, Kristina; Ribic, Christine M; Vale, Noah; Erwin, Patricia J; Perera, Rafael; Zhou, Qi; Heels-Ansdell, Diane; Ramsay, Tim; Walter, Stephen D; Guyatt, Gordon H
2009-01-01
Background Randomized clinical trials (RCTs) stopped early for benefit often receive great attention and affect clinical practice, but pose interpretational challenges for clinicians, researchers, and policy makers. Because the decision to stop the trial may arise from catching the treatment effect at a random high, truncated RCTs (tRCTs) may overestimate the true treatment effect. The Study Of Trial Policy Of Interim Truncation (STOPIT-1), which systematically reviewed the epidemiology and reporting quality of tRCTs, found that such trials are becoming more common, but that reporting of stopping rules and decisions were often deficient. Most importantly, treatment effects were often implausibly large and inversely related to the number of the events accrued. The aim of STOPIT-2 is to determine the magnitude and determinants of possible bias introduced by stopping RCTs early for benefit. Methods/Design We will use sensitive strategies to search for systematic reviews addressing the same clinical question as each of the tRCTs identified in STOPIT-1 and in a subsequent literature search. We will check all RCTs included in each systematic review to determine their similarity to the index tRCT in terms of participants, interventions, and outcome definition, and conduct new meta-analyses addressing the outcome that led to early termination of the tRCT. For each pair of tRCT and systematic review of corresponding non-tRCTs we will estimate the ratio of relative risks, and hence estimate the degree of bias. We will use hierarchical multivariable regression to determine the factors associated with the magnitude of this ratio. Factors explored will include the presence and quality of a stopping rule, the methodological quality of the trials, and the number of total events that had occurred at the time of truncation. Finally, we will evaluate whether Bayesian methods using conservative informative priors to "regress to the mean" overoptimistic tRCTs can correct observed biases. Discussion A better understanding of the extent to which tRCTs exaggerate treatment effects and of the factors associated with the magnitude of this bias can optimize trial design and data monitoring charters, and may aid in the interpretation of the results from trials stopped early for benefit. PMID:19580665
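As a hedged illustration of the core quantity in this protocol, the sketch below computes a relative risk for a hypothetical truncated trial and for a pooled comparator, forms their ratio of relative risks, and attaches a confidence interval on the log scale; the counts are invented, and the full STOPIT-2 analysis (meta-analysis of matched reviews plus hierarchical regression) is not reproduced.

```python
import numpy as np
from scipy import stats

def rr_with_se(events_trt, n_trt, events_ctl, n_ctl):
    """Relative risk and the standard error of its logarithm."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    se_log = np.sqrt(1/events_trt - 1/n_trt + 1/events_ctl - 1/n_ctl)
    return rr, se_log

# illustrative counts: a truncated RCT and a pooled set of matched non-truncated RCTs
rr_trunc, se_trunc = rr_with_se(10, 150, 25, 150)
rr_meta, se_meta = rr_with_se(300, 4000, 380, 4000)

rrr = rr_trunc / rr_meta                       # ratio of relative risks
se_log_rrr = np.hypot(se_trunc, se_meta)       # combine SEs on the log scale
ci = np.exp(np.log(rrr) + np.array([-1.96, 1.96]) * se_log_rrr)
z = np.log(rrr) / se_log_rrr
print(f"RRR = {rrr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p = {2*stats.norm.sf(abs(z)):.3f}")
```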
The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.
Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin
2015-11-01
We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of true intensity and bias field and a simple thresholding is used in the second stage for segmentation. Different from most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with the state-of-the-art approaches and the well-known brain software tools, our model is fast, accurate, and robust to initialization.
Constructing a multidimensional free energy surface like a spider weaving a web.
Chen, Changjun
2017-10-15
A complete free energy surface in the collective variable space provides important information on the reaction mechanisms of molecules. However, sufficient sampling in the collective variable space is not easy: the space expands quickly with the number of collective variables. To solve the problem, many methods utilize artificial biasing potentials to flatten out the original free energy surface of the molecule in the simulation. Their performance is sensitive to the definition of the biasing potentials. A fast-growing biasing potential accelerates sampling but decreases the accuracy of the free energy result, whereas a slow-growing biasing potential gives a more accurate result but needs more simulation time. In this article, we propose an alternative method. It adds the biasing potential to a representative point of the molecule in the collective variable space to improve the conformational sampling. The free energy surface is then calculated from the free energy gradient in the constrained simulation, rather than being given by the negative of the biasing potential as in previous methods. The presented method therefore does not require the biasing potential to remove all the barriers and basins on the free energy surface exactly. Practical applications show that the method in this work is able to produce accurate free energy surfaces for different molecules in a short time. The free energy errors are small in the cases of various biasing potentials. © 2017 Wiley Periodicals, Inc.
Using Propensity Scores to Reduce Selection Bias in Mathematics Education Research
ERIC Educational Resources Information Center
Graham, Suzanne E.
2010-01-01
Selection bias is a problem for mathematics education researchers interested in using observational rather than experimental data to make causal inferences about the effects of different instructional methods in mathematics on student outcomes. Propensity score methods represent 1 approach to dealing with such selection bias. This article…
Seshia, Shashi S; Bryan Young, G; Makhinson, Michael; Smith, Preston A; Stobart, Kent; Croskerry, Pat
2018-02-01
Although patient safety has improved steadily, harm remains a substantial global challenge. Additionally, safety needs to be ensured not only in hospitals but also across the continuum of care. Better understanding of the complex cognitive factors influencing health care-related decisions and organizational cultures could lead to more rational approaches, and thereby to further improvement. A model integrating the concepts underlying Reason's Swiss cheese theory and the cognitive-affective biases plus cascade could advance the understanding of cognitive-affective processes that underlie decisions and organizational cultures across the continuum of care. Thematic analysis was used, with qualitative information from several sources supporting the argumentation. Complex covert cognitive phenomena underlie decisions influencing health care. In the integrated model, the Swiss cheese slices represent dynamic cognitive-affective (mental) gates: Reason's successive layers of defence. Like firewalls and antivirus programs, cognitive-affective gates normally allow the passage of rational decisions but block or counter unsound ones. Gates can be breached (ie, holes created) at one or more levels of organizations, teams, and individuals, by (1) any element of cognitive-affective biases plus (conflicts of interest and cognitive biases being the best studied) and (2) other potential error-provoking factors. Conversely, flawed decisions can be blocked and consequences minimized; for example, by addressing cognitive biases plus and error-provoking factors, and being constantly mindful. Informed shared decision making is a neglected but critical layer of defence (cognitive-affective gate). The integrated model can be custom tailored to specific situations, and the underlying principles applied to all methods for improving safety. The model may also provide a framework for developing and evaluating strategies to optimize organizational cultures and decisions. The concept is abstract, the model is virtual, and the best supportive evidence is qualitative and indirect. The proposed model may help enhance rational decision making across the continuum of care, thereby improving patient safety globally. © 2017 The Authors. Journal of Evaluation in Clinical Practice published by John Wiley & Sons, Ltd.
Correction of stream quality trends for the effects of laboratory measurement bias
Alexander, Richard B.; Smith, Richard A.; Schwarz, Gregory E.
1993-01-01
We present a statistical model relating measurements of water quality to associated errors in laboratory methods. Estimation of the model allows us to correct trends in water quality for long-term and short-term variations in laboratory measurement errors. An illustration of the bias correction method for a large national set of stream water quality and quality assurance data shows that reductions in the bias of estimates of water quality trend slopes are achieved at the expense of increases in the variance of these estimates. Slight improvements occur in the precision of estimates of trend in bias by using correlative information on bias and water quality to estimate random variations in measurement bias. The results of this investigation stress the need for reliable, long-term quality assurance data and efficient statistical methods to assess the effects of measurement errors on the detection of water quality trends.
Bias correction for magnetic resonance images via joint entropy regularization.
Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang
2014-01-01
Due to the imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms have been proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruption and also tested on real MR data. The test results show that the two proposed methods can effectively remove the bias field and perform comparably to state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2008-03-01
Inductive bias is the set of assumptions that a person or procedure makes in making a prediction based on data. Different methods for ligand-based predictive modeling have different inductive biases, with a particularly sharp contrast between 2D and 3D similarity methods. A unique aspect of ligand design is that the data that exist to test methodology have been largely man-made, and that this process of design involves prediction. By analyzing the molecular similarities of known drugs, we show that the inductive bias of the historic drug discovery process has a very strong 2D bias. In studying the performance of ligand-based modeling methods, it is critical to account for this issue in dataset preparation, use of computational controls, and in the interpretation of results. We propose specific strategies to explicitly address the problems posed by inductive bias considerations.
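To make the 2D-similarity notion concrete, the snippet below computes Tanimoto similarities between Morgan (circular) fingerprints with RDKit; the drug choices and fingerprint parameters are illustrative only and are not taken from the paper's dataset. A 3D surface- or shape-based method embodies a different inductive bias and would rank these pairs differently.

```python
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

# Two beta-blocker analogues and a structurally unrelated drug (illustrative choices)
smiles = {
    "propranolol": "CC(C)NCC(O)COc1cccc2ccccc12",
    "atenolol":    "CC(C)NCC(O)COc1ccc(CC(N)=O)cc1",
    "ibuprofen":   "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
}
fps = {name: AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=2048)
       for name, smi in smiles.items()}

# Tanimoto similarity on 2D fingerprints: close analogues score high, unrelated drugs low
print(DataStructs.TanimotoSimilarity(fps["propranolol"], fps["atenolol"]))
print(DataStructs.TanimotoSimilarity(fps["propranolol"], fps["ibuprofen"]))
```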
McManus, I C; Elder, Andrew T; Dacre, Jane
2013-07-30
Bias of clinical examiners against some types of candidate, based on characteristics such as sex or ethnicity, would represent a threat to the validity of an examination, since sex or ethnicity are 'construct-irrelevant' characteristics. In this paper we report a novel method for assessing sex and ethnic bias in over 2000 examiners who had taken part in the PACES and nPACES (new PACES) examinations of the MRCP(UK). PACES and nPACES are clinical skills examinations that have two examiners at each station who mark candidates independently. Differences between examiners cannot be due to differences in performance of a candidate because that is the same for the two examiners, and hence may result from bias or unreliability on the part of the examiners. By comparing each examiner against a 'basket' of all of their co-examiners, it is possible to identify examiners whose behaviour is anomalous. The method assessed hawkishness-doveishness, sex bias, ethnic bias and, as a control condition to assess the statistical method, 'even-number bias' (i.e. treating candidates with odd and even exam numbers differently). Significance levels were Bonferroni corrected because of the large number of examiners being considered. The results of 26 diets of PACES and six diets of nPACES were examined statistically to assess the extent of hawkishness, as well as sex bias and ethnicity bias in individual examiners. The control (odd-number) condition suggested that about 5% of examiners were significant at an (uncorrected) 5% level, and that the method therefore worked as expected. As in a previous study (BMC Medical Education, 2006, 6:42), some examiners were hawkish or doveish relative to their peers. No examiners showed significant sex bias, and only a single examiner showed evidence consistent with ethnic bias. A re-analysis of the data considering only one examiner per station, as would be the case for many clinical examinations, showed that analysis with a single examiner runs a serious risk of false positive identifications probably due to differences in case-mix and content-specificity. In examinations where there are two independent examiners at a station, our method can assess the extent of bias against candidates with particular characteristics. The method would be far less sensitive in examinations with only a single examiner per station as examiner variance would be confounded with candidate performance variance. The method however works well when there is more than one examiner at a station and in the case of the current MRCP(UK) clinical examination, nPACES, found possible sex bias in no examiners and possible ethnic bias in only one.
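A minimal sketch of the underlying comparison is given below, assuming a hypothetical long-format table of paired marks: each examiner's marks are compared with the marks their co-examiners gave the same candidates at the same stations, and a Bonferroni-corrected one-sample test flags anomalous examiners. It illustrates only the hawk/dove comparison; the sex- and ethnicity-bias tests additionally split these differences by candidate subgroup.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical long-format marks: one row per (station, examiner) pair.
rng = np.random.default_rng(4)
rows = []
for station in range(300):
    ex_a, ex_b = rng.choice(50, size=2, replace=False)     # two independent examiners
    true_perf = rng.normal(5, 1)                           # candidate performance
    for ex in (ex_a, ex_b):
        hawk = 0.8 if ex == 0 else 0.0                     # examiner 0 is a simulated hawk
        rows.append({"station": station, "examiner": ex,
                     "mark": true_perf + hawk + rng.normal(0, 0.5)})
marks = pd.DataFrame(rows)

# For each station, the difference between an examiner's mark and the co-examiner's
# mark cannot reflect candidate performance, only examiner behaviour.
paired = marks.merge(marks, on="station", suffixes=("", "_co"))
paired = paired[paired.examiner != paired.examiner_co]
paired["diff"] = paired["mark"] - paired["mark_co"]

alpha = 0.05 / marks.examiner.nunique()                    # Bonferroni over examiners
for ex, d in paired.groupby("examiner")["diff"]:
    t, p = stats.ttest_1samp(d, 0.0)
    if p < alpha:
        print(f"examiner {ex}: mean diff {d.mean():+.2f}, p={p:.2g} (hawk/dove outlier)")
```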
Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee
2013-01-01
Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536
Cognitive debiasing 1: origins of bias and theory of debiasing
Croskerry, Pat; Singhal, Geeta; Mamede, Sílvia
2013-01-01
Numerous studies have shown that diagnostic failure depends upon a variety of factors. Psychological factors are fundamental in influencing the cognitive performance of the decision maker. In this first of two papers, we discuss the basics of reasoning and the Dual Process Theory (DPT) of decision making. The general properties of the DPT model, as it applies to diagnostic reasoning, are reviewed. A variety of cognitive and affective biases are known to compromise the decision-making process. They mostly appear to originate in the fast intuitive processes of Type 1 that dominate (or drive) decision making. Type 1 processes work well most of the time but they may open the door for biases. Removing or at least mitigating these biases would appear to be an important goal. We will also review the origins of biases. The consensus is that there are two major sources: innate, hard-wired biases that developed in our evolutionary past, and acquired biases established in the course of development and within our working environments. Both are associated with abbreviated decision making in the form of heuristics. Other work suggests that ambient and contextual factors may create high risk situations that dispose decision makers to particular biases. Fatigue, sleep deprivation and cognitive overload appear to be important determinants. The theoretical basis of several approaches towards debiasing is then discussed. All share a common feature that involves a deliberate decoupling from Type 1 intuitive processing and moving to Type 2 analytical processing so that eventually unexamined intuitive judgments can be submitted to verification. This decoupling step appears to be the critical feature of cognitive and affective debiasing. PMID:23882089
van Iterson, Maarten; van Zwet, Erik W; Heijmans, Bastiaan T
2017-01-27
We show that epigenome- and transcriptome-wide association studies (EWAS and TWAS) are prone to significant inflation and bias of test statistics, an unrecognized phenomenon introducing spurious findings if left unaddressed. Neither GWAS-based methodology nor state-of-the-art confounder adjustment methods completely remove bias and inflation. We propose a Bayesian method to control bias and inflation in EWAS and TWAS based on estimation of the empirical null distribution. Using simulations and real data, we demonstrate that our method maximizes power while properly controlling the false positive rate. We illustrate the utility of our method in large-scale EWAS and TWAS meta-analyses of age and smoking.
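The published approach is Bayesian (available as the Bioconductor package bacon); the fragment below is only a simplified empirical-null sketch that estimates bias and inflation from robust location and scale of simulated test statistics and rescales them, to illustrate the idea rather than reproduce the method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Simulated EWAS z-statistics: mostly null but inflated and shifted by unmeasured
# confounding, plus a small set of true signals.
z = 1.3 * rng.normal(size=20000) + 0.2
z[:200] += 5.0

# Empirical null: estimate bias (location) and inflation (scale) from the bulk of
# the distribution, which is assumed to be dominated by null features.
bias = np.median(z)
inflation = (np.percentile(z, 75) - np.percentile(z, 25)) / 1.349   # robust SD
print(f"estimated bias {bias:.2f}, inflation {inflation:.2f}")

z_corrected = (z - bias) / inflation
p_corrected = 2 * stats.norm.sf(np.abs(z_corrected))
print("hits at p < 1e-7:", np.sum(p_corrected < 1e-7))
```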
Calculation of air movement in ice caves by using the CalcFlow method
NASA Astrophysics Data System (ADS)
Meyer, Christiane; Pflitsch, Andreas; Maggi, Valter
2017-04-01
We present a method to determine the airflow regime within ice caves using temperature loggers. The technical possibilities for conducting airflow measurements are restricted by the limited availability of energy at ice cave study sites throughout the year, yet knowledge of the airflow regime is a prerequisite for understanding the cave climate. By cross-correlating different time series of air temperature measurements inside a cave, we define the travel time of the air between the loggers, which corresponds to the time shift of best correlation, and use this result to derive the airflow speed. Then we estimate the temperature biases and scale factors for the temperature variations observed by the different loggers by a least squares adjustment. As quality control for bias and scale we use the formal errors of the estimation process. For the calculated airflow speed, quality criteria are developed by means of a simulation study. Furthermore, we apply the method to temperature measurements in the static ice cave Schellenberger Eishöhle (Germany). Finally, we show how the method can be used as an advanced filter for the separation of different signal contents of the temperature measurements.
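A minimal sketch of the lag-estimation step is given below, assuming two synthetic logger series and a known logger spacing; the loop simply picks the shift with the highest correlation and converts it to an airflow speed, while the bias/scale least-squares adjustment described above is omitted.

```python
import numpy as np

def travel_time_lag(t_upstream, t_downstream, max_lag):
    """Lag (in samples) at which the downstream series best matches the upstream one."""
    best_lag, best_r = 0, -np.inf
    for lag in range(1, max_lag + 1):
        r = np.corrcoef(t_upstream[:-lag], t_downstream[lag:])[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

rng = np.random.default_rng(6)
dt = 60.0                                               # logger sampling interval, seconds
signal = np.cumsum(rng.normal(0, 0.02, 5000))           # slowly varying cave-air temperature
true_lag = 12                                           # samples between the two loggers
upstream = signal + rng.normal(0, 0.01, 5000)
downstream = np.roll(signal, true_lag) + rng.normal(0, 0.01, 5000)

lag, r = travel_time_lag(upstream, downstream, max_lag=60)
distance = 35.0                                         # metres between loggers (assumed)
print(f"lag {lag} samples (r={r:.2f}), airflow speed {distance / (lag * dt):.3f} m/s")
```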
Tie, Junbo; Cao, Juliang; Chang, Lubing; Cai, Shaokun; Wu, Meiping; Lian, Junxiang
2018-03-16
Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method.
Cao, Juliang; Cai, Shaokun; Wu, Meiping; Lian, Junxiang
2018-01-01
Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method. PMID:29547552
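The sketch below is not the paper's measurement model; it only illustrates the least-squares separation idea under the assumption that the horizontal gravity disturbance along the track is available from a spherical-harmonic model such as EGM2008, so a constant accelerometer bias (plus a drift term, added here purely for illustration) can be fitted to the residual.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 600
t = np.arange(n) * 10.0                         # seconds along the trajectory
# Horizontal gravity disturbance along the track: a synthetic stand-in for values
# computed from a spherical-harmonic model such as EGM2008 (mGal).
disturbance_model = 20.0 * np.sin(2 * np.pi * t / 3000.0)

true_bias, true_drift = 5.0, 0.002              # mGal, mGal/s (invented)
measured = disturbance_model + true_bias + true_drift * t + rng.normal(0, 1.0, n)

# Least squares: fit a constant bias plus linear drift to the residual between the
# measured gravity-vector component and the model value.
residual = measured - disturbance_model
A = np.column_stack([np.ones(n), t])
(est_bias, est_drift), *_ = np.linalg.lstsq(A, residual, rcond=None)
print(f"estimated bias {est_bias:.2f} mGal (true {true_bias}), "
      f"drift {est_drift:.4f} mGal/s (true {true_drift})")
```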
NASA Astrophysics Data System (ADS)
Johnstone, Samuel; Hourigan, Jeremy; Gallagher, Christopher
2013-05-01
Heterogeneous concentrations of α-producing nuclides in apatite have been recognized through a variety of methods. The presence of zonation in apatite complicates both traditional α-ejection corrections and diffusive models, both of which operate under the assumption of homogeneous concentrations. In this work we develop a method for measuring radial concentration profiles of 238U and 232Th in apatite by laser ablation ICP-MS depth profiling. We then focus on one application of this method, removing bias introduced by applying inappropriate α-ejection corrections. Formal treatment of laser ablation ICP-MS depth profile calibration for apatite includes construction and calibration of matrix-matched standards and quantification of rates of elemental fractionation. From this we conclude that matrix-matched standards provide more robust monitors of fractionation rate and concentrations than doped silicate glass standards. We apply laser ablation ICP-MS depth profiling to apatites from three unknown populations and small, intact crystals of Durango fluorapatite. Accurate and reproducible Durango apatite dates suggest that prolonged exposure to laser drilling does not impact cooling ages. Intracrystalline concentrations vary by at least a factor of 2 in the majority of the samples analyzed, but concentration variation only exceeds 5x in 5 grains and 10x in 1 out of the 63 grains analyzed. Modeling of synthetic concentration profiles suggests that for concentration variations of 2x and 10x individual homogeneous versus zonation dependent α-ejection corrections could lead to age bias of >5% and >20%, respectively. However, models based on measured concentration profiles only generated biases exceeding 5% in 13 of the 63 cases modeled. Application of zonation dependent α-ejection corrections did not significantly reduce the age dispersion present in any of the populations studied. This suggests that factors beyond homogeneous α-ejection corrections are the dominant source of overdispersion in apatite (U-Th)/He cooling ages.
Krueger, Aaron B; Carnell, Pauline; Carpenter, John F
2016-04-01
In many manufacturing and research areas, the ability to accurately monitor and characterize nanoparticles is becoming increasingly important. Nanoparticle tracking analysis is rapidly becoming a standard method for this characterization, yet several key factors in data acquisition and analysis may affect results. Nanoparticle tracking analysis is prone to user input and bias on account of a high number of parameters available, contains a limited analysis volume, and individual sample characteristics such as polydispersity or complex protein solutions may affect analysis results. This study systematically addressed these key issues. The integrated syringe pump was used to increase the sample volume analyzed. It was observed that measurements recorded under flow caused a reduction in total particle counts for both polystyrene and protein particles compared to those collected under static conditions. In addition, data for polydisperse samples tended to lose peak resolution at higher flow rates, masking distinct particle populations. Furthermore, in a bimodal particle population, a bias was seen toward the larger species within the sample. The impacts of filtration on an agitated intravenous immunoglobulin sample and operating parameters including "MINexps" and "blur" were investigated to optimize the method. Taken together, this study provides recommendations on instrument settings and sample preparations to properly characterize complex samples. Copyright © 2016. Published by Elsevier Inc.
Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro
2018-05-09
Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
Vladimirov, N V; Likhoshvaĭ, V A; Matushkin, Iu G
2007-01-01
Gene expression is known to correlate with the degree of codon bias in many unicellular organisms. However, such correlation is absent in some organisms. Recently we demonstrated that inverted complementary repeats within coding DNA sequences must be considered for proper estimation of translation efficiency, since they may form secondary structures that obstruct ribosome movement. We have developed a program for estimating the potential expression of a coding DNA sequence in a given unicellular organism using its genome sequence. The program computes an elongation efficiency index. The computation is based on estimation of coding DNA sequence elongation efficiency, taking into account three key factors: codon bias, the average number of inverted complementary repeats, and the free energy of potential stem-loop structures formed by the repeats. The influence of these factors on translation is numerically estimated. An optimal proportion of these factors is computed for each organism individually. Quantitative translational characteristics of 384 unicellular organisms (351 bacteria, 28 archaea, 5 eukaryota) have been computed using their annotated genomes from NCBI GenBank. Five potential evolutionary strategies of translational optimization have been determined among the studied organisms. A considerable difference in preferred translational strategies between Bacteria and Archaea was revealed. Significant correlations between the elongation efficiency index and gene expression levels have been shown for two organisms (S. cerevisiae and H. pylori) using available microarray data. The proposed method allows numerical estimation of coding DNA sequence translation efficiency and optimization of the nucleotide composition of heterologous genes in unicellular organisms. http://www.mgs.bionet.nsc.ru/mgs/programs/eei-calculator/.
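As an illustration of one of the three ingredients, the snippet below scans a coding sequence for inverted complementary repeats that could form hairpin stems; the stem length and loop-size limits are arbitrary choices, the example sequence is a short invented fragment, and the codon-bias and free-energy components of the elongation efficiency index are not computed.

```python
def inverted_repeats(seq, stem=8, max_loop=60):
    """Count positions where a window's reverse complement recurs downstream,
    i.e. candidate stems of hairpin structures that could slow elongation."""
    comp = str.maketrans("ACGT", "TGCA")
    seq = seq.upper()
    hits = 0
    for i in range(len(seq) - 2 * stem):
        window = seq[i:i + stem]
        rev_comp = window.translate(comp)[::-1]
        downstream = seq[i + stem:i + stem + max_loop + stem]
        if rev_comp in downstream:
            hits += 1
    return hits

cds = ("ATGGCTAAAGGTGAAGAACTGTTCACCGGTGTTGTTCCGATTCTGGTT"
       "GAACTGGATGGTGATGTTAACGGTCACAAATTCTCTGTTTCTGGTGAA")
print("candidate inverted repeats:", inverted_repeats(cds))
print("per 100 nt:", 100 * inverted_repeats(cds) / len(cds))
```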
Elvik, Rune
2011-11-01
A large number of studies have tried to assess how various aspects of driver health influence driver involvement in accidents. The objective of this paper is to provide a framework for a critical assessment of the quality these studies from a methodological point of view. Examples are given of how various sources of bias and confounding can produce study findings that are highly misleading. Ten potential sources of error and bias in epidemiological studies of the contribution of driver health impairments to road accidents are discussed: (1) Poor description of the medical conditions whose effects are studied (measurement error). (2) Inadequate control for the effects of exposure on accident rate. (3) Sampling endogeneity with respect to assessment for fitness to drive (outcome-based sampling; self-selection bias). (4) Combined exposure to several risk factors. (5) Poor control for potentially confounding factors. (6) Failure to specify potentially moderating factors (interaction effects). (7) Failure to consider a severity gradient with respect to the effect of health impairments. (8) Failure to specify the compliance of drivers with medical treatments or treatment effectiveness. (9) No data on the population prevalence of various health conditions. (10) The use of multiple study approaches and methods making the comparison and synthesis of findings difficult. Examples are given of how all these items may influence the findings of a single study or make synthesising findings from multiple studies difficult. A checklist for assessing study quality is provided. Copyright © 2011 Elsevier Ltd. All rights reserved.
Mifsud, Borbala; Martincorena, Inigo; Darbo, Elodie; Sugar, Robert; Schoenfelder, Stefan; Fraser, Peter; Luscombe, Nicholas M
2017-01-01
Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).
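As a rough illustration of the binomial testing idea described above, the sketch below computes a p-value for each fragment pair by comparing the observed contact count with the count expected if read pairs ligated at random in proportion to each fragment's coverage, followed by false-discovery-rate correction. This is a minimal sketch under stated assumptions (the factor-of-two expected probability and the use of relative coverage are simplifications), not the GOTHiC package's exact normalisation; all function and variable names are placeholders.

```python
# Hedged sketch of a binomial test for Hi-C interactions in the spirit of the
# approach above; the exact normalisation used by the GOTHiC package differs.
import numpy as np
from scipy.stats import binom
from statsmodels.stats.multitest import multipletests

def binomial_interaction_test(counts, coverage, total_pairs):
    """counts: {(i, j): observed read pairs}; coverage: per-fragment read totals."""
    rel = coverage / coverage.sum()
    keys, pvals = [], []
    for (i, j), n_obs in counts.items():
        p_exp = 2.0 * rel[i] * rel[j]            # assumed chance of pairing i with j at random
        pvals.append(binom.sf(n_obs - 1, total_pairs, p_exp))   # P(X >= n_obs)
        keys.append((i, j))
    reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return {k: (q, r) for k, q, r in zip(keys, qvals, reject)}

# toy example
coverage = np.array([500.0, 300.0, 200.0])
counts = {(0, 1): 40, (0, 2): 5, (1, 2): 30}
print(binomial_interaction_test(counts, coverage, total_pairs=1000))
```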
Explaining ideology: two factors are better than one.
Robbins, Philip; Shields, Kenneth
2014-06-01
Hibbing et al. contend that individual differences in political ideology can be substantially accounted for in terms of differences in a single psychological factor, namely, strength of negativity bias. We argue that, given the multidimensional structure of ideology, a better explanation of ideological variation will take into account both individual differences in negativity bias and differences in empathic concern.
ERIC Educational Resources Information Center
Reeves, Edward B.
The system of high-stakes accountability in the Kentucky public schools raises the question of whether teachers and administrators should be held accountable if test scores are influenced by external factors over which educators have no control. This study investigates whether such external factors, or "contextual effects," bias the…
Ion collection from a plasma by a pinhole
NASA Technical Reports Server (NTRS)
Snyder, David B.; Herr, Joel L.
1992-01-01
Ion focusing by a biased pinhole is studied numerically. Laplace's equation is solved in 3-D for cylindrical symmetry on a constant grid to determine the potential field produced by a biased pinhole in a dielectric material. Focusing factors are studied for ions of uniform incident velocity with a 3-D Maxwellian distribution superimposed. Ion currents to the pinhole are found by particle tracking. The focusing factor of positive ions as a function of initial velocity, temperature, injection radius, and hole size is reported. For a typical Space Station Freedom environment (oxygen ions having a 4.5 eV ram energy, 0.1 eV temperature, and a -140 V biased pinhole), a focusing factor of 13.35 is found for a 1.5 mm radius pinhole.
VanTieghem, Michelle R.; Gabard-Durnam, Laurel; Goff, Bonnie; Flannery, Jessica; Humphreys, Kathryn L.; Telzer, Eva H.; Caldera, Christina; Louie, Jennifer Y.; Shapiro, Mor; Bolger, Niall; Tottenham, Nim
2018-01-01
Institutional caregiving is associated with significant deviations from species-expected caregiving, altering the normative sequence of attachment formation and placing children at risk for long-term emotional difficulties. However, little is known about factors that can promote resilience following early institutional caregiving. In the current study, we investigated how adaptations in affective processing (i.e. positive valence bias) and family-level protective factors (i.e. secure parent-child relationships) moderate risk for internalizing symptoms in Previously Institutionalized (PI) youth. Children and adolescents with and without a history of institutional care performed a laboratory-based affective processing task and self-reported measures of parent-child relationship security. PI youth were more likely than comparison youth to show positive valence biases when interpreting ambiguous facial expressions. Both positive valence bias and parent-child relationship security moderated the association between institutional care and parent-reported internalizing symptoms, such that greater positive valence bias and more secure parent-child relationships predicted fewer symptoms in PI youth. However, when both factors were tested concurrently, parent-child relationship security more strongly moderated the link between PI status and internalizing symptoms. These findings suggest that both individual-level adaptations in affective processing and family-level factors of secure parent-child relationships may ameliorate risk for internalizing psychopathology following early institutional caregiving. PMID:28401841
Vantieghem, Michelle R; Gabard-Durnam, Laurel; Goff, Bonnie; Flannery, Jessica; Humphreys, Kathryn L; Telzer, Eva H; Caldera, Christina; Louie, Jennifer Y; Shapiro, Mor; Bolger, Niall; Tottenham, Nim
2017-05-01
Institutional caregiving is associated with significant deviations from species-expected caregiving, altering the normative sequence of attachment formation and placing children at risk for long-term emotional difficulties. However, little is known about factors that can promote resilience following early institutional caregiving. In the current study, we investigated how adaptations in affective processing (i.e., positive valence bias) and family-level protective factors (i.e., secure parent-child relationships) moderate risk for internalizing symptoms in previously institutionalized (PI) youth. Children and adolescents with and without a history of institutional care performed a laboratory-based affective processing task and self-reported measures of parent-child relationship security. PI youth were more likely than comparison youth to show positive valence biases when interpreting ambiguous facial expressions. Both positive valence bias and parent-child relationship security moderated the association between institutional care and parent-reported internalizing symptoms, such that greater positive valence bias and more secure parent-child relationships predicted fewer symptoms in PI youth. However, when both factors were tested concurrently, parent-child relationship security more strongly moderated the link between PI status and internalizing symptoms. These findings suggest that both individual-level adaptations in affective processing and family-level factors of secure parent-child relationships may ameliorate risk for internalizing psychopathology following early institutional caregiving.
Bias-correction of PERSIANN-CDR Extreme Precipitation Estimates Over the United States
NASA Astrophysics Data System (ADS)
Faridzad, M.; Yang, T.; Hsu, K. L.; Sorooshian, S.
2017-12-01
Ground-based precipitation measurements can be sparse or even nonexistent over remote regions, which makes extreme event analysis difficult. PERSIANN-CDR (CDR), with 30+ years of daily rainfall information, provides an opportunity to study precipitation for regions where ground measurements are limited. In this study, the use of CDR annual extreme precipitation for frequency analysis of extreme events over poorly gauged or ungauged basins is explored. The adjustment of CDR is implemented in two steps: (1) a CDR bias correction factor is calculated at the available gauge locations based on linear regression of gauge and CDR annual maximum precipitation; and (2) the bias correction factor is extended to locations where gauges are not available. The correction factors are estimated at gauge sites over various catchments, elevation zones, and climate regions, and the results are generalized to ungauged sites based on regional and climatic similarity. Case studies were conducted on 20 basins with diverse climates and altitudes in the Eastern and Western US. Cross-validation reveals that bias correction factors estimated on limited calibration data can be extended to regions with similar characteristics. The adjusted CDR estimates also consistently outperform gauge interpolation at validation sites. It is suggested that CDR with bias adjustment has potential for frequency analysis of extreme events, especially for regions with limited gauge observations.
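The following is a minimal sketch of a regression-based correction factor of the kind outlined above: fit gauge annual maxima against CDR annual maxima at gauged sites, then transfer the factor to CDR values at an ungauged site with similar characteristics. The zero-intercept choice and all values are assumptions for illustration, not the authors' exact regression setup.

```python
# Hedged illustration of a satellite-vs-gauge bias correction factor for
# annual-maximum precipitation; variable names and numbers are placeholders.
import numpy as np

def bias_correction_factor(gauge_annual_max, cdr_annual_max):
    """Least-squares slope of gauge vs. CDR annual maxima (intercept forced to zero)."""
    x = np.asarray(cdr_annual_max, dtype=float)
    y = np.asarray(gauge_annual_max, dtype=float)
    return float(np.sum(x * y) / np.sum(x * x))

# factor estimated at gauged sites in one climate region ...
factor = bias_correction_factor([82.0, 95.0, 110.0, 74.0], [70.0, 80.0, 93.0, 65.0])
# ... then applied to CDR annual maxima at an ungauged site with similar characteristics
adjusted = factor * np.array([60.0, 88.0, 102.0])
print(factor, adjusted)
```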
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which the model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
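The core idea, imputing a binary condition by drawing it from the model-based probability in each bootstrap replicate rather than dichotomizing the probability, can be sketched as below. The probabilities and the summary statistic (here, prevalence) are illustrative placeholders rather than the study's data or model.

```python
# Hedged sketch of bootstrap imputation of a binary condition from
# model-derived probabilities, as described above.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_imputed_prevalence(prob_condition, n_boot=1000):
    prob_condition = np.asarray(prob_condition)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        imputed = rng.random(prob_condition.size) < prob_condition   # Bernoulli draw per patient
        estimates[b] = imputed.mean()
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

probs = rng.beta(0.5, 10.0, size=5000)    # stand-in for model-based condition probabilities
print(bootstrap_imputed_prevalence(probs))
```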
Adaptive enhanced sampling by force-biasing using neural networks
NASA Astrophysics Data System (ADS)
Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.
2018-04-01
A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.
Health risk perception, optimistic bias, and personal satisfaction.
Bränström, Richard; Brandberg, Yvonne
2010-01-01
To examine change in risk perception and optimistic bias concerning behavior-linked health threats and environmental health threats between adolescence and young adulthood, and how these factors related to personal satisfaction. In 1996 and 2002, 1624 adolescents responded to a mailed questionnaire. Adolescents showed strong positive optimistic bias concerning behavior-linked risks, and this optimistic bias increased with age. Increase in optimistic bias over time predicted increase in personal satisfaction. The capacity to process and perceive potential threats in a positive manner might be a valuable human ability positively influencing personal satisfaction and well-being.
Efficient global biopolymer sampling with end-transfer configurational bias Monte Carlo
NASA Astrophysics Data System (ADS)
Arya, Gaurav; Schlick, Tamar
2007-01-01
We develop an "end-transfer configurational bias Monte Carlo" method for efficient thermodynamic sampling of complex biopolymers and assess its performance on a mesoscale model of chromatin (oligonucleosome) at different salt conditions compared to other Monte Carlo moves. Our method extends traditional configurational bias by deleting a repeating motif (monomer) from one end of the biopolymer and regrowing it at the opposite end using the standard Rosenbluth scheme. The method's sampling efficiency compared to local moves, pivot rotations, and standard configurational bias is assessed by parameters relating to translational, rotational, and internal degrees of freedom of the oligonucleosome. Our results show that the end-transfer method is superior in sampling every degree of freedom of the oligonucleosomes over other methods at high salt concentrations (weak electrostatics) but worse than the pivot rotations in terms of sampling internal and rotational sampling at low-to-moderate salt concentrations (strong electrostatics). Under all conditions investigated, however, the end-transfer method is several orders of magnitude more efficient than the standard configurational bias approach. This is because the characteristic sampling time of the innermost oligonucleosome motif scales quadratically with the length of the oligonucleosomes for the end-transfer method while it scales exponentially for the traditional configurational-bias method. Thus, the method we propose can significantly improve performance for global biomolecular applications, especially in condensed systems with weak nonbonded interactions and may be combined with local enhancements to improve local sampling.
Cultural and biological factors modulate spatial biases over development.
Girelli, Luisa; Marinelli, Chiara Valeria; Grossi, Giuseppe; Arduino, Lisa S
2017-11-01
Increasing evidence supports the contribution of both biological and cultural factors to visuospatial processing. The present study adds to the literature by exploring the interplay of perceptual and linguistic mechanisms in determining visuospatial asymmetries in adults (Experiment 1) and children (Experiment 2). In particular, pre-schoolers (3- and 5-year-olds), school-aged children (8-year-olds), and adult participants were required to bisect different types of stimuli, that is, lines, words, and figure strings. In accordance with the literature, results yielded a leftward bias for lines and words and a rightward bias for figure strings in adult participants. More critically, different biases were found for lines, words, and figure strings in children as a function of age, reflecting the impact of both cultural and biological factors on the processing of different visuospatial materials. Specifically, an adult-like pattern of results emerged only in the older group of children (8-year-olds), but not in pre-schoolers. Results are discussed in terms of literacy, exposure to reading habits, and biological maturation.
The CHESS method of forensic opinion formulation: striving to checkmate bias.
Wills, Cheryl D
2008-01-01
Expert witnesses use various methods to render dispassionate opinions. Some forensic psychiatrists acknowledge bias up front; other experts use principles endorsed by the American Academy of Psychiatry and the Law or other professional organizations. This article introduces CHESS, a systematic method for reducing bias in expert opinions. The CHESS method involves identifying a Claim or preliminary opinion; developing a Hierarchy of supporting evidence; examining the evidence for weaknesses or areas of Exposure; Studying and revising the claim and supporting evidence; and Synthesizing a revised opinion. Case examples illustrate how the CHESS method may help experts reduce bias while strengthening opinions. The method also helps experts prepare for court by reminding them to anticipate questions that may be asked during cross-examination. The CHESS method provides a framework for formulating, revising, and identifying limitations of opinions, which allows experts to incorporate neutrality into forensic opinions.
Discrimination, Racial Bias, and Telomere Length in African-American Men
Chae, David H.; Nuru-Jeter, Amani M.; Adler, Nancy E.; Brody, Gene H.; Lin, Jue; Blackburn, Elizabeth H.; Epel, Elissa S.
2013-01-01
Background Leukocyte telomere length (LTL) is an indicator of general systemic aging, with shorter LTL being associated with several chronic diseases of aging and earlier mortality. Identifying factors related to LTL among African Americans may yield insights into mechanisms underlying racial disparities in health. Purpose To test whether the combination of more frequent reports of racial discrimination and holding a greater implicit anti-black racial bias is associated with shorter LTL among African-American men. Methods Cross-sectional study of a community sample of 92 African-American men aged between 30 and 50 years. Participants were recruited from February to May 2010. Ordinary least squares regressions were used to examine LTL in kilobase pairs in relation to racial discrimination and implicit racial bias. Data analysis was completed in July 2013. Results After controlling for chronologic age, socioeconomic, and health-related characteristics, the interaction between racial discrimination and implicit racial bias was significantly associated with LTL (b= −0.10, SE=0.04, p=0.02). Those demonstrating a stronger implicit anti-black bias and reporting higher levels of racial discrimination had the shortest LTL. Household income-to-poverty threshold ratio was also associated with LTL (b=0.05, SE=0.02, p<0.01). Conclusions Results suggest that multiple levels of racism, including interpersonal experiences of racial discrimination and the internalization of negative racial bias, operate jointly to accelerate biological aging among African-American men. Societal efforts to address racial discrimination in concert with efforts to promote positive in-group racial attitudes may protect against premature biological aging in this population. PMID:24439343
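For readers who want to see the form of the analysis, the sketch below fits an ordinary least squares model with a discrimination-by-implicit-bias interaction predicting telomere length, adjusted for covariates. The data are simulated placeholders and the covariate set is simplified; only the model structure, not the study's estimates, is illustrated.

```python
# Hedged sketch of an OLS interaction model of the kind reported above
# (discrimination x implicit bias predicting telomere length); simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 92
df = pd.DataFrame({
    "ltl_kb": rng.normal(7.0, 0.6, n),            # leukocyte telomere length (kb)
    "discrimination": rng.normal(0.0, 1.0, n),    # standardized discrimination score
    "implicit_bias": rng.normal(0.0, 1.0, n),     # implicit racial bias score
    "age": rng.integers(30, 51, n),
    "income_ratio": rng.normal(2.5, 1.0, n),      # income-to-poverty threshold ratio
})

model = smf.ols("ltl_kb ~ discrimination * implicit_bias + age + income_ratio", data=df).fit()
print(model.params)   # 'discrimination:implicit_bias' is the interaction term of interest
```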
Attributional Style in Healthy Persons: Its Association with 'Theory of Mind' Skills
Jeon, Im Hong; Kim, Kyung Ran; Kim, Hwan Hee; Park, Jin Young; Lee, Mikyung; Jo, Hye Hyun; Koo, Se Jun; Jeong, Yu Jin; Song, Yun Young; Kang, Jee In; Lee, Su Young; Lee, Eun
2013-01-01
Objective Attributional style, especially external personal attribution bias, was found to play a pivotal role in clinical and non-clinical paranoia. The study of the relationship of the tendency to infer or perceive hostility and blame with theory of mind skills has significant theoretical importance, as it may provide additional information on how persons process social situations. The aim of this study was to examine whether hostility perception bias and blame bias are associated with theory of mind skills, neurocognition, and emotional factors in healthy persons. Methods A total of 263 participants (133 male and 130 female) were recruited. Attributional style was measured using the Ambiguous Intentions Hostility Questionnaire (AIHQ). Participants were requested to complete Brüne's Theory of Mind Picture Stories task, neurocognitive tasks including the Standard Progressive Matrices (SPM) and digit span, and emotional dysregulation trait scales including Rosenberg's self-esteem scale, Spielberger's trait anxiety inventory, and the Novaco anger scale. Results Multiple regression analysis showed that the hostility perception bias score in ambiguous situations was associated with the theory of mind questionnaire score and the emotional dysregulation trait of the Novaco anger scale. The composite blame bias score in ambiguous situations was associated with the emotional dysregulation traits of the Novaco anger scale and Spielberger's trait anxiety scale. Conclusion The main finding was that the attributional style of hostility perception bias may be contributed to primarily by theory of mind skills rather than neurocognitive functions such as attention, working memory, and reasoning ability. The interpretations and implications are discussed in detail. PMID:23482524
Luddites and the Demographic Transition. NBER Working Paper No. 14484
ERIC Educational Resources Information Center
O'Rourke, Kevin H.; Rahman, Ahmed S.; Taylor, Alan M.
2008-01-01
Technological change was unskilled-labor-biased during the early Industrial Revolution, but is skill-biased today. This is not embedded in extant unified growth models. We develop a model which can endogenously account for these facts, where factor bias reflects profit-maximizing decisions by innovators. Endowments dictate that the early…
Ranking Bias in Association Studies
Jeffries, Neal O.
2009-01-01
Background It is widely appreciated that genomewide association studies often yield overestimates of the association of a marker with disease when attention focuses upon the marker showing the strongest relationship. For example, in a case-control setting the largest (in absolute value) estimated odds ratio has been found to typically overstate the association as measured in a second, independent set of data. The most common reason given for this observation is that the choice of the most extreme test statistic is often conditional upon first observing a significant p value associated with the marker. A second, less appreciated reason is described here. Under common circumstances it is the multiple testing of many markers and subsequent focus upon those with most extreme test statistics (i.e. highly ranked results) that leads to bias in the estimated effect sizes. Conclusions This bias, termed ranking bias, is separate from that arising from conditioning on a significant p value and may often be a more important factor in generating bias. An analytic description of this bias, simulations demonstrating its extent, and identification of some factors leading to its exacerbation are presented. PMID:19172085
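The following short simulation is a hedged illustration of the ranking bias described above: when many markers are tested and attention focuses on the top-ranked estimate, that estimate overstates both the true effect and an independent replication, even without any conditioning on a significant p-value. All numbers are illustrative assumptions.

```python
# Hedged simulation of ranking bias ("winner's curse" from selecting the
# most extreme of many noisy estimates); parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_markers, n_sims, se = 10_000, 500, 0.10
true_effects = rng.normal(0.0, 0.05, n_markers)    # modest true log odds ratios

gap_truth, gap_replication = [], []
for _ in range(n_sims):
    discovery = true_effects + rng.normal(0.0, se, n_markers)
    top = np.argmax(np.abs(discovery))                        # focus on the top-ranked marker
    replication = true_effects[top] + rng.normal(0.0, se)     # independent second dataset
    gap_truth.append(abs(discovery[top]) - abs(true_effects[top]))
    gap_replication.append(abs(discovery[top]) - abs(replication))

print(f"mean overestimate vs truth:       {np.mean(gap_truth):.3f}")
print(f"mean overestimate vs replication: {np.mean(gap_replication):.3f}")
```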
Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M
2010-01-01
Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. But they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or under-estimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model, which retains the elegant properties of graphical analysis methods, is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70-minute scan duration are at least as good as those obtained using existing methods for data acquired over a 90-minute scan duration. PMID:20493196
NASA Astrophysics Data System (ADS)
Wang, Wenhui; Cao, Changyong; Ignatov, Alex; Li, Zhenglong; Wang, Likun; Zhang, Bin; Blonski, Slawomir; Li, Jun
2017-09-01
The Suomi NPP VIIRS thermal emissive bands (TEB) have been performing very well since data became available on January 20, 2012. The longwave infrared bands at 11 and 12 um (M15 and M16) are primarily used for sea surface temperature (SST) retrievals. A long-standing anomaly has been observed during the quarterly warm-up-cool-down (WUCD) events: during such events the daytime SST product becomes anomalous, with a warm bias appearing as a spike on the order of 0.2 K in the SST time series. A previous study (Cao et al. 2017) suggested that the VIIRS TEB calibration anomaly during WUCD is due to a flawed theoretical assumption in the calibration equation and proposed an Ltrace method to address the issue. This paper complements that study and presents the operational implementation and validation of the Ltrace method for M15 and M16. The Ltrace method applies bias correction during WUCD only. It requires a simple code change and a one-time calibration parameter look-up table update. The method was evaluated using colocated CrIS observations and the SST algorithm. Our results indicate that the method can effectively reduce the WUCD calibration anomaly in M15, with a residual bias of 0.02 K after the correction. It works less effectively for M16, with a residual bias of 0.04 K. The Ltrace method may over-correct WUCD calibration biases, especially for M16. However, the residual WUCD biases are small in both bands. Evaluation results using the SST algorithm show that the method can effectively remove the SST anomaly during WUCD events.
Arnaud, Mickael; Salvo, Francesco; Ahmed, Ismaïl; Robinson, Philip; Moore, Nicholas; Bégaud, Bernard; Tubert-Bitter, Pascale; Pariente, Antoine
2016-03-01
The two methods for minimizing competition bias in signal of disproportionate reporting (SDR) detection, masking factor (MF) and masking ratio (MR), have focused on the strength of disproportionality for identifying competitors and have been tested using competitors at the drug level. The aim of this study was to develop a method that identifies competitors by considering the proportion of reports of adverse events (AEs) that mention the drug class, at a level of drug grouping chosen to increase sensitivity (Se) for SDR unmasking, and to compare it with MF and MR. Reports in the French spontaneous reporting database between 2000 and 2005 were selected. Five AEs were considered: myocardial infarction, pancreatitis, aplastic anemia, convulsions, and gastrointestinal bleeding; related reports were retrieved using standardized Medical Dictionary for Regulatory Activities (MedDRA(®)) queries. Potential competitors of AEs were identified using the developed method, the Competition Index (ComIn), as well as MF and MR. All three methods were tested according to Anatomical Therapeutic Chemical (ATC) classification levels 2-5. For each AE, SDR detection was performed, first in the complete database, and second after removing reports mentioning competitors; SDRs detected only after the removal were considered unmasked. All unmasked SDRs were validated using the Summary of Product Characteristics and constituted the reference dataset used for computing the performance for SDR unmasking (area under the curve [AUC], Se). The performance of the ComIn was highest when considering competitors at ATC level 3 (AUC: 62%; Se: 52%); similar results were obtained with MF and MR. The ComIn could greatly minimize the competition bias in SDR detection. Further study using a larger dataset is needed.
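To make the unmasking workflow concrete, the sketch below computes a reporting odds ratio (ROR) for a drug-event pair on a toy database, then recomputes it after removing reports that mention a suspected competitor; a signal that appears only after removal was masked. This is a generic illustration of the detection-and-removal step, not an implementation of the ComIn, MF, or MR criteria, and all drugs and counts are invented.

```python
# Hedged sketch of SDR unmasking: disproportionality (ROR) before and after
# removing reports that mention a suspected competitor drug; toy data only.
import numpy as np

def ror(reports, drug, event):
    """reports: list of dicts with 'drugs' (set) and 'events' (set)."""
    a = sum(drug in r["drugs"] and event in r["events"] for r in reports)
    b = sum(drug in r["drugs"] and event not in r["events"] for r in reports)
    c = sum(drug not in r["drugs"] and event in r["events"] for r in reports)
    d = sum(drug not in r["drugs"] and event not in r["events"] for r in reports)
    return (a * d) / (b * c) if b and c else np.inf

def ror_without_competitor(reports, drug, event, competitor):
    kept = [r for r in reports if competitor not in r["drugs"]]
    return ror(kept, drug, event)

# toy database: the competitor floods the event reports and masks drug "A"
reports = (
    [{"drugs": {"A"}, "events": {"pancreatitis"}}] * 6
    + [{"drugs": {"A"}, "events": set()}] * 200
    + [{"drugs": {"COMP"}, "events": {"pancreatitis"}}] * 300
    + [{"drugs": {"COMP"}, "events": set()}] * 300
    + [{"drugs": {"B"}, "events": {"pancreatitis"}}] * 2
    + [{"drugs": {"B"}, "events": set()}] * 2000
)
print(ror(reports, "A", "pancreatitis"))                                  # masked (< 1)
print(ror_without_competitor(reports, "A", "pancreatitis", "COMP"))       # unmasked (>> 1)
```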
Li, Xiaolei; Deng, Lei; Chen, Xiaoman; Cheng, Mengfan; Fu, Songnian; Tang, Ming; Liu, Deming
2017-04-17
A novel automatic bias control (ABC) method for an optical in-phase and quadrature (IQ) modulator is proposed and experimentally demonstrated. In the proposed method, two different low-frequency sine wave dither signals are generated and added onto the I and Q bias signals, respectively. Instead of monitoring the power of the harmonics of the dither signal, dither-correlation detection is proposed and used to adjust the bias voltages of the optical IQ modulator. In this way, frequency spectral analysis is not required and directional bias adjustment can be realized, reducing the complexity and increasing the convergence rate of the ABC algorithm. The results show that the sensitivity of the proposed ABC method outperforms that of the traditional dither frequency monitoring method. Moreover, the proposed ABC method is shown to be modulation-format-free, and the transmission penalties caused by this method for both 10 Gb/s optical QPSK and 17.9 Gb/s optical 16QAM-OFDM signal transmission are negligible in our experiment.
NASA Astrophysics Data System (ADS)
An, Yanbin; Behnam, Ashkan; Pop, Eric; Bosman, Gijs; Ural, Ant
2015-09-01
Metal-semiconductor Schottky junction devices composed of chemical vapor deposition grown monolayer graphene on p-type silicon substrates are fabricated and characterized. Important diode parameters, such as the Schottky barrier height, ideality factor, and series resistance, are extracted from forward bias current-voltage characteristics using a previously established method modified to take into account the interfacial native oxide layer present at the graphene/silicon junction. It is found that the ideality factor can be substantially increased by the presence of the interfacial oxide layer. Furthermore, low frequency noise of graphene/silicon Schottky junctions under both forward and reverse bias is characterized. The noise is found to be 1/f dominated and the shot noise contribution is found to be negligible. The dependence of the 1/f noise on the forward and reverse current is also investigated. Finally, the photoresponse of graphene/silicon Schottky junctions is studied. The devices exhibit a peak responsivity of around 0.13 A/W and an external quantum efficiency higher than 25%. From the photoresponse and noise measurements, the bandwidth is extracted to be ~1 kHz and the normalized detectivity is calculated to be 1.2 × 10^9 cm Hz^(1/2) W^(-1). These results provide important insights for the future integration of graphene with silicon device technology.
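As background for the parameter extraction mentioned above, the sketch below fits the standard thermionic-emission relation I = I0*exp(qV/(n*k*T)) to the linear region of a forward-bias I-V curve to recover the ideality factor n. It is a minimal sketch of the conventional fit only; the authors' modification for the interfacial native oxide layer is not reproduced, and the synthetic data are assumptions.

```python
# Hedged sketch: ideality factor from the slope of ln(I) vs V in forward bias,
# using the standard thermionic-emission relation (no oxide-layer correction).
import numpy as np

q, k, T = 1.602e-19, 1.381e-23, 300.0   # charge (C), Boltzmann constant (J/K), temperature (K)

def ideality_factor(v_fwd, i_fwd):
    """Fit ln(I) vs V over the supplied (assumed exponential) region."""
    slope, _ = np.polyfit(np.asarray(v_fwd), np.log(np.asarray(i_fwd)), 1)
    return q / (k * T * slope)

# synthetic data generated with n = 1.8 to show the fit recovers it
v = np.linspace(0.10, 0.35, 20)
i = 1e-9 * np.exp(q * v / (1.8 * k * T))
print(ideality_factor(v, i))   # ~1.8
```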
Evaluating disease management program effectiveness: an introduction to time-series analysis.
Linden, Ariel; Adams, John L; Roberts, Nancy
2003-01-01
Currently, the most widely used method in the disease management (DM) industry for evaluating program effectiveness is referred to as the "total population approach." This model is a pretest-posttest design, with the most basic limitation being that without a control group, there may be sources of bias and/or competing extraneous confounding factors that offer a plausible rationale explaining the change from baseline. Furthermore, with the current inclination of DM programs to use financial indicators rather than program-specific utilization indicators as the principal measure of program success, additional biases are introduced that may cloud evaluation results. This paper presents a non-technical introduction to time-series analysis (using disease-specific utilization measures) as an alternative, and more appropriate, approach to evaluating DM program effectiveness than the current total population approach.
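One common concrete form of the time-series approach advocated above is segmented (interrupted time-series) regression on a disease-specific utilization rate, with terms for the pre-existing trend, the level change at program start, and the change in trend afterwards. The sketch below uses simulated monthly data and placeholder variable names; it illustrates the design, not the authors' specific model.

```python
# Hedged sketch of a segmented-regression interrupted time-series evaluation
# of a disease management program; data and names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
months = np.arange(36)                        # 18 months pre, 18 months post
post = (months >= 18).astype(int)
time_since = np.where(post == 1, months - 18, 0)
rate = 50 - 0.1 * months - 4.0 * post - 0.3 * time_since + rng.normal(0, 1.5, months.size)

df = pd.DataFrame({"rate": rate, "time": months, "post": post, "time_since": time_since})
its = smf.ols("rate ~ time + post + time_since", data=df).fit()
print(its.params)   # 'post' = level change at program start, 'time_since' = trend change
```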
Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.
Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E
2017-12-11
Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
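For orientation, one widely used linear approach of the kind the maximum-likelihood estimators are compared against is the occupancy (Lassen) plot: regress the baseline-minus-post change in regional binding on the baseline values across regions; the slope estimates occupancy. The sketch below assumes regional total volumes of distribution are available; the regional values are invented and the new maximum-likelihood methods themselves are not reproduced.

```python
# Hedged sketch of the conventional occupancy (Lassen) plot estimate of target
# occupancy from baseline and post-drug regional binding; values are illustrative.
import numpy as np

vt_baseline = np.array([3.2, 4.5, 6.1, 7.8, 9.4])   # regional V_T, pre-drug scan
vt_postdrug = np.array([2.4, 3.1, 4.0, 4.9, 5.7])   # regional V_T, post-drug scan

slope, intercept = np.polyfit(vt_baseline, vt_baseline - vt_postdrug, 1)
occupancy = slope                     # fraction of target occupied by the drug
v_nd = -intercept / slope             # x-intercept: nondisplaceable distribution volume
print(f"occupancy = {occupancy:.2f}, V_ND = {v_nd:.2f}")
```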
DIA-datasnooping and identifiability
NASA Astrophysics Data System (ADS)
Zaminpardaz, S.; Teunissen, P. J. G.
2018-04-01
In this contribution, we present and analyze datasnooping in the context of the DIA method. As the DIA method for the detection, identification and adaptation of mismodelling errors is concerned with estimation and testing, it is the combination of both that needs to be considered. This combination is rigorously captured by the DIA estimator. We discuss and analyze the DIA-datasnooping decision probabilities and the construction of the corresponding partitioning of misclosure space. We also investigate the circumstances under which two or more hypotheses are nonseparable in the identification step. By means of a theorem on the equivalence between the nonseparability of hypotheses and the inestimability of parameters, we demonstrate that one can forget about adapting the parameter vector for hypotheses that are nonseparable. However, as this concerns the complete vector and not necessarily functions of it, we also show that parameter functions may exist for which adaptation is still possible. It is shown what this adaptation looks like and how it changes the structure of the DIA estimator. To demonstrate the performance of the various elements of DIA-datasnooping, we apply the theory to some selected examples. We analyze how geometry changes in the measurement setup affect the testing procedure, by studying their partitioning of misclosure space, the decision probabilities and the minimal detectable and identifiable biases. The difference between these two minimal biases is highlighted by showing the difference between their corresponding contributing factors. We also show that if two alternative hypotheses, say Hi and Hj, are nonseparable, the testing procedure may have different levels of sensitivity to Hi-biases compared to the same Hj-biases.
Weight Bias and Psychosocial Implications for Acute Care of Patients With Obesity.
Smigelski-Theiss, Rachel; Gampong, Malisa; Kurasaki, Jill
2017-01-01
Obesity is a complex medical condition that has psychosocial and physiological implications for those suffering from the disease. Factors contributing to obesity such as depression, childhood experiences, and the physical environment should be recognized and addressed. Weight bias and stigmatization by health care providers and bedside clinicians negatively affect patients with obesity, hindering those patients from receiving appropriate care. To provide optimal care of patients with obesity or adiposity, health care providers must understand the physiological needs and requirements of this population while recognizing and addressing their own biases. The authors describe psychosocial and environmental factors that contribute to obesity, discuss health care providers' weight biases, and highlight implications for acute care of patients suffering from obesity. ©2017 American Association of Critical-Care Nurses.
Electronic transport in Thue-Morse gapped graphene superlattice under applied bias
NASA Astrophysics Data System (ADS)
Wang, Mingjing; Zhang, Hongmei; Liu, De
2018-04-01
We investigate theoretically the electronic transport properties of Thue-Morse gapped graphene superlattice under an applied electric field. The results indicate that the combined effect of the band gap and the applied bias breaks the angular symmetry of the transmission coefficient. The zero-averaged wave-number gap can be greatly modulated by the band gap and the applied bias, but its position is robust against change of the band gap. Moreover, the conductance and the Fano factor are strongly dependent not only on the Fermi energy but also on the band gap and the applied bias. In the vicinity of the new Dirac point, the minimum value of the conductance obviously decreases and the Fano factor gradually forms a Poissonian value plateau with increasing of the band gap.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
Cheung, Kei Long; Ten Klooster, Peter M; Smit, Cees; de Vries, Hein; Pieterse, Marcel E
2017-03-23
In public health monitoring of young people it is critical to understand the effects of selective non-response, in particular when a controversial topic is involved like substance abuse or sexual behaviour. Research that is dependent upon voluntary subject participation is particularly vulnerable to sampling bias. As respondents whose participation is hardest to elicit on a voluntary basis are also more likely to report risk behaviour, this potentially leads to underestimation of risk factor prevalence. Inviting adolescents to participate in a home-sent postal survey is a typical voluntary recruitment strategy with high non-response, as opposed to mandatory participation during school time. This study examines the extent to which prevalence estimates of adolescent health-related characteristics are biased due to different sampling methods, and whether this also biases within-subject analyses. Cross-sectional datasets collected in 2011 in Twente and IJsselland, two similar and adjacent regions in the Netherlands, were used. In total, 9360 youngsters in a mandatory sample (Twente) and 1952 youngsters in a voluntary sample (IJsselland) participated in the study. To test whether the samples differed on health-related variables, we conducted both univariate and multivariable logistic regression analyses controlling for any demographic differences between the samples. Additional multivariable logistic regressions were conducted to examine moderating effects of sampling method on associations between health-related variables. As expected, females, older individuals, and individuals with higher education levels were over-represented in the voluntary sample compared to the mandatory sample. Respondents in the voluntary sample tended to smoke less, consume less alcohol (ever, lifetime, and past four weeks), have better mental health, have better subjective health status, have more positive school experiences and have less sexual intercourse than respondents in the mandatory sample. No moderating effects were found for sampling method on associations between variables. This is one of the first studies to provide strong evidence that voluntary recruitment may lead to a strong non-response bias in health-related prevalence estimates in adolescents, as compared to mandatory recruitment. The resulting underestimation in prevalence of health behaviours and well-being measures appeared large, up to a four-fold lower proportion for self-reported alcohol consumption. Correlations between variables, though, appeared to be insensitive to sampling bias.
Publication Bias (The "File-Drawer Problem") in Scientific Inference
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.; DeVincenzi, Donald (Technical Monitor)
1999-01-01
Publication bias arises whenever the probability that a study is published depends on the statistical significance of its results. This bias, often called the file-drawer effect since the unpublished results are imagined to be tucked away in researchers' file cabinets, is potentially a severe impediment to combining the statistical results of studies collected from the literature. With almost any reasonable quantitative model for publication bias, only a small number of studies lost in the file drawer will produce a significant bias. This result contradicts the well-known Fail Safe File Drawer (FSFD) method for setting limits on the potential harm of publication bias, widely used in social, medical and psychic research. This method incorrectly treats the file drawer as unbiased, and almost always mis-estimates the seriousness of publication bias. A large body of not only psychic research, but also medical and social science studies, has mistakenly relied on this method to validate claimed discoveries. Statistical combination can be trusted only if it is known with certainty that all studies that have been carried out are included. Such certainty is virtually impossible to achieve in literature surveys.
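For reference, the classical FSFD calculation that the abstract critiques can be written in a few lines: it asks how many unpublished studies with mean effect zero would be needed to dilute the combined evidence to p = 0.05, which implicitly treats the file drawer as an unbiased collection rather than one selected against significance. The Z scores below are invented for illustration.

```python
# Hedged illustration of Rosenthal's fail-safe N ("file-drawer") calculation,
# shown here only to make concrete the method the abstract argues is flawed.
import numpy as np

def fail_safe_n(z_scores, z_crit=1.645):
    """Number of zero-effect studies needed to bring the combined one-tailed p to 0.05."""
    z = np.asarray(z_scores, dtype=float)
    k = z.size
    return (z.sum() ** 2) / (z_crit ** 2) - k

print(fail_safe_n([2.1, 1.7, 2.5, 1.9, 2.3]))   # implied number of studies "in the drawer"
```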
A scanning tunneling microscope break junction method with continuous bias modulation.
Beall, Edward; Yin, Xing; Waldeck, David H; Wierzbinski, Emil
2015-09-28
Single molecule conductance measurements on 1,8-octanedithiol were performed using the scanning tunneling microscope break junction method with an externally controlled modulation of the bias voltage. Application of an AC voltage is shown to improve the signal to noise ratio of low current (low conductance) measurements as compared to the DC bias method. The experimental results show that the current response of the molecule(s) trapped in the junction and the solvent media to the bias modulation can be qualitatively different. A model RC circuit which accommodates both the molecule and the solvent is proposed to analyze the data and extract a conductance for the molecule.
Neuropathic sensory symptoms: association with pain and psychological factors
Shaygan, Maryam; Böger, Andreas; Kröner-Herwig, Birgit
2014-01-01
Background A large number of population-based studies of chronic pain have considered neuropathic sensory symptoms to be associated with a high level of pain intensity and negative affectivity. The present study examines the question of whether this association previously found in non-selected samples of chronic pain patients can also be found in chronic pain patients with underlying pathology of neuropathic sensory symptoms. Methods Neuropathic sensory symptoms in 306 patients with chronic pain diagnosed as typical neuropathic pain, radiculopathy, fibromyalgia, or nociceptive back pain were assessed using the Pain DETECT Questionnaire. Two separate cluster analyses were performed to identify subgroups of patients with different levels of self-reported neuropathic sensory symptoms and, furthermore, to identify subgroups of patients with distinct patterns of neuropathic sensory symptoms (adjusted for individual response bias regarding specific symptoms). Results ANOVA (analysis of variance) results in typical neuropathic pain, radiculopathy, and fibromyalgia showed no significant differences between the three levels of neuropathic sensory symptoms regarding pain intensity, pain chronicity, pain catastrophizing, pain acceptance, and depressive symptoms. However, in nociceptive back pain patients, significant differences were found for all variables except pain chronicity. When controlling for the response bias of patients in ratings of symptoms, none of the patterns of neuropathic sensory symptoms were associated with pain and psychological factors. Conclusion Neuropathic sensory symptoms are not closely associated with higher levels of pain intensity and cognitive-emotional evaluations in chronic pain patients with underlying pathology of neuropathic sensory symptoms. The findings are discussed in terms of differential response bias in patients with versus without neuropathic sensory symptoms verified by clinical examination, medical tests, or underlying pathology of disease. Our results lend support to the importance of using adjusted scores, thereby eliminating the response bias, when investigating neuropathic symptoms self-reported by patients. PMID:24899808
Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee
2013-12-01
Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC the FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.
ERIC Educational Resources Information Center
Ramey, Christopher H.; Chrysikou, Evangelia G.; Reilly, Jamie
2013-01-01
Word learning is a lifelong activity constrained by cognitive biases that people possess at particular points in development. Age of acquisition (AoA) is a psycholinguistic variable that may prove useful toward gauging the relative weighting of different phonological, semantic, and morphological factors at different phases of language acquisition…
An Investigation of the Learning Strategies as Bias Factors in Second Language Cloze Tests
ERIC Educational Resources Information Center
Ajideh, Parviz; Yaghoubi-Notash, Massoud; Khalili, Abdolreza
2017-01-01
The present study investigated the contribution of the EFL students' learning strategies to the explanation of the variance in their results on language tests. More specifically, it examined the role of these strategies as bias factors in the results of English cloze tests. Based on this aim, first, 158 intermediate EFL learners were selected from…
Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching.
Austin, Peter C
2017-02-01
Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term "bias due to incomplete matching" to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used.
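A simplified, hedged sketch of the two-stage idea is given below: estimate propensity scores by logistic regression, match every treated subject to a nearest-neighbour control on the propensity score (so no treated subject is discarded), and then, within the matched sample, use a regression on the propensity score among the matched controls to impute each treated subject's untreated potential outcome. The data are simulated, the adjustment model is deliberately minimal, and variable names are placeholders; this is not the authors' full simulation design.

```python
# Hedged, simplified sketch of double propensity-score adjustment:
# nearest-neighbour matching on the PS plus PS-based covariate adjustment.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=(n, 3))
treat = rng.random(n) < 1 / (1 + np.exp(-(0.6 * x[:, 0] - 0.4 * x[:, 1])))
y = 1.0 * treat + x @ np.array([0.5, -0.3, 0.2]) + rng.normal(0, 1, n)   # true effect = 1.0

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# 1-to-1 nearest-neighbour match on the propensity score; every treated subject is kept
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treat].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treat].reshape(-1, 1))
matched_controls = np.flatnonzero(~treat)[idx.ravel()]

# covariate adjustment: model the untreated outcome on the PS within matched controls,
# then impute the missing untreated potential outcome for each treated subject
adj = LinearRegression().fit(ps[matched_controls].reshape(-1, 1), y[matched_controls])
y0_imputed = adj.predict(ps[treat].reshape(-1, 1))
att = (y[treat] - y0_imputed).mean()
print(f"estimated ATT = {att:.2f} (simulated true effect = 1.0)")
```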
Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching
2016-01-01
Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term “bias due to incomplete matching” to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used. PMID:25038071
Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E
2001-01-01
Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.
NASA Astrophysics Data System (ADS)
Zhang, Yunchao; Charles, Christine; Boswell, Roderick W.
2017-07-01
This experimental study shows the validity of Sheridan's method for determining plasma density in low pressure, weakly magnetized RF plasmas using ion saturation current data measured by a planar Langmuir probe. The ion density derived from Sheridan's method, which takes into account the sheath expansion around the negatively biased probe tip, shows good consistency with the electron density measured by a cylindrical RF-compensated Langmuir probe using the Druyvesteyn theory. The ion density obtained from the simplified method, which neglects the sheath expansion effect, overestimates the true density magnitude, e.g., by a factor of 3 to 12 for the present experiment.
States higher in racial bias spend less on disabled Medicaid enrollees.
Leitner, Jordan B; Hehman, Eric; Snowden, Lonnie R
2018-02-07
While there is considerable state-by-state variation in Medicaid disability expenditure, little is known about the factors that contribute to this variation. Since Blacks disproportionately benefit from Medicaid disability programs, we aimed to gain insight into whether racial bias towards Blacks is one factor that explains state-by-state variation in Medicaid disability expenditures. We compiled 1,764,927 responses of explicit and implicit racial bias from all 50 states and Washington D.C. to generate estimates of racial bias for each state (or territory). We then used these estimates to predict states' expenditure per disabled Medicaid enrollee. We also examined whether the relationship between racial bias and disabled Medicaid enrollee expenditure might vary according to states' level of income for Whites, income for Blacks, or conservatism. States with more explicit or implicit racial bias spent less per disabled Medicaid enrollee. This correlation was strongest in states where Whites had lower income, Blacks had higher income, or conservatism was high. Accordingly, these results suggest that racial bias might play a role in Medicaid disability expenditure in places where Whites have a lower economic advantage or there is a culture of conservatism. This research established correlations between state-level racial bias and Medicaid disability expenditure. Future research might build upon this work to understand the direction of causality and pathways that might explain these correlations. Copyright © 2018 Elsevier Ltd. All rights reserved.
Artieri, Carlo G; Fraser, Hunter B
2014-12-01
The recent advent of ribosome profiling-sequencing of short ribosome-bound fragments of mRNA-has offered an unprecedented opportunity to interrogate the sequence features responsible for modulating translational rates. Nevertheless, numerous analyses of the first riboprofiling data set have produced equivocal and often incompatible results. Here we analyze three independent yeast riboprofiling data sets, including two with much higher coverage than previously available, and find that all three show substantial technical sequence biases that confound interpretations of ribosomal occupancy. After accounting for these biases, we find no effect of previously implicated factors on ribosomal pausing. Rather, we find that incorporation of proline, whose unique side-chain stalls peptide synthesis in vitro, also slows the ribosome in vivo. We also reanalyze a method that implicated positively charged amino acids as the major determinant of ribosomal stalling and demonstrate that it produces false signals of stalling in low-coverage data. Our results suggest that any analysis of riboprofiling data should account for sequencing biases and sparse coverage. To this end, we establish a robust methodology that enables analysis of ribosome profiling data without prior assumptions regarding which positions spanned by the ribosome cause stalling. © 2014 Artieri and Fraser; Published by Cold Spring Harbor Laboratory Press.
A Bayesian approach to truncated data sets: An application to Malmquist bias in Supernova Cosmology
NASA Astrophysics Data System (ADS)
March, Marisa Cristina
2018-01-01
A problem commonly encountered in statistical analysis of data is that of truncated data sets. A truncated data set is one in which a number of data points are completely missing from a sample; this is in contrast to a censored sample, in which partial information is missing from some data points. In astrophysics this problem is commonly seen in a magnitude limited survey such that the survey is incomplete at fainter magnitudes, that is, certain faint objects are simply not observed. The effect of this 'missing data' is manifested as Malmquist bias and can result in biases in parameter inference if it is not accounted for. In Frequentist methodologies the Malmquist bias is often corrected for by analysing many simulations and computing the appropriate correction factors. One problem with this methodology is that the corrections are model dependent. In this poster we derive a Bayesian methodology for accounting for truncated data sets in problems of parameter inference and model selection. We first show the methodology for a simple Gaussian linear model and then go on to show the method for accounting for a truncated data set in the case of cosmological parameter inference with a magnitude limited supernova Ia survey.
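The core of such an approach is a likelihood that renormalises over the observable (untruncated) region. The sketch below is a minimal illustration of this idea for a single Gaussian with a known magnitude limit, using maximum likelihood for brevity; the Bayesian version described in the poster would place priors on the parameters and use the same truncated likelihood. The data and the limit m_lim are invented.

```python
# Minimal sketch: maximum-likelihood fit of a Gaussian to a magnitude-limited
# (truncated) sample.  The truncation limit m_lim and the data are hypothetical.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m_lim = 24.0                                   # survey magnitude limit (assumed known)
full = rng.normal(24.5, 1.0, size=5000)        # underlying population
obs = full[full < m_lim]                       # only objects brighter than the limit are observed

def neg_log_like(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # each observed point is drawn from N(mu, sigma) conditioned on m < m_lim,
    # so the density is renormalised by the truncation probability Phi((m_lim - mu) / sigma)
    log_norm = norm.logcdf((m_lim - mu) / sigma)
    return -(norm.logpdf(obs, mu, sigma) - log_norm).sum()

fit = minimize(neg_log_like, x0=[np.mean(obs), 0.0])
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"naive mean {obs.mean():.2f}  truncation-corrected mean {mu_hat:.2f}")
```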
NASA Astrophysics Data System (ADS)
Venema, Victor; Lindau, Ralf
2016-04-01
In an accompanying talk we show that well-homogenized national datasets warm more than temperatures from global collections averaged over the region of common coverage. In this poster we want to present auxiliary work about possible biases in the raw observations and on how well relative statistical homogenization can remove trend biases. There are several possible causes of cooling biases, which have not been studied much. Siting could be an important factor. Urban stations tend to move away from the centre to better locations. Many stations started inside of urban areas and are nowadays more outside. Even for villages the temperature difference between the centre and edge can be 0.5°C. When a city station moves to an airport, which often happened around WWII, this takes the station (largely) out of the urban heat island. During the 20th century the Stevenson screen was established as the dominant thermometer screen. This screen protected the thermometer much better against radiation than earlier designs. Deficits of earlier measurement methods have artificially warmed the temperatures in the 19th century. Newer studies suggest we may have underestimated the size of this bias. Currently we are in a transition to Automatic Weather Stations. The net global effect of this transition is not clear at this moment. Irrigation on average decreases the 2m-temperature by about 1 degree centigrade. At the same time, irrigation has increased significantly during the last century. People preferentially live in irrigated areas and weather stations serve agriculture. Thus it is possible that there is a higher likelihood that weather stations are erected in irrigated areas than elsewhere. In this case irrigation could lead to a spurious cooling trend. In the Parallel Observations Science Team of the International Surface Temperature Initiative (ISTI-POST) we are studying the influence of the introduction of Stevenson screens and Automatic Weather Stations using parallel measurements, as well as the influence of relocations. Previous validation studies of statistical homogenization unfortunately have some caveats when it comes to the large-scale trends. The main problem is that the validation datasets had a relatively large signal to noise ratio (SNR), i.e., they had a large break variance relative to the variance of the noise of the difference time series. Our recent work on multiple breakpoint detection methods shows that the SNR is very important and that for an SNR around 0.5 the segmentation is about as good as a random segmentation. If the corrections are computed with a composite reference that also contains breaks, corrections for network-wide transitions that are executed over short periods will reduce the obvious breaks in the single stations, but may not reduce the large-scale bias much. The joint correction method using a decomposition approach (ANOVA) can remove the bias when all breaks (predictors) are known. Any error in the predictors will, however, lead to undercorrection of any large-scale trend biases.
ERIC Educational Resources Information Center
Gibb, Brandon E.; Benas, Jessica S.; Grassia, Marie; McGeary, John
2009-01-01
In this study, we examined the roles of specific cognitive (attentional bias) and genetic ("5-HTTLPR") risk factors in the intergenerational transmission of depression. Focusing first on the link between maternal history of major depressive disorder (MDD) and children's attentional biases, we found that children of mothers with a history…
Overcoming Biases to Effectively Serve African American College Students: A Call to the Profession
ERIC Educational Resources Information Center
Duncan, Lonnie E.
2005-01-01
This article reexamines the help-seeking behavior of African American college students with a focus on possible counselor biases as well as biases in the settings in which counselors work. These issues are discussed as possible contributing factors to the underutilization of counseling by African American college students. Strategies to overcoming…
ERIC Educational Resources Information Center
Munder, Thomas; Fluckiger, Christoph; Gerger, Heike; Wampold, Bruce E.; Barth, Jurgen
2012-01-01
Many meta-analyses of comparative outcome studies found a substantial association of researcher allegiance (RA) and relative treatment effects. Therefore, RA is regarded as a biasing factor in comparative outcome research (RA bias hypothesis). However, the RA bias hypothesis has been criticized as causality might be reversed. That is, RA might be…
The role of cognitive biases and personality variables in subclinical delusional ideation.
Menon, Mahesh; Quilty, Lena Catherine; Zawadzki, John Anthony; Woodward, Todd Stephen; Sokolowski, Helen Moriah; Boon, Heather Shirley; Wong, Albert Hung Choy
2013-05-01
A number of cognitive biases, most notably a data gathering bias characterised by "jumping to conclusions" (JTC), and the "bias against disconfirmatory evidence" (BADE), have been shown to be associated with delusions and subclinical delusional ideation. Certain personality variables, particularly "openness to experience", are thought to be associated with schizotypy. Using structural equation modelling, we examined the association between two higher order subfactors ("aspects") of "openness to experience" (labelled "openness" and "intellect"), these cognitive biases, and their relationship to subclinical delusional ideation in 121 healthy, nonpsychiatric controls. Our results suggest that cognitive biases (specifically the data gathering bias and BADE) and the "openness" aspect are independently associated with subclinical delusional ideation, and the data gathering bias is weakly associated with "positive schizotypy". "Intellect" is negatively associated with delusional ideation and might play a potential protective role. Cognitive biases and personality are likely to be independent risk factors for the development of delusions.
Electrochemical force microscopy
Kalinin, Sergei V.; Jesse, Stephen; Collins, Liam F.; Rodriguez, Brian J.
2017-01-10
A system and method for electrochemical force microscopy are provided. The system and method are based on a multidimensional detection scheme that is sensitive to forces experienced by a biased electrode in a solution. The multidimensional approach allows separation of fast processes, such as double layer charging and charge relaxation, and slow processes, such as diffusion and faradaic reactions, as well as capturing the bias dependence of the response. The time-resolved and bias-dependent measurements can also allow probing both linear (small bias range) and non-linear (large bias range) electrochemical regimes and potentially the de-convolution of charge dynamics and diffusion processes from steric effects and electrochemical reactivity.
REVIEW OF IMPROVEMENTS IN RADIO FREQUENCY PHOTONICS
2017-09-01
control boards keep the MZM biased at quadrature. A couple of methods exist for bias control: optical power monitoring or second harmonic power... bias, referred to as low-biasing. The increased RF gain for operating at the low bias point comes from the improved optical gain of the sidebands... [List-of-figures fragment: Figure 3, Optical Gain for an MZM at Quadrature and Low Bias Operation; Figure 4, RF Gain for an MZM at Different...]
Kazmerski, Lawrence L.
1990-01-01
A method and apparatus for differential spectroscopic atomic imaging are disclosed for spatially resolving, imaging, and displaying not only individual atoms on a sample surface, but also bonding and the specific atomic species in such bonds. The apparatus includes a scanning tunneling microscope (STM) that is modified to include photon biasing, preferably a tuneable laser, modulating electronic surface biasing for the sample, and temperature biasing, preferably a vibration-free refrigerated sample mounting stage. Computer control and data processing and visual display components are also included. The method includes modulating the electronic bias voltage with and without selected photon wavelengths and frequency biasing under a stabilizing (usually cold) bias temperature to detect bonding and specific atomic species in the bonds as the STM rasters the sample. These data are processed along with atomic spatial topography data obtained from the STM raster scan to create a real-time visual image of the atoms on the sample surface.
Environmental risk factors and Parkinson's disease: An umbrella review of meta-analyses.
Bellou, Vanesa; Belbasis, Lazaros; Tzoulaki, Ioanna; Evangelou, Evangelos; Ioannidis, John P A
2016-02-01
Parkinson's disease is a neurological disorder with complex pathogenesis implicating both environmental and genetic factors. We aimed to summarise the environmental risk factors that have been studied for potential association with Parkinson's disease, assess the presence of diverse biases, and identify the risk factors with the strongest support. We searched PubMed from inception to September 18, 2015, to identify systematic reviews and meta-analyses of observational studies that examined associations between environmental factors and Parkinson's disease. For each meta-analysis we estimated the summary effect size by random-effects and fixed-effects models, the 95% confidence interval and the 95% prediction interval. We estimated the between-study heterogeneity expressed by I², evidence of small-study effects and evidence of excess significance bias. Overall, 75 unique meta-analyses on different risk factors for Parkinson's disease were examined, covering diverse biomarkers, dietary factors, drugs, medical history or comorbid diseases, exposure to toxic environmental agents and habits. 21 of 75 meta-analyses had results that were significant at p < 0.001 by random-effects. Evidence for an association was convincing (more than 1000 cases, p < 10⁻⁶ by random-effects, not large heterogeneity, 95% prediction interval excluding the null value and absence of hints for small-study effects and excess significance bias) for constipation and physical activity. Many environmental factors have substantial evidence of association with Parkinson's disease, but several, perhaps most, of them may reflect reverse causation, residual confounding, information bias, sponsor conflicts or other caveats. Copyright © 2016. Published by Elsevier Ltd.
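As a concrete illustration of the summary statistics named above (random-effects summary, I², and 95% prediction interval), the sketch below computes them for an invented set of study-level log odds ratios using the DerSimonian-Laird estimator; it is not the authors' code and the numbers are purely illustrative.

```python
# Minimal sketch of the kind of meta-analysis summary described above:
# DerSimonian-Laird random-effects estimate, I^2, and a 95% prediction interval.
# The log odds ratios and standard errors below are invented for illustration.
import numpy as np
from scipy import stats

yi = np.array([0.35, 0.10, 0.48, 0.22, 0.55, 0.05])   # study effect sizes (log OR)
se = np.array([0.15, 0.20, 0.18, 0.12, 0.25, 0.22])   # their standard errors

w_fixed = 1.0 / se**2
q = np.sum(w_fixed * (yi - np.average(yi, weights=w_fixed))**2)
df = len(yi) - 1
c = w_fixed.sum() - (w_fixed**2).sum() / w_fixed.sum()
tau2 = max(0.0, (q - df) / c)                          # DL between-study variance
i2 = max(0.0, (q - df) / q) * 100                      # heterogeneity (%)

w_re = 1.0 / (se**2 + tau2)
mu = np.average(yi, weights=w_re)
se_mu = np.sqrt(1.0 / w_re.sum())

# 95% prediction interval for the effect in a new study (t distribution, k-2 df)
t = stats.t.ppf(0.975, df=len(yi) - 2)
half = t * np.sqrt(tau2 + se_mu**2)
print(f"summary logOR {mu:.3f} (SE {se_mu:.3f}), I^2 {i2:.0f}%, "
      f"95% PI ({mu - half:.3f}, {mu + half:.3f})")
```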
Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J
2017-04-01
Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs), whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
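To make the RPSFTM idea concrete, the toy sketch below applies the counterfactual transformation U(psi) = T_off + exp(psi)·T_on to simulated, uncensored data and picks psi by balancing U between randomised arms. Real analyses use g-estimation with rank tests and handle censoring and recensoring; everything here (data, grid search, balance criterion) is a simplification for illustration only.

```python
# Much-simplified sketch of the rank preserving structural failure time (RPSFT)
# idea: recover a counterfactual "untreated" survival time U = T_off + exp(psi)*T_on
# and choose psi so that U is balanced between randomised arms.  Toy, uncensored data.
import numpy as np

rng = np.random.default_rng(1)
n = 500
arm = rng.integers(0, 2, n)                    # 1 = experimental, 0 = control
true_psi = -0.4                                # treatment multiplies event time by exp(-psi)

t_untreated = rng.exponential(12.0, n)         # latent untreated survival times
switch_frac = rng.uniform(0.3, 0.8, n)         # control switchers start treatment mid-follow-up
t_on = np.where(arm == 1, t_untreated, (1 - switch_frac) * t_untreated) * np.exp(-true_psi)
t_off = np.where(arm == 1, 0.0, switch_frac * t_untreated)
t_obs = t_off + t_on                           # observed (confounded) survival time

def imbalance(psi):
    u = t_off + np.exp(psi) * t_on             # counterfactual untreated time under psi
    return abs(u[arm == 1].mean() - u[arm == 0].mean())

grid = np.linspace(-1.0, 0.5, 301)
psi_hat = grid[np.argmin([imbalance(p) for p in grid])]
print(f"true psi {true_psi}, estimated psi {psi_hat:.2f}")
```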
2016-07-01
bias and scale factor tests. By testing state-of-the-art gyroscopes, the effect of input rate stability and accuracy may be examined. Based on the... • tumble test or bias analysis at a tilted position to remove the effect of Earth's rotation in the scale factor test • a rate table with better rate... [Reference fragment: format guide and test procedure for coriolis vibratory gyros. Piscataway (NJ): IEEE; 2004 Dec. 3. Maio A, Smith G, Knight R, Nothwang W, Conroy J]
Electrical properties of fluorine-doped ZnO nanowires formed by biased plasma treatment
NASA Astrophysics Data System (ADS)
Wang, Ying; Chen, Yicong; Song, Xiaomeng; Zhang, Zhipeng; She, Juncong; Deng, Shaozhi; Xu, Ningsheng; Chen, Jun
2018-05-01
Doping is an effective method for tuning electrical properties of zinc oxide nanowires, which are used in nanoelectronic devices. Here, ZnO nanowires were prepared by a thermal oxidation method. Fluorine doping was achieved by a biased plasma treatment, with bias voltages of 100, 200, and 300 V. Transmission electron microscopy indicated that the nanowires treated at bias voltages of 100 and 200 V featured low crystallinity. When the bias voltage was 300 V, the nanowires showed single crystalline structures. Photoluminescence measurements revealed that concentrations of oxygen and surface defects decreased at high bias voltage. X-ray photoelectron spectroscopy suggested that the F content increased as the bias voltage was increased. The conductivity of the as-grown nanowires was less than 10³ S/m; the conductivity of the treated nanowires ranged from 1 × 10⁴ to 5 × 10⁴, 1 × 10⁴ to 1 × 10⁵, and 1 × 10³ to 2 × 10⁴ S/m for bias voltage treatments at 100, 200, and 300 V, respectively. The conductivity improvements of nanowires formed at bias voltages of 100 and 200 V were attributed to F-doping, defects and surface states. The conductivity of nanowires treated at 300 V was attributed to the presence of F ions. Thus, we provide a method of improving electrical properties of ZnO nanowires without altering their crystal structure.
Antecedents and Consequences of Supplier Performance Evaluation Efficacy
2016-06-30
forming groups of high and low values. These tests are contingent on the reliable and valid measure of high and low rating inflation and high and... year)? Future research could deploy a SPM system as a test case on a limited set of transactions. Using a quasi-experimental design, comparisons... single source, common method bias must be of concern. Harman's one-factor test showed that when latent-indicator items were forced onto a single
Free-energy landscapes from adaptively biased methods: Application to quantum systems
NASA Astrophysics Data System (ADS)
Calvo, F.
2010-10-01
Several parallel adaptive biasing methods are applied to the calculation of free-energy pathways along reaction coordinates, choosing as a difficult example the double-funnel landscape of the 38-atom Lennard-Jones cluster. In the case of classical statistics, the Wang-Landau and adaptively biased molecular-dynamics (ABMD) methods are both found efficient if multiple walkers and replication and deletion schemes are used. An extension of the ABMD technique to quantum systems, implemented through the path-integral MD framework, is presented and tested on Ne38 against the quantum superposition method.
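To give a flavour of what "adaptively biased" means in practice, the toy below runs a Metropolis walker on a one-dimensional double-well "reaction coordinate" and deposits Gaussian hills where it visits, so that the negative of the accumulated bias approximates the free-energy profile. This is a metadynamics-style caricature, not the Wang-Landau or ABMD machinery of the paper, and the potential, temperature, and hill parameters are invented.

```python
# Toy illustration of adaptive biasing on a 1D double-well "reaction coordinate":
# Gaussian hills are deposited where the walker visits, flattening the landscape,
# and the negative of the accumulated bias estimates the free-energy profile.
import numpy as np

def potential(x):                       # double-well free-energy landscape
    return (x**2 - 1.0)**2

kT, hill_h, hill_w, step = 0.1, 0.02, 0.1, 0.1
centers = []                            # deposited hill centres

def bias(x):
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return float(np.sum(hill_h * np.exp(-(x - c)**2 / (2 * hill_w**2))))

rng = np.random.default_rng(2)
x = -1.0
for it in range(20000):
    x_new = x + rng.normal(0.0, step)
    dE = (potential(x_new) + bias(x_new)) - (potential(x) + bias(x))
    if dE < 0 or rng.random() < np.exp(-dE / kT):   # Metropolis step on the biased surface
        x = x_new
    if it % 20 == 0:
        centers.append(x)               # deposit a hill at the current position

grid = np.linspace(-1.8, 1.8, 200)
fes = -np.array([bias(g) for g in grid])
fes -= fes.min()                        # free-energy estimate up to an additive constant
# the exact barrier of (x^2 - 1)^2 between the two wells is 1.0
print("rough barrier estimate at x = 0:", round(float(fes[np.abs(grid).argmin()]), 2))
```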
Inadequacy of internal covariance estimation for super-sample covariance
NASA Astrophysics Data System (ADS)
Lacasa, Fabien; Kunz, Martin
2017-08-01
We give an analytical interpretation of how subsample-based internal covariance estimators lead to biased estimates of the covariance, due to underestimating the super-sample covariance (SSC). This includes the jackknife and bootstrap methods as estimators for the full survey area, and subsampling as an estimator of the covariance of subsamples. The limitations of the jackknife covariance have been previously presented in the literature because it is effectively a rescaling of the covariance of the subsample area. However we point out that subsampling is also biased, but for a different reason: the subsamples are not independent, and the corresponding lack of power results in SSC underprediction. We develop the formalism in the case of cluster counts that allows the bias of each covariance estimator to be exactly predicted. We find significant effects for a small-scale area or when a low number of subsamples is used, with auto-redshift biases ranging from 0.4% to 15% for subsampling and from 5% to 75% for jackknife covariance estimates. The cross-redshift covariance is even more affected; biases range from 8% to 25% for subsampling and from 50% to 90% for jackknife. Owing to the redshift evolution of the probe, the covariances cannot be debiased by a simple rescaling factor, and an exact debiasing has the same requirements as the full SSC prediction. These results thus disfavour the use of internal covariance estimators on data itself or a single simulation, leaving analytical prediction and simulations suites as possible SSC predictors.
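For reference, the delete-one jackknife covariance estimator discussed above has the following form when applied to counts in equal-area subsamples; the sketch uses invented Poisson counts (which carry no super-sample mode) purely to show the estimator's form, not the SSC correction derived in the paper.

```python
# Minimal sketch of a delete-one jackknife covariance estimate from sub-areas of a
# survey, the internal estimator whose bias the paper analyses.  Counts are invented.
import numpy as np

rng = np.random.default_rng(3)
n_sub = 50                               # number of equal-area subsamples
counts = rng.poisson(lam=[40.0, 15.0], size=(n_sub, 2))   # counts in two redshift bins

# delete-one jackknife: mean count per subsample with subsample i removed
jk_means = np.array([np.delete(counts, i, axis=0).mean(axis=0) for i in range(n_sub)])
d = jk_means - jk_means.mean(axis=0)
jk_cov = (n_sub - 1) / n_sub * d.T @ d   # jackknife covariance of the mean per subsample
cov_full_area = jk_cov * n_sub**2        # rescaled to the summed (full-survey) counts
print(cov_full_area)
```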
2016-03-03
Final Report: Investigation of a Neurocognitive Biomarker and of Methods to Mitigate Biases in Cognitive... (reporting period 5-Aug-2013 to 4-Aug-2014; U.S. Army Research Office, Research Triangle Park, NC; keywords: hemispheric activity, lateralization, cognition, fNIRS)
Comparing State SAT Scores: Problems, Biases, and Corrections.
ERIC Educational Resources Information Center
Gohmann, Stephen F.
1988-01-01
One method to correct for selection bias in comparing Scholastic Aptitude Test (SAT) scores among states, a modification of J. J. Heckman's selection bias correction (1976, 1979), is presented. Empirical results suggest that sample selection bias is present in SAT score regressions. (SLD)
Is probabilistic bias analysis approximately Bayesian?
MacLehose, Richard F.; Gustafson, Paul
2011-01-01
Case-control studies are particularly susceptible to differential exposure misclassification when exposure status is determined following incident case status. Probabilistic bias analysis methods have been developed as ways to adjust standard effect estimates based on the sensitivity and specificity of exposure misclassification. The iterative sampling method advocated in probabilistic bias analysis bears a distinct resemblance to a Bayesian adjustment; however, it is not identical. Furthermore, without a formal theoretical framework (Bayesian or frequentist), the results of a probabilistic bias analysis remain somewhat difficult to interpret. We describe, both theoretically and empirically, the extent to which probabilistic bias analysis can be viewed as approximately Bayesian. While the differences between probabilistic bias analysis and Bayesian approaches to misclassification can be substantial, these situations often involve unrealistic prior specifications and are relatively easy to detect. Outside of these special cases, probabilistic bias analysis and Bayesian approaches to exposure misclassification in case-control studies appear to perform equally well. PMID:22157311
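A minimal sketch of what a probabilistic (Monte Carlo) bias analysis of exposure misclassification looks like in practice is given below: sensitivity and specificity are drawn from assumed distributions, the observed 2x2 cells are back-corrected, and the odds ratio is recomputed. For brevity it treats the misclassification as non-differential and ignores random error; the cell counts and bias-parameter priors are invented.

```python
# Minimal sketch of probabilistic bias analysis for exposure misclassification in a
# 2x2 case-control table: draw sensitivity/specificity, back-correct the observed
# cells, and recompute the odds ratio.  Counts and priors are invented.
import numpy as np

rng = np.random.default_rng(4)
a, b = 120, 380   # cases: exposed, unexposed (observed, possibly misclassified)
c, d = 80, 420    # controls: exposed, unexposed

ors = []
for _ in range(10000):
    se = rng.beta(80, 20)                 # sensitivity prior, mean ~0.80
    sp = rng.beta(95, 5)                  # specificity prior, mean ~0.95
    # back-correction: true exposed = (observed exposed - (1 - sp) * total) / (se + sp - 1)
    A = (a - (1 - sp) * (a + b)) / (se + sp - 1)
    C = (c - (1 - sp) * (c + d)) / (se + sp - 1)
    B, D = (a + b) - A, (c + d) - C
    if min(A, B, C, D) <= 0:              # discard impossible corrections
        continue
    ors.append((A * D) / (B * C))

ors = np.array(ors)
print("observed OR", round((a * d) / (b * c), 2))
print("bias-adjusted OR, 2.5/50/97.5 pct:", np.round(np.percentile(ors, [2.5, 50, 97.5]), 2))
```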
When decision heuristics and science collide.
Yu, Erica C; Sprenger, Amber M; Thomas, Rick P; Dougherty, Michael R
2014-04-01
The ongoing discussion among scientists about null-hypothesis significance testing and Bayesian data analysis has led to speculation about the practices and consequences of "researcher degrees of freedom." This article advances this debate by asking the broader questions that we, as scientists, should be asking: How do scientists make decisions in the course of doing research, and what is the impact of these decisions on scientific conclusions? We asked practicing scientists to collect data in a simulated research environment, and our findings show that some scientists use data collection heuristics that deviate from prescribed methodology. Monte Carlo simulations show that data collection heuristics based on p values lead to biases in estimated effect sizes and Bayes factors and to increases in both false-positive and false-negative rates, depending on the specific heuristic. We also show that using Bayesian data collection methods does not eliminate these biases. Thus, our study highlights the little appreciated fact that the process of doing science is a behavioral endeavor that can bias statistical description and inference in a manner that transcends adherence to any particular statistical framework.
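One of the data-collection heuristics at issue, stopping data collection as soon as the p value drops below .05, is easy to illustrate with a small Monte Carlo like the one below; it is only a caricature of the simulations reported in the article, with invented sample sizes and thresholds.

```python
# Minimal sketch of a p-value-driven data-collection heuristic: "peek at the data,
# add more observations until p < .05 or a cap is reached".  Even with a true null
# effect this inflates the false-positive rate relative to a fixed-n design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def one_experiment(optional_stopping, n_start=20, n_max=100, batch=10):
    x = rng.normal(0.0, 1.0, n_start)          # true effect is zero
    while True:
        p = stats.ttest_1samp(x, 0.0).pvalue
        if not optional_stopping or p < 0.05 or len(x) >= n_max:
            return p < 0.05
        x = np.concatenate([x, rng.normal(0.0, 1.0, batch)])   # peek, then add more data

n_sim = 2000
fixed = np.mean([one_experiment(False) for _ in range(n_sim)])
peeking = np.mean([one_experiment(True) for _ in range(n_sim)])
print(f"false-positive rate: fixed n {fixed:.3f}, optional stopping {peeking:.3f}")
```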
Fetterman, Adam K.; Liu, Tianwei; Robinson, Michael D.
2014-01-01
Objective: The color psychology literature has made a convincing case that color is not just about aesthetics, but also about meaning. This work has involved situational manipulations of color, rendering it uncertain as to whether color-meaning associations can be used to characterize how people differ from each other. The present research focuses on the idea that the color red is linked to, or associated with, individual differences in interpersonal hostility. Method: Across four studies (N = 376), red preferences and perceptual biases were measured along with individual differences in interpersonal hostility. Results: It was found that: (a) a preference for the color red was higher as interpersonal hostility increased, (b) hostile people were biased to see the color red more frequently than non-hostile people, and (c) there was a relationship between a preference for the color red and hostile social decision-making. Conclusions: These studies represent an important extension of the color psychology literature, highlighting the need to attend to person-based, as well as situation-based, factors. PMID:24393102
Hansen, Hans; Weber, Reinhard
2009-02-01
An evaluation of tonal components in noise using a semantic differential approach yields several perceptual and connotative factors. This study investigates the effect of culture on these factors with the aid of equivalent listening tests carried out in Japan (n=20), France (n=23), and Germany (n=20). The data's equivalence level is determined by a bias analysis. This analysis gives insight into the cross-cultural validity of the scales used for sound character determination. Three factors were extracted by factor analysis in all cultural subsamples: pleasant, metallic, and power. By employing appropriate target rotations of the factor spaces, the rotated factors were compared and they yield high similarities between the different cultural subsamples. To check cross-cultural differences in means, an item bias analysis was conducted. The a priori assumption of unbiased scales is rejected; the differences obtained are partially linked to bias effects. Acoustical sound descriptors were additionally tested for the semantic dimensions. The high agreement in judgments between the different cultural subsamples contrasts with the moderate success of the signal parameters to describe the dimensions.
A brain MRI bias field correction method created in the Gaussian multi-scale space
NASA Astrophysics Data System (ADS)
Chen, Mingsheng; Qin, Mingxin
2017-07-01
A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to subsequent image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by the convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves the image details from the difference between the original image and the convolved image. Then, it obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after a gamma correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
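Read literally, the pipeline above (multi-scale Gaussian smoothing, detail extraction by subtraction, weighted recombination, gamma correction) can be sketched in a few lines. The scales, weights, and gamma value below are invented, and the code illustrates the idea rather than the authors' exact formulation.

```python
# Rough sketch of the multi-scale idea: smooth the image with Gaussians of
# increasing width, treat the differences (image minus smoothed image) as
# bias-reduced "detail" layers, recombine them with weights, and apply a gamma
# correction.  Parameters and test image are invented.
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_bias_field(img, sigmas=(2, 8, 32), weights=(0.5, 0.3, 0.2), gamma=0.8):
    img = img.astype(float)
    details = [img - gaussian_filter(img, s) for s in sigmas]   # multi-scale detail layers
    recombined = sum(w * d for w, d in zip(weights, details))
    recombined -= recombined.min()                              # shift to a non-negative range
    recombined /= recombined.max() + 1e-12
    return recombined ** gamma                                  # gamma correction for contrast

# toy example: a ramp-like bias field multiplying a synthetic "anatomy" image
rng = np.random.default_rng(6)
anatomy = (rng.random((128, 128)) > 0.5).astype(float)
bias_field = np.linspace(0.5, 1.5, 128)[None, :] * np.ones((128, 128))
corrected = correct_bias_field(anatomy * bias_field)
print(corrected.shape, corrected.min(), corrected.max())
```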
Fleischhauer, Monika; Enge, Sören; Miller, Robert; Strobel, Alexander; Strobel, Anja
2013-01-01
Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling (SEM), latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding.
An exploration of Intolerance of Uncertainty and memory bias.
Francis, Kylie; Dugas, Michel J; Ricard, Nathalie C
2016-09-01
Research suggests that individuals high in Intolerance of Uncertainty (IU) have information processing biases, which may explain the close relationship between IU and worry. Specifically, high IU individuals show an attentional bias for uncertainty, and negatively interpret uncertain information. However, evidence of a memory bias for uncertainty among high IU individuals is limited. This study therefore explored the relationship between IU and memory for uncertainty. In two separate studies, explicit and implicit memory for uncertain compared to other types of words was assessed. Cognitive avoidance and other factors that could influence information processing were also examined. IUS Factor 1 was a significant positive predictor of explicit memory for positive words, and IUS Factor 2 a significant negative predictor of implicit memory for positive words. Stimulus relevance and vocabulary were significant predictors of implicit memory for uncertain words. Cognitive avoidance was a significant predictor of both explicit and implicit memory for threat words. Female gender was a significant predictor of implicit memory for uncertain and neutral words. Word stimuli such as those used in these studies may not be the optimal way of assessing information processing biases related to IU. In addition, the predominantly female, largely student sample may limit the generalizability of the findings. Future research focusing on IU factors, stimulus relevance, and both explicit and implicit memory, was recommended. The potential role of cognitive avoidance on memory, information processing, and worry was explored. Copyright © 2016 Elsevier Ltd. All rights reserved.
Counseling women with early pregnancy failure: utilizing evidence, preserving preference.
Wallace, Robin R; Goodman, Suzan; Freedman, Lori R; Dalton, Vanessa K; Harris, Lisa H
2010-12-01
To apply principles of shared decision-making to early pregnancy failure (EPF) management counseling. To present a patient treatment priority checklist developed from review of available literature on patient priorities for EPF management. Review of evidence for patient preferences; personal, emotional, physical and clinical factors that may influence patient priorities for EPF management; and the clinical factors, resources, and provider bias that may influence current practice. Women have strong and diverse preferences for EPF management and report higher satisfaction when treated according to these preferences. However, estimates of actual treatment patterns suggest that current practice does not reflect the evidence for safety and acceptability of all options, or patient preferences. Multiple practice barriers and biases exist that may be influencing provider counseling about options for EPF management. Choosing management for EPF is a preference-sensitive decision. A patient-centered approach to EPF management should incorporate counseling about all treatment options. Providers can integrate a counseling model into EPF management practice that utilizes principles of shared decision-making and an organized method for eliciting patient preferences, priorities, and concerns about treatment options. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Efficacy of Chinese herbal medicine for stroke modifiable risk factors: a systematic review.
Peng, Wenbo; Lauche, Romy; Ferguson, Caleb; Frawley, Jane; Adams, Jon; Sibbritt, David
2017-01-01
The vast majority of stroke burden is attributable to its modifiable risk factors. This paper aimed to systematically summarise the evidence of Chinese herbal medicine (CHM) interventions on stroke modifiable risk factors for stroke prevention. A literature search was conducted via the MEDLINE, CINAHL/EBSCO, SCOPUS, and Cochrane Database from 1996 to 2016. Randomised controlled trials or cross-over studies were included. Risk of bias was assessed according to the Cochrane Risk of Bias tool. A total of 46 trials (6895 participants) were identified regarding the use of CHM interventions in the management of stroke risk factors, including 12 trials for hypertension, 10 trials for diabetes, eight trials for hyperlipidemia, seven trials for impaired glucose tolerance, three trials for obesity, and six trials for combined risk factors. Amongst the included trials with diverse study design, an intervention of CHM as a supplement to biomedicine and/or a lifestyle intervention was found to be more effective in lowering blood pressure, decreasing blood glucose level, helping impaired glucose tolerance reverse to normal, and/or reducing body weight compared to CHM monotherapy. While no trial reported deaths amongst the CHM groups, some papers do report moderate adverse effects associated with CHM use. However, the findings of such beneficial effects of CHM should be interpreted with caution due to the heterogeneous set of complex CHM studied, the various control interventions employed, the use of different participants' inclusion criteria, and low methodological quality across the published studies. The risk of bias of trials identified was largely unclear in the domains of selection bias and detection bias across the included studies. This study showed substantial evidence of varied CHM interventions improving the stroke modifiable risk factors. More rigorous research examining the use of CHM products for sole or multiple major stroke risk factors is warranted.
An examination of factors driving Chinese gamblers' fallacy bias.
Fong, Lawrence Hoc Nang; Law, Rob; Lam, Desmond
2014-09-01
Gambling is a leisure activity, which is enjoyed by many people around the world. Among these people, Chinese are known for their high propensity to gamble and are highly sought after by many casinos. In this exploratory study, the effect of two types of fallacy bias (positive recency and negative recency) on the betting behavior of Chinese gamblers is investigated. Although the influence of fallacy bias on a betting decision is well documented, little is known about the interaction of the factors that dictate fallacy bias. Drawing from an analysis of 2,645 betting decisions, the results show that Chinese gamblers primarily endorse positive recency, especially when the latest outcome is more frequent. This is contrary to most findings on Western subjects, in which negative recency is more common. Current findings have meaningful implications for casino gaming entertainment businesses and public policymakers.
The role of spinal concave–convex biases in the progression of idiopathic scoliosis
Driscoll, Mark; Moreau, Alain; Villemure, Isabelle; Parent, Stefan
2009-01-01
Inadequate understanding of risk factors involved in the progression of idiopathic scoliosis restrains initial treatment to observation until the deformity shows signs of significant aggravation. The purpose of this analysis is to explore whether the concave–convex biases associated with scoliosis (local degeneration of the intervertebral discs, nucleus migration, and local increase in trabecular bone-mineral density of vertebral bodies) may be identified as progressive risk factors. Finite element models of a 26° right thoracic scoliotic spine were constructed based on experimental and clinical observations that included growth dynamics governed by mechanical stimulus. Stress distribution over the vertebral growth plates, progression of Cobb angles, and vertebral wedging were explored in models with and without the biases of concave–convex properties. The inclusion of the bias of concave–convex properties within the model both augmented the asymmetrical loading of the vertebral growth plates by up to 37% and further amplified the progression of Cobb angles and vertebral wedging by as much as 5.9° and 0.8°, respectively. Concave–convex biases are factors that influence the progression of scoliotic curves. Quantifying these parameters in a patient with scoliosis may further provide a better clinical assessment of the risk of progression. PMID:19130096
Parr, Christine L; Hjartåker, Anette; Laake, Petter; Lund, Eiliv; Veierød, Marit B
2009-02-01
Case-control studies of melanoma have the potential for recall bias after much public information about the relation with ultraviolet radiation. Recall bias has been investigated in few studies and only for some risk factors. A nested case-control study of recall bias was conducted in 2004 within the Norwegian Women and Cancer Study: 208 melanoma cases and 2,080 matched controls were invited. Data were analyzed for 162 cases (response, 78%) and 1,242 controls (response, 77%). Questionnaire responses to several host factors and ultraviolet exposures collected at enrollment in 1991-1997 and in 2004 were compared, stratified on case-control status. Shifts in responses were observed among both cases and controls, but a shift in cases was observed only for skin color after chronic sun exposure, and a larger shift in cases was observed for nevi. Weighted kappa was lower for cases than for controls for most age intervals of sunburn, sunbathing vacations, and solarium use. Differences in odds ratio estimates of melanoma based on prospective and retrospective measurements indicate measurement error that is difficult to characterize. The authors conclude that indications of recall bias were found in this sample of Norwegian women, but that the results were inconsistent for the different exposures.
Relaxation and approximate factorization methods for the unsteady full potential equation
NASA Technical Reports Server (NTRS)
Shankar, V.; Ide, H.; Gorski, J.
1984-01-01
The unsteady form of the full potential equation is solved in conservation form, using implicit methods based on approximate factorization and relaxation schemes. A local time linearization for density is introduced to enable solution of the equation in terms of phi, the velocity potential. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity, to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi obtained from requirements of density continuity. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. Results are presented for flows over airfoils, cylinders, and spheres. Comparisons are made with available Euler and full potential results.
2009-04-01
completing in-person clinical assessments that include structured clinical interviews and psychological testing. As an introduction to the three... coordinator to opt out of the project. 2.2.2. Analyses of Response Bias. To test for response bias, we compared responders and non-responders to the... used to include these subjects without Wave 2 data in the final analyses. 2.3.2. Analyses of Response Bias. To test for response bias at Wave 3
Zurbrigg, Katherine J; Van den Borre, Nicole M
2013-03-01
The Ontario Farm call Surveillance Project (OFSP) was a practitioner-based, syndromic surveillance system for livestock disease. Three data-recording methods (paper, web-based, and handheld electronic) used by participating veterinarians were compared for timeliness (when the report arrived at the OFSP office), completeness of the report, and the usage and costs of incentives offered to veterinarians as compensation for their time to record data. There were no statistically significant differences in these parameters among the 3 data-recording methods. This indicates that different data-recording methods can be used within a single veterinary surveillance program while maintaining data integrity and timely reporting. Factors such as ease of data collection and providing incentives valued by veterinarians ensured high compliance and long-term participation in the project. It also increased the diversity of the participant group, reducing the likelihood of biased data submissions.
Huan, L N; Tejani, A M; Egan, G
2014-10-01
An increasing amount of recently published literature has implicated outcome reporting bias (ORB) as a major contributor to skewing data in both randomized controlled trials and systematic reviews; however, little is known about the current methods in place to detect ORB. This study aims to gain insight into the detection and management of ORB by biomedical journals. This was a cross-sectional analysis involving standardized questions via email or telephone with the top 30 biomedical journals (2012) ranked by impact factor. The Cochrane Database of Systematic Reviews was excluded, leaving 29 journals in the sample. Of 29 journals, 24 (83%) responded to our initial inquiry, of which 14 (58%) answered our questions and 10 (42%) declined participation. Five (36%) of the responding journals indicated they had a specific method to detect ORB, whereas 9 (64%) did not have a specific method in place. The prevalence of ORB in the review process seemed to differ: 4 (29%) journals indicated ORB was found commonly, whereas 7 (50%) indicated ORB was uncommon or never detected by their journal previously. The majority (n = 10/14, 72%) of journals were unwilling to report or make discrepancies found in manuscripts available to the public. Although in the minority, some journals (n = 4/14, 29%) described thorough methods to detect ORB. Many journals seemed to lack a method with which to detect ORB, and its estimated prevalence was much lower than that reported in the literature, suggesting inadequate detection. There exists a potential for overestimation of treatment effects of interventions and unclear risks. Fortunately, there are journals within this sample which appear to utilize comprehensive methods for detection of ORB, but overall, the data suggest improvements at the biomedical journal level for detecting and minimizing the effect of this bias are needed. © 2014 John Wiley & Sons Ltd.
Helzer, Erik G.; Connor-Smith, Jennifer K.; Reed, Marjorie A.
2009-01-01
This study investigated the influence of situational and dispositional factors on attentional biases toward social threat, and the impact of these attentional biases on distress in a sample of adolescents. Results suggest greater biases for personally-relevant threat cues, as individuals reporting high social stress were vigilant to subliminal social threat cues, but not physical threat cues, and those reporting low social stress showed no attentional biases. Individual differences in fearful temperament and attentional control interacted to influence attentional biases, with fearful temperament predicting biases to supraliminal social threat only for individuals with poor attentional control. Multivariate analyses exploring relations between attentional biases for social threat and symptoms of anxiety and depression revealed that attentional biases alone were rarely related to symptoms. However, biases did interact with social stress, fearful temperament, and attentional control to predict distress. Results are discussed in terms of automatic and effortful cognitive mechanisms underlying threat cue processing. PMID:18791905
Effects of Bias Modification Training in Binge Eating Disorder.
Schmitz, Florian; Svaldi, Jennifer
2017-09-01
Food-related attentional biases have been identified as maintaining factors in binge eating disorder (BED) as they can trigger a binge episode. Bias modification training may reduce symptoms, as it has been shown to be successful in other appetitive disorders. The aim of this study was to assess and modify food-related biases in BED. It was tested whether biases could be increased and decreased by means of a modified dot-probe paradigm, how long such bias modification persisted, and whether this affected subjective food craving. Participants were randomly assigned to a bias enhancement (attend to food stimulus) group or to a bias reduction (avoid food stimulus) group. Food-related attentional bias was found to be successfully reduced in the bias-reduction group, and effects persisted briefly. Additionally, subjective craving for food was influenced by the intervention, and possible mechanisms are discussed. Given these promising initial results, future research should investigate boundary conditions of the experimental intervention to understand how it could complement treatment of BED. Copyright © 2017. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Laird, Jamie S.; Onoda, Shinobu; Hirao, Toshio; Becker, Heidi; Johnston, Allan; Laird, Jamie S.; Itoh, Hisayoshi
2006-01-01
The effects of displacement damage and ionization damage induced by gamma irradiation on the dark current and impulse response of a high-bandwidth, low-breakdown-voltage Si avalanche photodiode have been investigated using picosecond laser microscopy. At doses as high as 10 Mrad(Si), minimal alteration in the impulse response and bandwidth was observed. However, dark current measurements performed with and without biased irradiation exhibit anomalously large damage factors for applied biases close to breakdown. The absence of any degradation in the impulse response is discussed, as are possible mechanisms for the higher dark current damage factors observed for biased irradiation.
ERIC Educational Resources Information Center
Johnson, Aleta Bok
2006-01-01
This article examines the etiology of social phobia, and proposes that the sensitivity to self-scrutiny common to social phobics can be exacerbated by the effects of longstanding racial bias. The impact of racism on identity and the importance of context are explored as salient factors in the onset of a case of social phobia for an…
INTERVENTION AT THE FOOT-SHOE-PEDAL INTERFACE IN COMPETITIVE CYCLISTS
Vicenzino, Bill; Sisto, Sue Ann
2016-01-01
Background: Competitive cyclists are susceptible to injury from the highly repetitive nature of pedaling during training and racing. Deviation from an optimal movement pattern is often cited as a factor contributing to tissue stress, with specific concern for excessive frontal plane knee motion. Wedges and orthoses are increasingly used at the foot-shoe-pedal interface (FSPI) in cycling shoes to alter the kinematics of the lower limb while cycling. Determination of the effect of FSPI alteration on cycling kinematics may offer a simple, inexpensive tool to reduce anterior knee pain in recreational and competitive cyclists. There have been a limited number of experimental studies examining the effect of this intervention in cyclists, and there is little agreement upon which FSPI interventions can prevent or treat knee injury. The purpose of this review is to provide a broader review of the literature than has been performed to date and to critically examine the evidence for FSPI intervention in competitive cyclists. Methods: Current literature examining the kinematic response to intervention at the FSPI while cycling was reviewed. A multi-database search was performed in PubMed, EBSCO, Scopus, CINAHL and SPORTdiscus. Eleven articles were reviewed, and a risk of bias assessment was performed according to guidelines developed by the Cochrane Bias Methods Group. Papers with a low risk of bias were selected for review, but two papers with a higher risk of bias were included as there were few high quality studies available on this topic. Results: Seven of the eleven papers had low bias in sequence generation, i.e., random allocation to the test condition; only one paper had blinding to group allocation; all papers had detailed but non-standardized methodology and incomplete data reporting, but were generally free of other bias sources. Conclusions: Wedges and orthoses at the FSPI alter kinematics of the lower limb while cycling, although conclusions about their efficacy and response to long-term use are limited. Further high quality experimental studies are needed examining cyclists using standardized methodology and products currently used to alter FSPI function. Level of Evidence: 3. PMID:27525187
Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions
NASA Astrophysics Data System (ADS)
Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.
2010-12-01
Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only observed temporal variability on a point-by-point basis, not spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs to preserve both the long-term temporal mean and variance of the precipitation data, and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated that preserve the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall. The spatiotemporal variability of the spatio-temporally bias-corrected GCMs was evaluated against gridded observations, and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic response of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions was compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
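A reduced sketch of the temporal part of such a bias correction, rescaling each grid cell so that the GCM series matches the long-term mean and variance of the gridded observations, is shown below. The spatially correlated random-field step that the study adds to preserve daily spatial structure is omitted, and all arrays are synthetic.

```python
# Simplified sketch of a per-grid-cell mean/variance bias correction of daily
# GCM precipitation against gridded observations.  All data below are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n_days, ny, nx = 365 * 40, 10, 10
obs = rng.gamma(shape=0.8, scale=5.0, size=(n_days, ny, nx))       # "observed" daily rain
gcm = 1.3 * rng.gamma(shape=0.6, scale=7.0, size=(n_days, ny, nx)) # biased model output

def mean_variance_correct(model, reference):
    m_mu, m_sd = model.mean(axis=0), model.std(axis=0)
    r_mu, r_sd = reference.mean(axis=0), reference.std(axis=0)
    corrected = (model - m_mu) / m_sd * r_sd + r_mu
    return np.clip(corrected, 0.0, None)        # precipitation cannot be negative

gcm_bc = mean_variance_correct(gcm, obs)
print("mean bias before:", float((gcm - obs).mean()), "after:", float((gcm_bc - obs).mean()))
```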
NASA Astrophysics Data System (ADS)
Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.
2010-12-01
Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, in contrast to the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be soundly done. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through a successive correction of residuals resulting from a Gaussian kernel smoother applied on spatial samples. The CMA-OAS, first, poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential optimization. The procedure is suited to relatively sparse rain gauge networks. To show the procedure, six storms are analyzed at hourly steps over 10,663 km². Results generally indicated improved quality with respect to the other methods evaluated: a standard mean-field bias adjustment, an OAS spatially variable adjustment with multiplicative factors, ordinary cokriging, and kriging with external drift. In theory, it could be equally applicable to gauge-satellite estimates and other hydrometeorological variables.
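The successive-correction core that CMA-OAS builds on can be illustrated in a few lines: take the radar field as first guess, compute residuals at the gauges, spread them onto the grid with a Gaussian kernel, and repeat with a shrinking influence radius. The sketch below does only this generic step, with synthetic coordinates and rainfall values; the multiplicative-additive bias decomposition and the per-gauge optimization of CMA-OAS are not reproduced.

```python
# Minimal sketch of a successive-correction (objective analysis) pass for merging a
# radar first guess with sparse rain gauges.  Coordinates and values are synthetic.
import numpy as np

rng = np.random.default_rng(8)
grid_x, grid_y = np.meshgrid(np.arange(50), np.arange(50))
radar = 5.0 + 0.1 * grid_x                        # first-guess field (mm)
gauge_xy = rng.uniform(0, 50, size=(15, 2))       # sparse gauge locations
gauge_val = 8.0 + 0.1 * gauge_xy[:, 0] + rng.normal(0, 0.5, 15)   # "true" rain at gauges

analysis = radar.copy()
for radius in (20.0, 10.0, 5.0):                  # successive passes, shrinking radius
    gi = gauge_xy.astype(int)                     # nearest grid cell of each gauge
    resid = gauge_val - analysis[gi[:, 1], gi[:, 0]]
    # Gaussian weights of every gauge to every grid cell
    d2 = (grid_x[..., None] - gauge_xy[:, 0])**2 + (grid_y[..., None] - gauge_xy[:, 1])**2
    w = np.exp(-d2 / (2 * radius**2))
    analysis += (w * resid).sum(axis=-1) / (w.sum(axis=-1) + 1e-12)

print("mean radar field:", radar.mean(), "mean analysed field:", analysis.mean())
```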
Study of the Dependence on Magnetic Field and Bias Voltage of an AC-Biased TES Microcalorimeter
NASA Technical Reports Server (NTRS)
Bandler, Simon
2011-01-01
At SRON we are studying the performance of a Goddard Space Flight Center single-pixel TES microcalorimeter operated in the AC bias configuration. For x-ray photons at 6 keV the AC-biased pixel shows a best energy resolution of 3.7 eV, which is about a factor of 2 worse than the energy resolution observed in identical DC-biased pixels. To better understand the reasons for this discrepancy, we investigated the detector performance as a function of temperature, bias working point and applied magnetic field. A strong periodic dependence of the detector noise on the TES AC bias voltage is measured. We discuss the results in the framework of the recent weak-link behaviour observed in TES microcalorimeters.
Invited Commentary: The Need for Cognitive Science in Methodology.
Greenland, Sander
2017-09-15
There is no complete solution for the problem of abuse of statistics, but methodological training needs to cover cognitive biases and other psychosocial factors affecting inferences. The present paper discusses 3 common cognitive distortions: 1) dichotomania, the compulsion to perceive quantities as dichotomous even when dichotomization is unnecessary and misleading, as in inferences based on whether a P value is "statistically significant"; 2) nullism, the tendency to privilege the hypothesis of no difference or no effect when there is no scientific basis for doing so, as when testing only the null hypothesis; and 3) statistical reification, treating hypothetical data distributions and statistical models as if they reflect known physical laws rather than speculative assumptions for thought experiments. As commonly misused, null-hypothesis significance testing combines these cognitive problems to produce highly distorted interpretation and reporting of study results. Interval estimation has so far proven to be an inadequate solution because it involves dichotomization, an avenue for nullism. Sensitivity and bias analyses have been proposed to address reproducibility problems (Am J Epidemiol. 2017;186(6):646-647); these methods can indeed address reification, but they can also introduce new distortions via misleading specifications for bias parameters. P values can be reframed to lessen distortions by presenting them without reference to a cutoff, providing them for relevant alternatives to the null, and recognizing their dependence on all assumptions used in their computation; they nonetheless require rescaling for measuring evidence. I conclude that methodological development and training should go beyond coverage of mechanistic biases (e.g., confounding, selection bias, measurement error) to cover distortions of conclusions produced by statistical methods and psychosocial forces. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
ERIC Educational Resources Information Center
Okanda, Mako; Itakura, Shoji
2011-01-01
Previous studies have suggested that younger preschoolers exhibit a yes bias due to underdeveloped cognitive abilities, whereas older preschoolers exhibit a response bias due to other factors. To test this hypothesis, we investigated the response latency to yes-no questions pertaining to familiar and unfamiliar objects in 3- to 6-year-olds. The…
Koscik, Timothy R.; Tranel, Daniel
2013-01-01
People tend to assume that outcomes are caused by dispositional factors, e.g., a person’s constitution or personality, even when the actual cause is due to situational factors, e.g., luck or coincidence. This is known as the ‘correspondence bias.’ This tendency can lead normal, intelligent persons to make suboptimal decisions. Here, we used a neuropsychological approach to investigate the neural basis of the correspondence bias, by studying economic decision-making in patients with damage to the ventromedial prefrontal cortex (vmPFC). Given the role of the vmPFC in social cognition, we predicted that vmPFC is necessary for the normal correspondence bias. In our experiment, consistent with expectations, healthy (N=46) and brain-damaged (N=30) comparison participants displayed the correspondence bias when investing and invested no differently when given dispositional or situational information. By contrast, vmPFC patients (N=17) displayed a lack of correspondence bias and invested more when given dispositional than situational information. The results support the conclusion that vmPFC is critical for normal social inference and the correspondence bias, and our findings help clarify the important (and potentially disadvantageous) role of social inference in economic decision-making. PMID:23574584
Automated detection of heuristics and biases among pathologists in a computer-based system.
Crowley, Rebecca S; Legowski, Elizabeth; Medvedeva, Olga; Reitmeyer, Kayse; Tseytlin, Eugene; Castine, Melissa; Jukic, Drazen; Mello-Thoms, Claudia
2013-08-01
The purpose of this study is threefold: (1) to develop an automated, computer-based method to detect heuristics and biases as pathologists examine virtual slide cases, (2) to measure the frequency and distribution of heuristics and errors across three levels of training, and (3) to examine relationships of heuristics to biases, and biases to diagnostic errors. The authors conducted the study using a computer-based system to view and diagnose virtual slide cases. The software recorded participant responses throughout the diagnostic process, and automatically classified participant actions based on definitions of eight common heuristics and/or biases. The authors measured frequency of heuristic use and bias across three levels of training. Biases studied were detected at varying frequencies, with availability and search satisficing observed most frequently. There were few significant differences by level of training. For representativeness and anchoring, the heuristic was used appropriately as often or more often than it was used in biased judgment. Approximately half of the diagnostic errors were associated with one or more biases. We conclude that heuristic use and biases were observed among physicians at all levels of training using the virtual slide system, although their frequencies varied. The system can be employed to detect heuristic use and to test methods for decreasing diagnostic errors resulting from cognitive biases.
NASA Technical Reports Server (NTRS)
Pauwels, V. R. N.; DeLannoy, G. J. M.; Hendricks Franssen, H.-J.; Vereecken, H.
2013-01-01
In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the discrete Kalman filter, and the state variables using the ensemble Kalman filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observation and forecasts into state, forecast bias and observation bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments, focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and resulting discharge as opposed to the use of a bias-unaware ensemble Kalman filter. Furthermore, minimal code modification in existing data assimilation software is needed to implement the method. The results suggest that a better performance of data assimilation methods should be possible if both forecast and observation biases are taken into account.
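As a minimal, hedged illustration of the general idea of estimating a forecast bias alongside the state (not the authors' two-stage hybrid filter, which partitions the innovation among state, forecast-bias and observation-bias updates), the sketch below augments a scalar linear system with a constant bias term and runs an ordinary Kalman filter on the augmented state.

```python
import numpy as np

def kf_with_forecast_bias(z, a=0.95, q=0.05, r=0.5):
    """Kalman filter on an augmented state [state, forecast bias] for a
    scalar linear model x_k = a*x_{k-1} + b + noise, observed directly.
    Illustrative only; not the paper's two-stage hybrid scheme."""
    F = np.array([[a, 1.0],     # the forecast carries an additive bias b
                  [0.0, 1.0]])  # the bias is modelled as (nearly) constant
    H = np.array([[1.0, 0.0]])  # only the state is observed, not the bias
    Q = np.diag([q, 1e-4])
    x, P = np.zeros(2), np.eye(2)
    out = []
    for zk in z:
        x = F @ x                       # forecast step
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r             # innovation variance
        K = P @ H.T / S                 # gain splits the innovation between
        x = x + (K * (zk - H @ x)).ravel()   # state and bias corrections
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)                # columns: state estimate, bias estimate
```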
Mi, Xiaojuan; Hammill, Bradley G; Curtis, Lesley H; Greiner, Melissa A; Setoguchi, Soko
2013-08-01
To assess the extent of immortal time bias in estimating the clinical effectiveness of implantable cardioverter-defibrillators (ICDs) and the impact of methods of handling immortal time bias. Retrospective population-based cohort study of patients with heart failure in a national registry linked to Medicare claims (2003-2008). We compared three methods of handling immortal time bias, namely the Mantel-Byar (or time-dependent exposure assignment), the landmark, and the exclusion methods. Of the 5,226 study patients, 1,274 (24.4%) received ICD therapy. Total person-years in the Mantel-Byar method were 2,639, or 490 more than in the exclusion method, reflecting potential immortal time in the study. The exclusion method yielded a hazard ratio of 0.71 (95% confidence interval [CI]: 0.63-0.80), which was 16% lower than the Mantel-Byar method (0.84; 95% CI: 0.75-0.95). The 120-day landmark method yielded similar results to those produced by the Mantel-Byar method (0.82; 95% CI: 0.72-0.95). Immortal time bias was detected in the ICD clinical effectiveness study, which might have led to substantial overestimation of the treatment effect if it had been handled by exclusion. When an appropriate landmark was selected, that method yielded similar hazard ratios to those obtained by the Mantel-Byar method, supporting the validity of the landmark method. Copyright © 2013 Elsevier Inc. All rights reserved.
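The core of the Mantel-Byar (time-dependent exposure) approach is that waiting time before treatment is counted as unexposed person-time rather than excluded or misclassified. A minimal sketch of that person-time split is shown below; the column names (id, followup, event, treat_time) are hypothetical, and the resulting long-format table could then be fed to a time-varying Cox model.

```python
import pandas as pd

def split_immortal_time(df):
    """Split each patient's follow-up at treatment start so the waiting
    time is attributed to the untreated group (the Mantel-Byar idea).
    Hypothetical columns: id, followup, event, treat_time (NaN if never treated)."""
    rows = []
    for _, r in df.iterrows():
        if pd.isna(r.treat_time) or r.treat_time >= r.followup:
            rows.append(dict(id=r.id, start=0.0, stop=r.followup,
                             treated=0, event=r.event))
        else:
            # untreated person-time before treatment: event-free by definition
            rows.append(dict(id=r.id, start=0.0, stop=r.treat_time,
                             treated=0, event=0))
            # treated person-time from treatment start to end of follow-up
            rows.append(dict(id=r.id, start=r.treat_time, stop=r.followup,
                             treated=1, event=r.event))
    return pd.DataFrame(rows)
```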
Nonparametric and Semiparametric Regression Estimation for Length-biased Survival Data
Shen, Yu; Ning, Jing; Qin, Jing
2016-01-01
For the past several decades, nonparametric and semiparametric modeling for conventional right-censored survival data has been investigated intensively under a noninformative censoring mechanism. However, these methods may not be applicable for analyzing right-censored survival data that arise from prevalent cohorts when the failure times are subject to length-biased sampling. This review article is intended to provide a summary of some newly developed methods as well as established methods for analyzing length-biased data. PMID:27086362
Koch, Amanda J; D'Mello, Susan D; Sackett, Paul R
2015-01-01
Gender bias continues to be a concern in many work settings, leading researchers to identify factors that influence workplace decisions. In this study we examine several of these factors, using an organizing framework of sex distribution within jobs (including male- and female-dominated jobs as well as sex-balanced, or integrated, jobs). We conducted random effects meta-analyses including 136 independent effect sizes from experimental studies (N = 22,348) and examined the effects of decision-maker gender, amount and content of information available to the decision maker, type of evaluation, and motivation to make careful decisions on gender bias in organizational decisions. We also examined study characteristics such as type of participant, publication year, and study design. Our findings revealed that men were preferred for male-dominated jobs (i.e., gender-role congruity bias), whereas no strong preference for either gender was found for female-dominated or integrated jobs. Second, male raters exhibited greater gender-role congruity bias than did female raters for male-dominated jobs. Third, gender-role congruity bias did not consistently decrease when decision makers were provided with additional information about those they were rating, but gender-role congruity bias was reduced when information clearly indicated high competence of those being evaluated. Fourth, gender-role congruity bias did not differ between decisions that required comparisons among ratees and decisions made about individual ratees. Fifth, decision makers who were motivated to make careful decisions tended to exhibit less gender-role congruity bias for male-dominated jobs. Finally, for male-dominated jobs, experienced professionals showed smaller gender-role congruity bias than did undergraduates or working adults. (c) 2015 APA, all rights reserved.
Benwell, Christopher S Y; Harvey, Monika; Gardner, Stephanie; Thut, Gregor
2013-03-01
Systematic biases in spatial attention are a common finding. In the general population, a systematic leftward bias is typically observed (pseudoneglect), possibly as a consequence of right hemisphere dominance for visuospatial attention. However, this leftward bias can cross-over to a systematic rightward bias with changes in stimulus and state factors (such as line length and arousal). The processes governing these changes are still unknown. Here we tested models of spatial attention as to their ability to account for these effects. To this end, we experimentally manipulated both stimulus and state factors, while healthy participants performed a computerized version of a landmark task. State was manipulated by time-on-task (>1 h) leading to increased fatigue and a reliable left- to rightward shift in spatial bias. Stimulus was manipulated by presenting either long or short lines which was associated with a shift of subjective midpoint from a reliable leftward bias for long to a more rightward bias for short lines. Importantly, we found time-on-task and line length effects to be additive suggesting a common denominator for line bisection across all conditions, which is in disagreement with models that assume that bisection decisions in long and short lines are governed by distinct processes (Magnitude estimation vs Global/local distinction). Our findings emphasize the dynamic rather than static nature of spatial biases in midline judgement. They are best captured by theories of spatial attention positing that spatial bias is flexibly modulated, and subject to inter-hemispheric balance which can change over time or conditions to accommodate task demands or reflect fatigue. Copyright © 2012 Elsevier Ltd. All rights reserved.
Zhu, Qiaohao; Carriere, K C
2016-01-01
Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detection and correction for publication bias in meta-analysis focuses mainly on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem, and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations in which publication bias may be induced by: (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology, and to compare it with the non-parametric Trim and Fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
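As a minimal sketch of the truncated-distribution framing under the simplest possible assumptions (a common, known within-study standard error and publication only when the estimate exceeds a known cut point, i.e. selection by p-value), the underlying mean can be recovered by maximizing a left-truncated normal likelihood. This is an illustration, not the paper's fixed- and random-effects estimators.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def mle_truncated_mean(y, se, cutpoint):
    """Maximum-likelihood estimate of the underlying mean effect when only
    estimates above `cutpoint` are published (left-truncated normal)."""
    def neg_loglik(mu):
        z = (y - mu) / se
        log_f = norm.logpdf(z) - np.log(se)            # untruncated density
        log_norm = norm.logsf((cutpoint - mu) / se)    # publication probability
        return -np.sum(log_f - log_norm)
    res = minimize_scalar(neg_loglik, bounds=(-5.0, 5.0), method="bounded")
    return res.x

# Toy example: true mean 0.1, se 0.2, published only if estimate > 1.96*se
rng = np.random.default_rng(1)
y_all = rng.normal(0.1, 0.2, 20000)
published = y_all[y_all > 1.96 * 0.2]
print(mle_truncated_mean(published, 0.2, 1.96 * 0.2))  # close to 0.1; the naive mean is not
```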
Phelan, Sean M; Burke, Sara E; Hardeman, Rachel R; White, Richard O; Przedworski, Julia; Dovidio, John F; Perry, Sylvia P; Plankey, Michael; A Cunningham, Brooke; Finstad, Deborah; W Yeazel, Mark; van Ryn, Michelle
2017-11-01
Implicit and explicit bias among providers can influence the quality of healthcare. Efforts to address sexual orientation bias in new physicians are hampered by a lack of knowledge of school factors that influence bias among students. To determine whether medical school curriculum, role modeling, diversity climate, and contact with sexual minorities predict bias among graduating students against gay and lesbian people. Prospective cohort study. A sample of 4732 first-year medical students was recruited from a stratified random sample of 49 US medical schools in the fall of 2010 (81% response; 55% of eligible), of which 94.5% (4473) identified as heterosexual. Seventy-eight percent of baseline respondents (3492) completed a follow-up survey in their final semester (spring 2014). Medical school predictors included formal curriculum, role modeling, diversity climate, and contact with sexual minorities. Outcomes were year 4 implicit and explicit bias against gay men and lesbian women, adjusted for bias at year 1. In multivariate models, lower explicit bias against gay men and lesbian women was associated with more favorable contact with LGBT faculty, residents, students, and patients, and perceived skill and preparedness for providing care to LGBT patients. Greater explicit bias against lesbian women was associated with discrimination reported by sexual minority students (b = 1.43 [0.16, 2.71]; p = 0.03). Lower implicit sexual orientation bias was associated with more frequent contact with LGBT faculty, residents, students, and patients (b = -0.04 [-0.07, -0.01]; p = 0.008). Greater implicit bias was associated with more faculty role modeling of discriminatory behavior (b = 0.34 [0.11, 0.57]; p = 0.004). Medical schools may reduce bias against sexual minority patients by reducing negative role modeling, improving the diversity climate, and improving student preparedness to care for this population.
Recall bias in childhood atopic diseases among adults in the Odense Adolescence Cohort Study.
Mortz, Charlotte G; Andersen, Klaus E; Bindslev-Jensen, Carsten
2015-11-01
Atopic dermatitis (AD) is a common disease in childhood and an important risk factor for the later development of other atopic diseases. Many publications on childhood AD use questionnaires based on information obtained in adulthood, which introduces the possibility of recall bias. In a prospective cohort study, recall bias was evaluated in 1,501 unselected schoolchildren (mean age 14 years), first examined in 1995 with a standardized questionnaire combined with a clinical examination and re-examined in 2010. The lifetime prevalence of AD was 34.1% when including data obtained both during school age and 15 years later, compared with 23.6% when including data only from adulthood. The most important factors for remembering having had AD in childhood were: (i) long duration of dermatitis in childhood; (ii) adult hand eczema; and (iii) concomitant atopic disease. Recall bias for childhood AD affected the results of logistic regression on adult hand eczema and is a significant problem in retrospective epidemiological questionnaire studies evaluating previous AD as a risk factor for the development of other diseases.
Information-theoretic model comparison unifies saliency metrics
Kümmerer, Matthias; Wallis, Thomas S. A.; Bethge, Matthias
2015-01-01
Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seeks to predict fixations from images (sometimes termed “saliency” prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is because different metrics and models use different definitions of what a “saliency map” entails. For example, some metrics expect a model to account for image-independent central fixation bias whereas others will penalize a model that does. Here we bring saliency evaluation into the domain of information by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations on the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use. PMID:26655340
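The information-gain idea can be sketched as the average difference in log-likelihood (in bits per fixation) between a probabilistic saliency model and a baseline model (for example a center-bias map), evaluated at the fixated pixels. The array and variable names below are assumptions; the published framework additionally fits the scale, center bias, and blur of each model before such a comparison.

```python
import numpy as np

def information_gain(model_density, baseline_density, fix_rows, fix_cols):
    """Average log-likelihood difference (bits/fixation) between a model's
    fixation density and a baseline density, evaluated at fixated pixels.
    Both maps are assumed to be proper probability distributions over pixels."""
    p_model = model_density[fix_rows, fix_cols]
    p_base = baseline_density[fix_rows, fix_cols]
    return np.mean(np.log2(p_model) - np.log2(p_base))

# Toy usage: a uniform baseline versus a (normalized) random model map
h, w = 64, 64
baseline = np.full((h, w), 1.0 / (h * w))
model = np.random.rand(h, w); model /= model.sum()
rows = np.random.randint(0, h, 100); cols = np.random.randint(0, w, 100)
print(information_gain(model, baseline, rows, cols))
```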
Garrard, Lili; Price, Larry R.; Bott, Marjorie J.; Gajewski, Byron J.
2016-01-01
Item response theory (IRT) models provide an appropriate alternative to the classical ordinal confirmatory factor analysis (CFA) during the development of patient-reported outcome measures (PROMs). Current literature has identified the assessment of IRT model fit as both challenging and underdeveloped (Sinharay & Johnson, 2003; Sinharay, Johnson, & Stern, 2006). This study evaluates the performance of Ordinal Bayesian Instrument Development (OBID), a Bayesian IRT model with a probit link function approach, through applications in two breast cancer-related instrument development studies. The primary focus is to investigate an appropriate method for comparing Bayesian IRT models in PROMs development. An exact Bayesian leave-one-out cross-validation (LOO-CV) approach (Vehtari & Lampinen, 2002) is implemented to assess prior selection for the item discrimination parameter in the IRT model and subject content experts’ bias (in a statistical sense and not to be confused with psychometric bias as in differential item functioning) toward the estimation of item-to-domain correlations. Results support the utilization of content subject experts’ information in establishing evidence for construct validity when sample size is small. However, the incorporation of subject experts’ content information in the OBID approach can be sensitive to the level of expertise of the recruited experts. More stringent efforts need to be invested in the appropriate selection of subject experts to efficiently use the OBID approach and reduce potential bias during PROMs development. PMID:27667878
Chen, DaYang; Zhen, HeFu; Qiu, Yong; Liu, Ping; Zeng, Peng; Xia, Jun; Shi, QianYu; Xie, Lin; Zhu, Zhu; Gao, Ya; Huang, GuoDong; Wang, Jian; Yang, HuanMing; Chen, Fang
2018-03-21
Research based on a strategy of single-cell low-coverage whole genome sequencing (SLWGS) has enabled better reproducibility and accuracy for detection of copy number variations (CNVs). The whole genome amplification (WGA) method and sequencing platform are critical factors for successful SLWGS (<0.1 × coverage). In this study, we compared single cell and multiple cells sequencing data produced by the HiSeq2000 and Ion Proton platforms using two WGA kits and then comprehensively evaluated the GC-bias, reproducibility, uniformity and CNV detection among different experimental combinations. Our analysis demonstrated that the PicoPLEX WGA Kit resulted in higher reproducibility, lower sequencing error frequency but more GC-bias than the GenomePlex Single Cell WGA Kit (WGA4 kit) independent of the cell number on the HiSeq2000 platform. While on the Ion Proton platform, the WGA4 kit (both single cell and multiple cells) had higher uniformity and less GC-bias but lower reproducibility than those of the PicoPLEX WGA Kit. Moreover, on these two sequencing platforms, depending on cell number, the performance of the two WGA kits was different for both sensitivity and specificity on CNV detection. The results can help researchers who plan to use SLWGS on single or multiple cells to select appropriate experimental conditions for their applications.
Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method
NASA Astrophysics Data System (ADS)
Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil
2014-05-01
Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR), led by UN ESCWA, CORDEX RCM projections for the Middle East and North Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of the newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates showing pronounced dry and wet seasons. Results show that the regional climate models simulate temperatures that are too low and often have a displaced rainfall band compared to WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the analysis of the influence of the dry-frequency bias (i.e. climate models simulating too few rain days) on the bias-corrected projections and on the modification of the climate change signal by the DBS method.
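DBS itself fits parametric distributions and conditions the temperature correction on the wet/dry state; purely as an illustration of the underlying quantile-mapping idea, the sketch below performs simple empirical quantile mapping with hypothetical array names.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_quantiles=101):
    """Empirical quantile mapping: map each model value to the observed value
    at the same quantile of the reference-period distributions."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    return np.interp(model_future, model_q, obs_q)    # tails are clamped

# Toy check: a model that is 2 degrees too cold in the reference period
rng = np.random.default_rng(0)
obs = rng.normal(15.0, 5.0, 10000)
mod_hist = rng.normal(13.0, 5.0, 10000)
mod_fut = mod_hist + 3.0                              # warmer future climate
print(quantile_map(mod_hist, obs, mod_fut).mean())    # roughly 18 (= 15 + 3)
```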
Charfi, Iness; Nagi, Karim; Mnie-Filali, Ouissame; Thibault, Dominic; Balboni, Gianfranco; Schiller, Peter W.; Trudeau, Louis-Eric
2014-01-01
Signaling bias refers to G protein-coupled receptor ligand ability to preferentially activate one type of signal over another. Bias to evoke signaling as opposed to sequestration has been proposed as a predictor of opioid ligand potential for generating tolerance. Here we measured whether delta opioid receptor agonists preferentially inhibited cyclase activity over internalization in HEK cells. Efficacy (τ) and affinity (KA) values were estimated from functional data and bias was calculated from efficiency coefficients (log τ/KA). This approach better represented the data as compared to alternative methods that estimate bias exclusively from τ values. Log (τ/KA) coefficients indicated that SNC-80 and UFP-512 promoted cyclase inhibition more efficiently than DOR internalization as compared to DPDPE (bias factor for SNC-80: 50 and for UFP-512: 132). Molecular determinants of internalization were different in HEK293 cells and neurons with βarrs contributing to internalization in both cell types, while PKC and GRK2 activities were only involved in neurons. Rank orders of ligand ability to engage different internalization mechanisms in neurons were compared to rank order of Emax values for cyclase assays in HEK cells. Comparison revealed a significant reversal in rank order for cyclase Emax values and βarr-dependent internalization in neurons, indicating that these responses were ligand-specific. Despite this evidence, and because kinases involved in internalization were not the same across cellular backgrounds, it is not possible to assert if the magnitude and nature of bias revealed by rank orders of maximal responses is the same as the one measured in HEK cells. PMID:24022593
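A minimal sketch of turning log(τ/KA) transduction coefficients into a bias factor is shown below: the coefficient of a test ligand is referenced to a standard ligand within each pathway (Δlog), the two pathways are compared (ΔΔlog), and the bias factor is 10 raised to that difference. The numerical values are placeholders, not the paper's estimates.

```python
def bias_factor(log_tka_test, log_tka_ref):
    """Bias factor between two pathways for a test ligand, from log(tau/KA)
    transduction coefficients referenced to a standard ligand.
    Inputs are dicts keyed by pathway, e.g. {'cyclase': ..., 'internalization': ...}."""
    d_cyc = log_tka_test['cyclase'] - log_tka_ref['cyclase']
    d_int = log_tka_test['internalization'] - log_tka_ref['internalization']
    ddlog = d_cyc - d_int            # delta-delta log(tau/KA)
    return 10.0 ** ddlog             # >1 means biased toward cyclase inhibition

# Placeholder numbers only (not the paper's data)
ref = {'cyclase': 7.0, 'internalization': 6.5}     # reference ligand, e.g. DPDPE
test = {'cyclase': 8.2, 'internalization': 6.0}    # a cyclase-biased test ligand
print(bias_factor(test, ref))   # 10**1.7, roughly 50
```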
Automation bias: empirical results assessing influencing factors.
Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C
2014-05-01
To investigate the rate of automation bias - the propensity of people to over-rely on automated advice - and the factors associated with it. Tested factors were attitudinal (trust and confidence), non-attitudinal (decision support experience and clinical experience), and environmental (task difficulty). The paradigm of simulated decision support advice within a prescribing context was used. The study employed a within-participant before-after design, whereby 26 UK NHS General Practitioners were shown 20 hypothetical prescribing scenarios with prevalidated correct and incorrect answers - advice was incorrect in 6 scenarios. They were asked to prescribe for each case, followed by being shown simulated advice. Participants were then asked whether they wished to change their prescription, and the post-advice prescription was recorded. The rate of overall decision switching was captured. Automation bias was measured by negative consultations - correct to incorrect prescription switching. Participants changed prescriptions in 22.5% of scenarios. The pre-advice accuracy rate of the clinicians was 50.38%, which improved to 58.27% post-advice. The CDSS improved the decision accuracy in 13.1% of prescribing cases. The rate of automation bias, as measured by decision switches from correct pre-advice to incorrect post-advice, was 5.2% of all cases - a net improvement of 8%. More immediate factors such as trust in the specific CDSS, decision confidence, and task difficulty influenced the rate of decision switching. Lower clinical experience was associated with more decision switching. Age, DSS experience and trust in CDSS generally were not significantly associated with decision switching. This study adds to the literature surrounding automation bias in terms of its potential frequency and influencing factors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
A Large-Scale Analysis of Impact Factor Biased Journal Self-Citations.
Chorus, Caspar; Waltman, Ludo
2016-01-01
Based on three decades of citation data from across all fields of science, we study trends in impact factor biased self-citations of scholarly journals, using a purpose-built and easy-to-use citation-based measure. Our measure is given by the ratio between i) the relative share of journal self-citations to papers published in the last two years, and ii) the relative share of journal self-citations to papers published in preceding years. A ratio higher than one suggests that a journal's impact factor is disproportionally affected (inflated) by self-citations. Using recently reported survey data, we show that there is a relation between high values of our proposed measure and coercive journal self-citation malpractices. We use our measure to perform a large-scale analysis of impact factor biased journal self-citations. Our main empirical result is that the share of journals for which our measure has a (very) high value has remained stable between the 1980s and the early 2000s, but has since risen strongly in all fields of science. This time span corresponds well with the growing obsession with the impact factor as a journal evaluation measure over the last decade. Taken together, this suggests a trend of increasingly pervasive journal self-citation malpractices, with all due unwanted consequences such as inflated perceived importance of journals and biased journal rankings.
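One plausible, hedged reading of the proposed measure is a ratio of self-citation shares: the share of self-citations among citations to a journal's papers from the two impact-factor-relevant years, divided by the corresponding share for older papers. The sketch below implements that reading with hypothetical counts; the paper's exact normalization may differ.

```python
def if_biased_self_citation_ratio(self_recent, total_recent, self_older, total_older):
    """Ratio of the self-citation share among citations to papers from the two
    impact-factor years to the self-citation share among citations to older
    papers. Values well above 1 suggest impact-factor-inflating self-citation.
    (One plausible reading of the measure; the paper's normalization may differ.)"""
    share_recent = self_recent / total_recent
    share_older = self_older / total_older
    return share_recent / share_older

# Hypothetical journal: 30% self-citations to recent papers vs 10% to older ones
print(if_biased_self_citation_ratio(self_recent=60, total_recent=200,
                                    self_older=50, total_older=500))  # 3.0
```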
Adaptive-numerical-bias metadynamics.
Khanjari, Neda; Eslami, Hossein; Müller-Plathe, Florian
2017-12-05
A metadynamics scheme is presented in which the free energy surface is filled by progressively adding adaptive biasing potentials, obtained from the accumulated probability distribution of the collective variables. Instead of adding Gaussians with assigned height and width, as in the conventional metadynamics method, here we add a more realistic adaptive biasing potential to the Hamiltonian of the system. The shape of the adaptive biasing potential is adjusted on the fly by sampling over the visited states. As the top of the barrier is approached, the biasing potentials become wider. This reduces the problem of trapping the system in the niches introduced by the addition of Gaussians of fixed height in metadynamics. Our results for the free energy profiles of three test systems show that this method is more accurate and converges more quickly than conventional metadynamics, and is quite comparable (in accuracy and convergence rate) with the well-tempered metadynamics method. © 2017 Wiley Periodicals, Inc.
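Purely as a toy caricature of depositing a histogram-derived, adaptive biasing potential (rather than fixed-height Gaussians), the sketch below runs overdamped Langevin dynamics on a 1D double well and periodically adds a bias proportional to the accumulated probability distribution of the collective variable. It is not the authors' algorithm and makes no claim about its convergence properties.

```python
import numpy as np

def toy_adaptive_bias(n_steps=200000, dt=1e-3, kT=1.0,
                      deposit_every=500, height=0.2, bins=101):
    """Overdamped Langevin dynamics on the double well (x^2-1)^2 with a bias
    that is periodically updated from the accumulated histogram of the
    collective variable. A caricature only, not the paper's scheme."""
    rng = np.random.default_rng(0)
    grid = np.linspace(-2.5, 2.5, bins)
    dx = grid[1] - grid[0]
    bias = np.zeros(bins)
    counts = np.zeros(bins)
    x = -1.0
    for step in range(n_steps):
        # force = -d/dx [ (x^2-1)^2 + bias(x) ], bias gradient taken from the grid
        dbias = np.interp(x, grid[:-1] + dx / 2.0, np.diff(bias) / dx)
        force = -4.0 * x * (x ** 2 - 1.0) - dbias
        x += force * dt + np.sqrt(2.0 * kT * dt) * rng.normal()
        x = float(np.clip(x, grid[0], grid[-1]))
        counts[np.searchsorted(grid, x, side="right") - 1] += 1
        if (step + 1) % deposit_every == 0:
            p = counts / counts.sum()
            bias += height * p / p.max()   # well-visited (wide) regions get wide bias
    return grid, bias   # bias accumulates where the system dwells, flattening the wells
```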
Crowley, D Max; Coffman, Donna L; Feinberg, Mark E; Greenberg, Mark T; Spoth, Richard L
2014-04-01
Despite growing recognition of the important role implementation plays in successful prevention efforts, relatively little work has sought to demonstrate a causal relationship between implementation factors and participant outcomes. In turn, failure to explore the implementation-to-outcome link limits our understanding of the mechanisms essential to successful programming. This gap is partially due to the inability of current methodological procedures within prevention science to account for the multitude of confounders responsible for variation in implementation factors (i.e., selection bias). The current paper illustrates how propensity and marginal structural models can be used to improve causal inferences involving implementation factors not easily randomized (e.g., participant attendance). We first present analytic steps for simultaneously evaluating the impact of multiple implementation factors on prevention program outcome. Then, we demonstrate this approach for evaluating the impact of enrollment and attendance in a family program, over and above the impact of a school-based program, within PROSPER, a large-scale real-world prevention trial. Findings illustrate the capacity of this approach to successfully account for confounders that influence enrollment and attendance, thereby more accurately representing true causal relations. For instance, after accounting for selection bias, we observed a 5% reduction in the prevalence of 11th grade underage drinking for those who chose to receive a family program and school program compared to those who received only the school program. Further, we detected a 7% reduction in underage drinking for those with high attendance in the family program.
van Ool, Jans S; Snoeijen-Schouwenaars, Francesca M; Schelhaas, Helenius J; Tan, In Y; Aldenkamp, Albert P; Hendriksen, Jos G M
2016-07-01
Epilepsy is a neurological condition that is particularly common in people with intellectual disability (ID). The care for people with both epilepsy and ID is often complicated by the presence of neuropsychiatric disorders, defined as psychiatric symptoms, psychiatric disorders, and behavioral problems. The aim of this study was to investigate associations between epilepsy or epilepsy-related factors and neuropsychiatric comorbidities in patients with ID, and between ID and neuropsychiatric comorbidities in patients with epilepsy. We performed a systematic review of the literature published between January 1995 and January 2015, retrieved from PubMed/Medline, PsycINFO, and ERIC, and assessed the risk of bias using the SIGN-50 methodology. Forty-two studies were identified, fifteen of which were assessed as having a low or acceptable risk of bias. Neuropsychiatric comorbidities were examined in relation to epilepsy in nine studies; in relation to epilepsy-related factors, such as seizure activity, seizure type, and medication, in four studies; and in relation to the presence and degree of ID in five studies. We conclude that the presence of epilepsy alone was not a clear determinant of neuropsychiatric comorbidity in patients with ID, although a tendency towards negative mood symptoms was identified. Epilepsy-related factors indicating a more severe form of epilepsy were associated with neuropsychiatric comorbidity, as was the presence of ID (compared with its absence) in patients with epilepsy, although this should be validated in future research. A large proportion of the studies in this area is associated with a substantial risk of bias. There is a need for high-quality studies using standardized methods to enable clear conclusions to be drawn that might assist in improving the quality of care for this population. Copyright © 2016 Elsevier Inc. All rights reserved.
ADHD symptoms in healthy adults are associated with stressful life events and negative memory bias.
Vrijsen, Janna N; Tendolkar, Indira; Onnink, Marten; Hoogman, Martine; Schene, Aart H; Fernández, Guillén; van Oostrom, Iris; Franke, Barbara
2018-06-01
Stressful life events, especially Childhood Trauma, predict ADHD symptoms. Childhood Trauma and negatively biased memory are risk factors for affective disorders. The association of life events and bias with ADHD symptoms may inform about the etiology of ADHD. Memory bias was tested using a computer task in N = 675 healthy adults. Life events and ADHD symptoms were assessed using questionnaires. The mediation of the association between life events and ADHD symptoms by memory bias was examined. We explored the roles of different types of life events and of ADHD symptom clusters. Life events and memory bias were associated with overall ADHD symptoms as well as inattention and hyperactivity/impulsivity symptom clusters. Memory bias mediated the association of Lifetime Life Events, specifically Childhood Trauma, with ADHD symptoms. Negatively biased memory may be a cognitive marker of the effects of Childhood Trauma on the development and/or persistence of ADHD symptoms.
Modelling the large-scale redshift-space 3-point correlation function of galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ~1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Don; Rearden, Bradley T; Reed, Davis Allan
2010-01-01
One of the challenges associated with implementation of burnup credit is the validation of criticality calculations used in the safety evaluation, in particular the availability and use of applicable critical experiment data. The purpose of the validation is to quantify the relationship between reality and calculated results. Validation and determination of bias and bias uncertainty require the identification of sets of critical experiments that are similar to the criticality safety models. A principal challenge for crediting fission products (FP) in a burnup credit safety evaluation is the limited availability of relevant FP critical experiments for bias and bias uncertainty determination. This paper provides an evaluation of the available critical experiments that include FPs, along with bounding, burnup-dependent estimates of FP biases generated by combining energy-dependent sensitivity data for a typical burnup credit application with the nuclear data uncertainty information distributed with SCALE 6. A method for determining separate bias and bias uncertainty values for individual FPs is presented, with illustrative results. Finally, a FP bias calculation method based on data adjustment techniques and reactivity sensitivity coefficients calculated with the SCALE sensitivity/uncertainty tools is presented, together with some typical results. Using the methods described in this paper, the cross-section bias for a representative high-capacity spent fuel cask associated with the ENDF/B-VII nuclear data for the 16 most important stable or near-stable FPs is predicted to be no greater than 2% of the total worth of the 16 FPs, or less than 0.13% Δk/k.
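To illustrate the kind of first-order combination of sensitivity data with nuclear-data uncertainty referred to here, the sketch below applies the standard "sandwich" propagation S·C·S with placeholder sensitivity coefficients and a placeholder covariance matrix. It is a generic illustration, not the SCALE tools or the paper's data-adjustment method.

```python
import numpy as np

def reactivity_uncertainty(sensitivities, covariance):
    """First-order ("sandwich") propagation: dk/k uncertainty from sensitivity
    coefficients S and a relative cross-section covariance matrix C."""
    S = np.asarray(sensitivities, dtype=float)
    C = np.asarray(covariance, dtype=float)
    return float(np.sqrt(S @ C @ S))

# Placeholder example: three fission-product nuclides with uncorrelated 5% data uncertainty
S = np.array([-0.004, -0.002, -0.001])   # dk/k per fractional cross-section change
C = np.diag([0.05 ** 2] * 3)
print(reactivity_uncertainty(S, C))      # about 2.3e-4 dk/k
```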
Trutschel, Diana; Palm, Rebecca; Holle, Bernhard; Simon, Michael
2017-11-01
Because not every scientific question on effectiveness can be answered with randomised controlled trials, research methods that minimise bias in observational studies are required. Two major concerns influence the internal validity of effect estimates: selection bias and clustering. Hence, to reduce the bias of the effect estimates, more sophisticated statistical methods are needed. To introduce statistical approaches such as propensity score matching and mixed models into representative real-world analysis, and to present their implementation in the statistical software R so that the results can be reproduced. We perform a two-level analytic strategy to address the problems of bias and clustering: (i) generalised models with different abilities to adjust for dependencies are used to analyse binary data and (ii) the genetic matching and covariate adjustment methods are used to adjust for selection bias. Hence, we analyse the data from two population samples, the sample produced by the matching method and the full sample. The different analysis methods in this article present different results but still point in the same direction. In our example, the estimate of the probability of receiving a case conference is higher in the treatment group than in the control group. Both strategies, genetic matching and covariate adjustment, have their limitations but complement each other to provide the whole picture. The statistical approaches were feasible for reducing bias but were nevertheless limited by the sample used. For each study and obtained sample, the pros and cons of the different methods have to be weighed. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
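The paper's implementation is in R (genetic matching plus mixed models); purely as a language-consistent illustration of the simpler propensity-score idea, the sketch below fits a logistic propensity model and performs 1:1 nearest-neighbour matching within a caliper, with hypothetical column names.

```python
from sklearn.linear_model import LogisticRegression

def ps_match(df, treat_col, covariates, caliper=0.05):
    """Estimate propensity scores and do 1:1 nearest-neighbour matching
    without replacement within a caliper on the propensity score.
    df is a pandas DataFrame with a binary treatment column."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])
    treated = df[df[treat_col] == 1].sort_values('ps')
    controls = df[df[treat_col] == 0].copy()
    pairs = []
    for _, t in treated.iterrows():
        dist = (controls['ps'] - t['ps']).abs()
        j = dist.idxmin()
        if dist.loc[j] <= caliper:
            pairs.append((t.name, j))            # (treated index, control index)
            controls = controls.drop(index=j)    # matching without replacement
    return pairs
```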
Understanding antigay bias from a cognitive-affective-behavioral perspective.
Callender, Kevin A
2015-01-01
In general, United States citizens have become increasingly more accepting of lesbians and gay men over the past few decades. Despite this shift in public attitudes, antigay bias remains openly tolerated, accepted, practiced, and even defended by a substantial portion of the population. This article reviews why and how antigay bias persists using a cognitive-affective-behavioral perspective that touches on sociocognitive factors such as prejudice and stereotyping, as well as features unique to antigay bias, such as its concealable nature. The article concludes with a discussion of how understanding modern antigay bias through a cognitive-affective-behavioral lens can be applied to reduce discrimination against gays and lesbians.
Crocker, Joanna C; Beecham, Emma; Kelly, Paula; Dinsdale, Andrew P; Hemsley, June; Jones, Louise; Bluebond-Langner, Myra
2015-03-01
Recruitment to paediatric palliative care research is challenging, with high rates of non-invitation of eligible families by clinicians. The impact on sample characteristics is unknown. To investigate, using mixed methods, non-invitation of eligible families and ensuing selection bias in an interview study about parents' experiences of advance care planning (ACP). We examined differences between eligible families invited and not invited to participate by clinicians using (1) field notes of discussions with clinicians during the invitation phase and (2) anonymised information from the service's clinical database. Families were eligible for the ACP study if their child was receiving care from a UK-based tertiary palliative care service (Group A; N = 519) or had died 6-10 months previously having received care from the service (Group B; N = 73). Rates of non-invitation to the ACP study were high. A total of 28 (5.4%) Group A families and 21 (28.8%) Group B families (p < 0.0005) were invited. Family-clinician relationship appeared to be a key factor associated qualitatively with invitation in both groups. In Group A, out-of-hours contact with family was statistically associated with invitation (adjusted odds ratio 5.46 (95% confidence interval 2.13-14.00); p < 0.0005). Qualitative findings also indicated that clinicians' perceptions of families' wellbeing, circumstances, characteristics, engagement with clinicians and anticipated reaction to invitation influenced invitation. We found evidence of selective invitation practices that could bias research findings. Non-invitation and selection bias should be considered, assessed and reported in palliative care studies. © The Author(s) 2014.
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
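Method 2's use of Excel's NORMINV can be mirrored with scipy's norm.ppf. The hedged sketch below computes the fraction of results falling outside common reference limits for a given combination of normalized bias and analytical imprecision, all expressed in units of the reference-population standard deviation; the parameterization is illustrative and may not match the paper's exact formulation.

```python
from scipy.stats import norm

def fraction_outside(bias, analytical_sd, lower_tail=0.025, upper_tail=0.025):
    """Fraction of results falling outside common reference limits when a
    measuring system adds a normalized bias and analytical imprecision.
    norm.ppf plays the role of Excel's NORMINV in setting the limits."""
    lo = norm.ppf(lower_tail)            # e.g. -1.96
    hi = norm.ppf(1.0 - upper_tail)      # e.g. +1.96
    total_sd = (1.0 + analytical_sd ** 2) ** 0.5
    below = norm.cdf((lo - bias) / total_sd)
    above = norm.sf((hi - bias) / total_sd)
    return below + above

# Combinations of bias and imprecision can be screened against a chosen maximum
print(fraction_outside(bias=0.0, analytical_sd=0.0))    # 0.05 with no analytical error
print(fraction_outside(bias=0.25, analytical_sd=0.5))
```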
Comparison of large-scale structures and velocities in the local universe
NASA Technical Reports Server (NTRS)
Yahil, Amos
1994-01-01
Comparison of the large-scale density and velocity fields in the local universe shows detailed agreement, strengthening the standard paradigm of the gravitational origin of these structures. Quantitative analysis can determine the cosmological density parameter, Omega, and the biasing factor, b; there is virtually no sensitivity in any local analysis to the cosmological constant, lambda. Comparison of the dipole anisotropy of the cosmic microwave background with the acceleration due to the Infrared Astronomical Satellite (IRAS) galaxies puts the linear growth factor, beta = Omega^0.6/b, in the range 0.6 (+0.7/-0.3) (95% confidence). A direct comparison of the density and velocity fields of nearby galaxies gives beta = 1.3 (+0.7/-0.6), and nonlinear analysis gives the weaker limit Omega > 0.45 for b > 0.5 (again 95% confidence). A tighter limit, Omega > 0.3 (4-6 sigma), is obtained by a reconstruction of the probability distribution function of the initial fluctuations from which the structures observed today arose. The last two methods depend critically on the smooth velocity field determined from the observed velocities of nearby galaxies by the POTENT method. A new analysis of these velocities, with more than three times the data used to obtain the above quoted results, is now underway and promises to tighten the uncertainties considerably, as well as reduce systematic bias.
Normalization, bias correction, and peak calling for ChIP-seq
Diaz, Aaron; Park, Kiyoub; Lim, Daniel A.; Song, Jun S.
2012-01-01
Next-generation sequencing is rapidly transforming our ability to profile the transcriptional, genetic, and epigenetic states of a cell. In particular, sequencing DNA from the immunoprecipitation of protein-DNA complexes (ChIP-seq) and methylated DNA (MeDIP-seq) can reveal the locations of protein binding sites and epigenetic modifications. These approaches contain numerous biases which may significantly influence the interpretation of the resulting data. Rigorous computational methods for detecting and removing such biases are still lacking. Also, multi-sample normalization still remains an important open problem. This theoretical paper systematically characterizes the biases and properties of ChIP-seq data by comparing 62 separate publicly available datasets, using rigorous statistical models and signal processing techniques. Statistical methods for separating ChIP-seq signal from background noise, as well as correcting enrichment test statistics for sequence-dependent and sonication biases, are presented. Our method effectively separates reads into signal and background components prior to normalization, improving the signal-to-noise ratio. Moreover, most peak callers currently use a generic null model which suffers from low specificity at the sensitivity level requisite for detecting subtle, but true, ChIP enrichment. The proposed method of determining a cell type-specific null model, which accounts for cell type-specific biases, is shown to be capable of achieving a lower false discovery rate at a given significance threshold than current methods. PMID:22499706
Lee, Young-Shin
2015-03-01
To identify attitudes and bias toward aging between Asian and White students and identify factors affecting attitudes toward aging. A cross-sectional sample of 308 students in a nursing program completed the measure of Attitudes Toward Older People and Aging Quiz electronically. There were no differences in positive attitudes and pro-aged bias between Asian and White groups, but Asian students had significantly more negative attitudes and anti-aged bias toward older people than White students. Multiple regression analysis showed ethnicity/race was the strongest variable to explain negative attitudes toward older people. Feeling uneasy about talking to older adults was the most significant factor to explain all attitudinal concepts. Asian students were uneasy about talking with older people and had negative attitudes toward older adults. To become competent in cross-cultural care and communication in nursing, educational strategies to reduce negative attitudes on aging are necessary. © The Author(s) 2014.
Parenting practices, interpretive biases, and anxiety in Latino children.
Varela, R Enrique; Niditch, Laura A; Hensley-Maloney, Lauren; Moore, Kathryn W; Creveling, C Christiane
2013-03-01
A number of factors are believed to confer risk for anxiety development in children; however, cultural variation of purported risk factors remains unclear. We examined relations between controlling and rejecting parenting styles, parental modeling of anxious behaviors, child interpretive biases, and child anxiety in a mixed clinically anxious (n=27) and non-clinical (n=20) sample of Latino children and at least one of their parents. Families completed discussion-based tasks and questionnaires in a lab setting. Results indicated that child anxiety was: linked with parental control and child interpretative biases, associated with parental modeling of anxious behaviors at a trend level, and not associated with low parental acceptance. Findings that controlling parenting and child interpretive biases were associated with anxiety extend current theories of anxiety development to the Latino population. We speculate that strong family ties may buffer Latino children from detrimental effects of perceived low parental acceptance. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2018-02-01
Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
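A minimal sketch of the two-step idea using scikit-learn's FastICA is given below: multi-site series are transformed to independent components, each model component is rescaled toward the matching observed component, and the result is back-transformed before the usual grid-scale correction. The simple mean/variance rescaling, and the assumption that the unmixing fitted on observations applies to the model series, are illustrative simplifications, not the authors' implementation.

```python
from sklearn.decomposition import FastICA

def ica_spatial_correction(obs, mod, n_components=5, random_state=0):
    """Step 1 of a two-step correction: transform multi-site series to
    independent components, rescale each model component to the observed
    component's mean and variance, and back-transform.
    obs, mod: arrays of shape (time, sites). Assumes the observed unmixing is
    applicable to the model series, which is an illustrative simplification."""
    ica = FastICA(n_components=n_components, random_state=random_state)
    s_obs = ica.fit_transform(obs)          # (time, components)
    s_mod = ica.transform(mod)
    s_corr = (s_mod - s_mod.mean(0)) / s_mod.std(0) * s_obs.std(0) + s_obs.mean(0)
    return ica.inverse_transform(s_corr)    # grid-scale correction would follow
```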
Carels, Robert A; Burmeister, J; Oehlhof, M W; Hinman, N; LeRoy, M; Bannon, E; Koball, A; Ashrafloun, L
2013-02-01
Current measures of internalized weight bias assess factors such as responsibility for weight status, mistreatment because of weight, etc. A potential complementary approach for assessing internalized weight bias is to examine the correspondence between individuals' ratings of obese people, normal weight people, and themselves on personality traits. This investigation examined the relationships among different measures of internalized weight bias, as well as the association between those measures and psychosocial maladjustment. Prior to the beginning of a weight loss intervention, 62 overweight/obese adults completed measures of explicit and internalized weight bias as well as body image, binge eating, and depression. Discrepancies between participants' ratings of obese people in general and ratings of themselves on both positive and negative traits predicted unique variance in measures of maladjustment above a traditional assessment of internalized weight bias. This novel approach to measuring internalized weight bias provides information above and beyond traditional measures of internalized weight bias and begins to provide insights into social comparison processes involved in weight bias.
Schonberger, Robert B.; Burg, Matthew M.; Holt, Natalie; Lukens, Carrie L.; Dai, Feng; Brandt, Cynthia
2011-01-01
Background American College of Cardiology/American Heart Association guidelines describe the perioperative evaluation as “a unique opportunity to identify patients with hypertension,” however factors such as anticipatory stress or medication noncompliance may induce a bias toward higher blood pressure, leaving clinicians unsure about how to interpret preoperative hypertension. Information describing the relationship between preoperative intake blood pressure and primary care measurements could help anesthesiologists make primary care referrals for improved blood pressure control in an evidence-based fashion. We hypothesized that the preoperative examination provides a useful basis for initiating primary care blood pressure referral. Methods We analyzed retrospective data on 2807 patients who arrived from home for surgery and who were subsequently evaluated within 6 months after surgery in the primary care center of the same institution. After descriptive analysis, we conducted multiple linear regression analysis to identify day-of-surgery (DOS) factors associated with subsequent primary care blood pressure. We calculated the sensitivity, specificity, and positive and negative predictive value of different blood pressure referral thresholds using both a single-measurement and a two-stage screen incorporating recent preoperative and DOS measurements for identifying patients with subsequently elevated primary care blood pressure. Results DOS systolic blood pressure (SBP) was higher than subsequent primary care SBP by a mean bias of 5.5mmHg (95% limits of agreement +43.8 to −32.8). DOS diastolic blood pressure (DBP) was higher than subsequent primary care DBP by a mean bias of 1.5mmHg (95% limits of agreement +13.0 to −10.0). Linear regression of DOS factors explained 19% of the variability in primary care SBP and 29% of the variability in DBP. Accounting for the observed bias, a two-stage SBP referral screen requiring preoperative clinic SBP≥140mmHg and DOS SBP≥146mmHg had 95.9% estimated specificity (95% CI 94.4 to 97.0) for identifying subsequent primary care SBP≥140mmHg and estimated sensitivity of 26.8% (95% CI 22.0 to 32.0). A similarly high specificity using a single DOS SBP required a threshold SBP≥160mmHg, for which estimated specificity was 95.2% (95% CI 94.2 to 96.1). For DBP, a presenting DOS DBP≥92mmHg had 95.7% specificity (95% CI 94.8 to 96.4) for subsequent primary care DBP≥90mmHg with a sensitivity of 18.8% (95% CI 14.4 to 24.0). Conclusion A small bias toward higher DOS blood pressures relative to subsequent primary care measurements was observed. DOS factors predicted only a small proportion of the observed variation. Accounting for the observed bias, a two-stage SBP threshold and a single-reading DBP threshold were highly specific though insensitive for identifying subsequent primary care blood pressure elevation. PMID:22075017
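To make the two-stage referral rule concrete, the sketch below computes sensitivity and specificity of the combined thresholds (preoperative SBP ≥ 140 mmHg and day-of-surgery SBP ≥ 146 mmHg) against subsequent primary care SBP ≥ 140 mmHg, using synthetic stand-in data rather than the study cohort.

```python
import numpy as np

def two_stage_screen_performance(preop_sbp, dos_sbp, pc_sbp,
                                 preop_cut=140, dos_cut=146, pc_cut=140):
    """Sensitivity and specificity of a two-stage referral rule against
    subsequent primary care SBP elevation."""
    flagged = (preop_sbp >= preop_cut) & (dos_sbp >= dos_cut)
    truth = pc_sbp >= pc_cut
    sens = (flagged & truth).sum() / truth.sum()
    spec = (~flagged & ~truth).sum() / (~truth).sum()
    return sens, spec

# Synthetic stand-in data (not the study cohort)
rng = np.random.default_rng(0)
pc = rng.normal(130, 15, 5000)
preop = pc + rng.normal(3, 12, 5000)   # preoperative readings track primary care
dos = pc + rng.normal(5, 15, 5000)     # day-of-surgery readings run slightly higher
print(two_stage_screen_performance(preop, dos, pc))
```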
When being narrow minded is a good thing: locally biased people show stronger contextual cueing.
Bellaera, Lauren; von Mühlenen, Adrian; Watson, Derrick G
2014-01-01
Repeated contexts allow us to find relevant information more easily. Learning such contexts has been proposed to depend either on global processing of the repeated context or on processing of the local region surrounding the target information. In this study, we measured the extent to which observers were, by default, biased towards a more global or a more local level of processing. The findings showed that observers' ability to use context to guide their search was strongly related to their local/global processing bias: locally biased observers used context to improve their search more effectively than globally biased observers. The results suggest that the extent to which context can be used depends crucially on the observer's attentional bias, and therefore also on factors and influences that can change this bias.
Mi, Xiaojuan; Hammill, Bradley G; Curtis, Lesley H; Lai, Edward Chia-Cheng; Setoguchi, Soko
2016-11-20
Observational comparative effectiveness and safety studies are often subject to immortal person-time, a period of follow-up during which outcomes cannot occur because of the treatment definition. Common approaches, like excluding immortal time from the analysis or naïvely including immortal time in the analysis, are known to result in biased estimates of treatment effect. Other approaches, such as the Mantel-Byar and landmark methods, have been proposed to handle immortal time. Little is known about the performance of the landmark method in different scenarios. We conducted extensive Monte Carlo simulations to assess the performance of the landmark method compared with other methods in settings that reflect realistic scenarios. We considered four landmark times for the landmark method. We found that the Mantel-Byar method provided unbiased estimates in all scenarios, whereas the exclusion and naïve methods resulted in substantial bias when the hazard of the event was constant or decreased over time. The landmark method performed well in correcting immortal person-time bias in all scenarios when the treatment effect was small, and provided unbiased estimates when there was no treatment effect. The bias associated with the landmark method tended to be small when the treatment rate was higher in the early follow-up period than it was later. These findings were confirmed in a case study of chronic obstructive pulmonary disease. Copyright © 2016 John Wiley & Sons, Ltd.
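As a concrete illustration of why naïve handling of immortal person-time biases rate estimates, and how a Mantel-Byar-style classification avoids it, here is a minimal simulation sketch. The toy cohort, hazards, and rate comparison are assumptions for illustration only; this is not the simulation design used in the study.

```python
import numpy as np

# Toy cohort with NO real treatment effect: event hazard 0.2/yr regardless of treatment.
rng = np.random.default_rng(1)
n = 50_000
event_time = rng.exponential(5.0, n)          # time from cohort entry to event (years)
treat_time = rng.exponential(2.0, n)          # time at which treatment would be initiated
treated = treat_time < event_time             # treatment is observed only if it starts before the event

# Naive analysis: the whole follow-up of ever-treated subjects counts as exposed,
# including the immortal interval before treatment actually started.
naive_exposed_rate = treated.sum() / event_time[treated].sum()

# Mantel-Byar-style classification: pre-initiation person-time is unexposed,
# post-initiation person-time is exposed; events of treated subjects are exposed events.
mb_exposed_rate = treated.sum() / (event_time - treat_time)[treated].sum()
unexposed_rate = (~treated).sum() / (event_time[~treated].sum() + treat_time[treated].sum())

print(round(naive_exposed_rate, 3), round(mb_exposed_rate, 3), round(unexposed_rate, 3))
# naive exposed rate falls below 0.2 (spurious 'protective' effect); both Mantel-Byar rates are ~0.2
```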
NASA Astrophysics Data System (ADS)
Yao, Ji; Ishak, Mustapha; Lin, Weikang; Troxel, Michael
2017-10-01
Intrinsic alignments (IA) of galaxies have been recognized as one of the most serious contaminants to weak lensing. These systematics need to be isolated and mitigated in order for ongoing and future lensing surveys to reach their full potential. The IA self-calibration (SC) method was shown in previous studies to be able to reduce the GI contamination by up to a factor of 10 for the 2-point and 3-point correlations. The SC method does not require the assumption of an IA model in its operation and can extract the GI signal from the same photo-z survey, offering the possibility to test and understand structure formation scenarios and their relationship to IA models. In this paper, we study the effects of the IA SC mitigation method on the precision and accuracy of cosmological parameter constraints from the future cosmic shear surveys LSST, WFIRST, and Euclid. We perform analytical and numerical calculations to estimate the loss of precision and the residual bias in the best-fit cosmological parameters after the self-calibration is performed. We take into account uncertainties from photometric redshifts and the galaxy bias. We find that the confidence contours are slightly inflated from applying the SC method itself, while a significant increase is due to the inclusion of the photo-z uncertainties. The bias of cosmological parameters is reduced from several-σ, when IA is not corrected for, to below 1-σ after SC is applied. These numbers are comparable to those resulting from applying the method of marginalizing over IA model parameters, despite the fact that the two methods operate very differently. We conclude that implementing the SC for these future cosmic-shear surveys will not only allow one to efficiently mitigate the GI contaminant but also help to understand IA modeling and its link to structure formation.
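For context, the analytic bias estimate described above is typically of the standard Fisher-matrix form shown below; whether the authors use exactly this expression is an assumption here, and the notation (data vector C_ℓ, residual systematic ΔC_ℓ) is generic.

```latex
% Residual bias on parameter \theta_\alpha from a systematic \Delta C_\ell left in the
% data vector after self-calibration (standard linearized Fisher formalism):
\Delta\theta_\alpha \simeq \sum_\beta \left(F^{-1}\right)_{\alpha\beta}
  \sum_{\ell,\ell'} \frac{\partial C_\ell}{\partial \theta_\beta}\,
  \mathrm{Cov}^{-1}\!\left(C_\ell, C_{\ell'}\right)\, \Delta C_{\ell'},
\qquad
F_{\alpha\beta} = \sum_{\ell,\ell'} \frac{\partial C_\ell}{\partial \theta_\alpha}\,
  \mathrm{Cov}^{-1}\!\left(C_\ell, C_{\ell'}\right)\,
  \frac{\partial C_{\ell'}}{\partial \theta_\beta}.
```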
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Deyst, J. J.; Crawford, B. S.
1975-01-01
The paper describes two self-test procedures applied to the problem of estimating the biases in accelerometers and gyroscopes on an inertial platform. The first technique is the weighted sum-squared residual (WSSR) test, with which accelerometer bias jumps are easily isolated, but gyro bias jumps are difficult to isolate. The WSSR method does not take full advantage of the knowledge of system dynamics. The other technique is a multiple hypothesis method developed by Buxbaum and Haddad (1969). It has the advantage of directly providing jump isolation information, but suffers from computational problems. It might be possible to use the WSSR to detect state jumps and then switch to the Buxbaum-Haddad (BH) method for jump isolation and estimate compensation.
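To make the WSSR idea concrete, here is a minimal sketch of a windowed weighted sum-squared residual test applied to filter innovations. The constant innovation covariance, window length, threshold, and synthetic bias jump are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from scipy.stats import chi2

def wssr(innovations, S, window=20, alpha=0.01):
    """Windowed weighted sum-squared residual (WSSR) fault-detection test.

    innovations : (T, m) array of filter residuals r_k
    S           : (m, m) innovation covariance (assumed constant here for simplicity)
    Returns the WSSR statistic at each window end-point and a chi-square threshold.
    """
    S_inv = np.linalg.inv(S)
    q = np.einsum("ti,ij,tj->t", innovations, S_inv, innovations)   # r_k^T S^-1 r_k
    stat = np.convolve(q, np.ones(window), mode="valid")            # sum over a sliding window
    threshold = chi2.ppf(1 - alpha, df=window * innovations.shape[1])
    return stat, threshold

# Illustrative use with synthetic residuals containing a bias jump halfway through
rng = np.random.default_rng(2)
r = rng.normal(0, 1, (400, 2))
r[200:, 0] += 1.5                                # simulated accelerometer bias jump
stat, thr = wssr(r, np.eye(2))
print("first alarm at window index:", int(np.argmax(stat > thr)))
```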
Rater Perceptions of Bias Using the Multiple Mini-Interview Format: A Qualitative Study
ERIC Educational Resources Information Center
Alweis, Richard L.; Fitzpatrick, Caroline; Donato, Anthony A.
2015-01-01
Introduction: The Multiple Mini-Interview (MMI) format appears to mitigate individual rater biases. However, the format itself may introduce structural systematic bias, favoring extroverted personality types. This study aimed to gain a better understanding of these biases from the perspective of the interviewer. Methods: A sample of MMI…
NASA Astrophysics Data System (ADS)
Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.
2007-09-01
When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
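A minimal sketch of the quadratic (parabolic) sub-pixel peak interpolation discussed above is shown below; the FFT-based correlation, image size, and shift are illustrative assumptions, not the authors' correlator code, and the interpolator shown is the kind of estimator whose pull toward pixel centers the paper analyzes.

```python
import numpy as np

def quadratic_subpixel_peak(corr):
    """Estimate the correlation-peak location to sub-pixel precision.

    Fits a 1-D parabola through the peak and its two neighbours, separately in x and y
    (the 'quadratic estimation' approach).
    """
    y0, x0 = np.unravel_index(np.argmax(corr), corr.shape)

    def parabolic_offset(m1, c, p1):
        denom = m1 - 2.0 * c + p1
        return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom

    dx = parabolic_offset(corr[y0, x0 - 1], corr[y0, x0], corr[y0, x0 + 1])
    dy = parabolic_offset(corr[y0 - 1, x0], corr[y0, x0], corr[y0 + 1, x0])
    return y0 + dy, x0 + dx

# Illustrative use on the cross-correlation of an image with a shifted copy
rng = np.random.default_rng(3)
img = rng.normal(size=(64, 64))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
corr = np.fft.ifft2(np.fft.fft2(img).conj() * np.fft.fft2(shifted)).real
print(quadratic_subpixel_peak(corr))   # ~ (3.0, 5.0) for this integer shift
```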
Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir
2018-06-01
Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
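To illustrate what a purely additive spatial-bias correction looks like in practice, here is a generic median-polish sketch on a simulated plate; this is a standard two-way additive decomposition shown for illustration only, not the interaction models or the AssayCorrector procedure proposed in the paper.

```python
import numpy as np

def median_polish_correct(plate, n_iter=10):
    """Correct additive row/column spatial bias on a multiwell plate via median polish.

    Decomposes plate ~ overall + row_effect + col_effect + residual and returns
    overall + residual, i.e. the measurements with additive spatial trends removed.
    """
    resid = plate.astype(float).copy()
    row_eff = np.zeros(plate.shape[0])
    col_eff = np.zeros(plate.shape[1])
    overall = 0.0
    for _ in range(n_iter):
        rm = np.median(resid, axis=1)
        resid -= rm[:, None]
        row_eff += rm
        overall += np.median(col_eff)
        col_eff -= np.median(col_eff)

        cm = np.median(resid, axis=0)
        resid -= cm[None, :]
        col_eff += cm
        overall += np.median(row_eff)
        row_eff -= np.median(row_eff)
    return overall + resid

# Illustrative 16x24 plate with a left-to-right additive gradient (edge effect)
rng = np.random.default_rng(4)
true = rng.normal(100.0, 5.0, (16, 24))
biased = true + np.linspace(-10.0, 10.0, 24)[None, :]
corrected = median_polish_correct(biased)
print(round(np.abs(biased - true).mean(), 2), round(np.abs(corrected - true).mean(), 2))
```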
Associations between cognitive biases and domains of schizotypy in a non-clinical sample.
Aldebot Sacks, Stephanie; Weisman de Mamani, Amy Gina; Garcia, Cristina Phoenix
2012-03-30
Schizotypy is a non-clinical manifestation of the same underlying biological factors that give rise to psychotic disorders (Claridge and Beech, 1995). Research on normative populations scoring high on schizotypy is valuable because it may help elucidate the predisposition to schizophrenia (Jahshan and Sergi, 2007) and because performance is not confounded by issues present in schizophrenia samples. In the current study, a Confirmatory Factor Analysis was conducted using several comprehensive measures of schizotypy. As expected and replicating prior research, a four-factor model of schizotypy emerged including a positive, a negative, a cognitive disorganization, and an impulsive nonconformity factor. We also evaluated how each factor related to distinct cognitive biases. In support of hypotheses, increased self-certainty, decreased theory of mind, and decreased source memory were associated with higher scores on the positive factor; decreased theory of mind was associated with higher scores on the negative factor; and increased self-certainty was associated with greater impulsive nonconformity. Unexpectedly, decreased self-certainty and increased theory of mind were associated with greater cognitive disorganization, and decreased source memory was associated with greater impulsive nonconformity. These findings offer new insights by highlighting cognitive biases that may be risk factors for psychosis. Published by Elsevier Ireland Ltd.
A review of cognitive biases in youth depression: attention, interpretation and memory.
Platt, Belinda; Waters, Allison M; Schulte-Koerne, Gerd; Engelmann, Lina; Salemink, Elske
2017-04-01
Depression is one of the most common mental health problems in childhood and adolescence. Although data consistently show it is associated with self-reported negative cognitive styles, less is known about the mechanisms underlying this relationship. Cognitive biases in attention, interpretation and memory represent plausible mechanisms and are known to characterise adult depression. We provide the first structured review of studies investigating the nature and causal role of cognitive biases in youth depression. Key questions are (i) do cognitive biases characterise youth depression? (ii) are cognitive biases a vulnerability factor for youth depression? and (iii) do cognitive biases play a causal role in youth depression? We find consistent evidence for positive associations between attention and interpretation biases and youth depression. Stronger biases in youth with an elevated risk of depression support cognitive-vulnerability models. Preliminary evidence from cognitive bias modification paradigms supports a causal role of attention and interpretation biases in youth depression but these paradigms require testing in clinical samples before they can be considered treatment tools. Studies of memory biases in youth samples have produced mixed findings and none have investigated the causal role of memory bias. We identify numerous areas for future research in this emerging field.
Keisam, Santosh; Romi, Wahengbam; Ahmed, Giasuddin; Jeyaram, Kumaraswamy
2016-09-27
Cultivation-independent investigation of microbial ecology is biased by the DNA extraction methods used. We aimed to quantify those biases by comparative analysis of the metagenome mined from four diverse naturally fermented foods (bamboo shoot, milk, fish, soybean) using eight different DNA extraction methods with different cell lysis principles. Our findings revealed that the enzymatic lysis yielded higher eubacterial and yeast metagenomic DNA from the food matrices compared to the widely used chemical and mechanical lysis principles. Further analysis of the bacterial community structure by Illumina MiSeq amplicon sequencing revealed a high recovery of lactic acid bacteria by the enzymatic lysis in all food types. However, Bacillaceae, Acetobacteraceae, Clostridiaceae and Proteobacteria were more abundantly recovered when mechanical and chemical lysis principles were applied. Here we quantitatively demonstrate the biases generated by the differential recovery of operational taxonomic units (OTUs) across DNA extraction methods, including analyses of DNA and PCR-amplicon mixes from the different methods. The different methods shared only 29.9-52.0% of the total OTUs recovered. Although similar comparative research has been performed on other ecological niches, this is the first in-depth investigation quantifying the biases in metagenome mining from naturally fermented foods.
An aerial survey method to estimate sea otter abundance
Bodkin, James L.; Udevitz, Mark S.; Garner, Gerald W.; Amstrup, Steven C.; Laake, Jeffrey L.; Manly, Bryan F.J.; McDonald, Lyman L.; Robertson, Donna G.
1999-01-01
Sea otters (Enhydra lutris) occur in shallow coastal habitats and can be highly visible on the sea surface. They generally rest in groups and their detection depends on factors that include sea conditions, viewing platform, observer technique and skill, distance, habitat and group size. While visible on the surface, they are difficult to see while diving and may dive in response to an approaching survey platform. We developed and tested an aerial survey method that uses intensive searches within portions of strip transects to adjust for availability and sightability biases. Correction factors are estimated independently for each survey and observer. In tests of our method using shore-based observers, we estimated detection probabilities of 0.52-0.72 in standard strip-transects and 0.96 in intensive searches. We used the survey method in Prince William Sound, Alaska to estimate a sea otter population size of 9,092 (SE = 1422). The new method represents an improvement over various aspects of previous methods, but additional development and testing will be required prior to its broad application.
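A minimal sketch of how an intensive-search detection probability can be used to adjust strip-transect counts is given below; the single detection probability, areas, and counts are illustrative assumptions and omit the availability adjustment and the per-survey, per-observer correction factors of the authors' method.

```python
import numpy as np

def adjusted_abundance(strip_counts, strip_area_km2, study_area_km2, detection_prob):
    """Detection-adjusted abundance for strip-transect counts (simplified).

    strip_counts   : otters counted on the surveyed strips
    detection_prob : probability of detecting a group on a standard strip, e.g. estimated
                     by comparing strip counts with intensive searches
    """
    density = (strip_counts / detection_prob).sum() / strip_area_km2.sum()
    return density * study_area_km2

counts = np.array([12, 7, 20, 3])
areas = np.array([1.8, 2.1, 1.9, 2.0])
print(adjusted_abundance(counts, areas, study_area_km2=450.0, detection_prob=0.62))
```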
Nørrelykke, Simon F; Flyvbjerg, Henrik
2010-07-01
Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and +1/n on the value of the fitted diffusion coefficient. Here, n is the number of power spectra averaged over, so typical calibrations contain 10%-20% bias. Both the sign and the size of the bias depend on the weighting scheme applied. Hence, so do length-scale calibrations based on the diffusion coefficient. The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self-interaction and aliasing, calibration of Ornstein-Uhlenbeck models in finance, models for cell migration in biology, etc. Because the bias takes the form of a simple multiplicative factor on the fitted amplitude (e.g. the diffusion coefficient), it is straightforward to remove and the user will need minimal modifications to his or her favorite least-squares fitting programs. Results are demonstrated and illustrated using synthetic data, so we can compare fits with known true values. We also fit some commonly occurring power spectra once-and-for-all in the sense that we give their parameter values and associated error bars as explicit functions of experimental power-spectral values.
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
Johns, Jennifer L.; Moorhead, Kaitlin A.; Hu, Jing; Moorhead, Roberta C.
2018-01-01
Clinical pathology testing of rodents is often challenging due to insufficient sample volume. One solution in clinical veterinary and exploratory research environments is dilution of samples prior to analysis. However, published information on the impact of preanalytical sample dilution on rodent biochemical data is incomplete. The objective of this study was to evaluate the effects of preanalytical sample dilution on biochemical analysis of mouse and rat serum samples utilizing the Siemens Dimension Xpand Plus. Rats were obtained from end-of-study research projects. Mice were obtained from sentinel testing programs. For both, whole blood was collected via terminal cardiocentesis into empty tubes and serum was harvested. Biochemical parameters were measured on fresh and thawed frozen samples run straight and at dilution factors 2–10. Dilutions were performed manually, utilizing either ultrapure water or enzyme diluent per manufacturer recommendations. All diluted samples were generated directly from the undiluted sample. Preanalytical dilution caused clinically unacceptable bias in most analytes at dilution factors of four and above. Dilution-induced bias in total calcium, creatinine, total bilirubin, and uric acid was considered unacceptable with any degree of dilution, based on the more conservative of two definitions of acceptability. Dilution often caused electrolyte values to fall below the assay range, precluding evaluation of bias. Dilution-induced bias occurred in most biochemical parameters to varying degrees and may render dilution unacceptable in the exploratory research and clinical veterinary environments. Additionally, differences between results obtained at different dilution factors may confound statistical comparisons in research settings. Comparison of data obtained at a single dilution factor is highly recommended. PMID:29497614
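For clarity, the percent bias at a given dilution factor can be computed against the undiluted (neat) result as in the small sketch below; the analyte and values are illustrative, not data from the study.

```python
def dilution_bias_percent(neat_value, diluted_value, dilution_factor):
    """Percent bias of a dilution-corrected result relative to the undiluted (neat) result."""
    corrected = diluted_value * dilution_factor
    return 100.0 * (corrected - neat_value) / neat_value

# Illustrative creatinine results (mg/dL): neat vs a 1:4 dilution read back on the analyzer
print(dilution_bias_percent(neat_value=0.40, diluted_value=0.08, dilution_factor=4))  # -20.0 (% bias)
```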
Application of Biased Metropolis Algorithms: From protons to proteins
Bazavov, Alexei; Berg, Bernd A.; Zhou, Huan-Xiang
2015-01-01
We show that sampling with a biased Metropolis scheme is essentially equivalent to using the heatbath algorithm. However, the biased Metropolis method can also be applied when an efficient heatbath algorithm does not exist. This is first illustrated with an example from high energy physics (lattice gauge theory simulations). We then illustrate the Rugged Metropolis method, which is based on a similar biased updating scheme, but aims at very different applications. The goal of such applications is to locate the most likely configurations in a rugged free energy landscape, which is most relevant for simulations of biomolecules. PMID:26612967
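A minimal generic sketch of the biased-Metropolis idea, a Metropolis-Hastings update whose non-symmetric proposal is compensated in the acceptance ratio, is given below. The one-dimensional Gaussian target and autoregressive proposal are illustrative assumptions; they stand in for, but are not, the lattice-gauge or biomolecular updates discussed in the paper.

```python
import numpy as np

def biased_metropolis(log_prob, proposal_sample, proposal_logpdf, x0, n_steps, rng):
    """Metropolis-Hastings with a biased (non-symmetric) proposal.

    proposal_sample(x, rng) draws x' ~ q(x'|x); proposal_logpdf(x_new, x_old) returns log q(x_new|x_old).
    The Hastings factor q(x|x')/q(x'|x) preserves detailed balance despite the bias.
    """
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = proposal_sample(x, rng)
        log_alpha = (log_prob(x_new) - log_prob(x)
                     + proposal_logpdf(x, x_new) - proposal_logpdf(x_new, x))
        if np.log(rng.uniform()) < log_alpha:
            x = x_new
        samples.append(x)
    return np.array(samples)

# Illustrative target: standard normal; the proposal drifts toward the origin (biased updating)
rng = np.random.default_rng(5)
target = lambda x: -0.5 * x**2
prop_sample = lambda x, rng: 0.8 * x + rng.normal(0.0, 1.0)
prop_logpdf = lambda x_new, x_old: -0.5 * (x_new - 0.8 * x_old) ** 2
chain = biased_metropolis(target, prop_sample, prop_logpdf, 0.0, 20_000, rng)
print(round(chain.mean(), 2), round(chain.var(), 2))   # ~0 and ~1 if the correction is right
```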
Xue, Xiaonan; Kim, Mimi Y; Castle, Philip E; Strickler, Howard D
2014-03-01
Studies to evaluate clinical screening tests often face the problem that the "gold standard" diagnostic approach is costly and/or invasive. It is therefore common to verify only a subset of negative screening tests using the gold standard method. However, undersampling the screen negatives can lead to substantial overestimation of the sensitivity and underestimation of the specificity of the diagnostic test. Our objective was to develop a simple and accurate statistical method to address this "verification bias." We developed a weighted generalized estimating equation approach to estimate, in a single model, the accuracy (eg, sensitivity/specificity) of multiple assays and simultaneously compare results between assays while addressing verification bias. This approach can be implemented using standard statistical software. Simulations were conducted to assess the proposed method. An example is provided using a cervical cancer screening trial that compared the accuracy of human papillomavirus and Pap tests, with histologic data as the gold standard. The proposed approach performed well in estimating and comparing the accuracy of multiple assays in the presence of verification bias. The proposed approach is an easy to apply and accurate method for addressing verification bias in studies of multiple screening methods. Copyright © 2014 Elsevier Inc. All rights reserved.
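To illustrate the core of a verification-bias correction, here is a minimal inverse-probability-weighting sketch (in the spirit of the classic Begg-Greenes correction) for a single test; the verification fractions and disease model are illustrative assumptions, and this is not the authors' weighted GEE estimator, which handles multiple assays and their comparison.

```python
import numpy as np

def ipw_sens_spec(test_pos, verified, disease, p_verify):
    """Sensitivity/specificity corrected for verification bias via inverse-probability weighting.

    Only verified subjects contribute; each is weighted by 1 / P(verified | test result).
    p_verify is a dict {0: P(verify | test negative), 1: P(verify | test positive)}.
    """
    w = np.where(test_pos, 1.0 / p_verify[1], 1.0 / p_verify[0])
    w = np.where(verified, w, 0.0)
    d = disease.astype(bool)
    sens = np.sum(w * (test_pos & d)) / np.sum(w * d)
    spec = np.sum(w * (~test_pos & ~d)) / np.sum(w * ~d)
    return sens, spec

# Illustrative trial: true sensitivity 0.85, specificity 0.90; all test-positives verified,
# only 10% of test-negatives verified with the gold standard.
rng = np.random.default_rng(6)
n = 20_000
disease = rng.random(n) < 0.10
test_pos = np.where(disease, rng.random(n) < 0.85, rng.random(n) < 0.10)
verified = np.where(test_pos, True, rng.random(n) < 0.10)
naive_sens = np.mean(test_pos[verified & disease])            # ignores the undersampling
sens, spec = ipw_sens_spec(test_pos, verified, disease, {0: 0.10, 1: 1.0})
print(round(naive_sens, 3), round(sens, 3), round(spec, 3))   # naive estimate is inflated
```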
Roh, Min K; Gillespie, Dan T; Petzold, Linda R
2010-11-07
The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
Pontieri, L; Schmidt, A M; Singh, R; Pedersen, J S; Linksvayer, T A
2017-02-01
Social insect sex and caste ratios are well-studied targets of evolutionary conflicts, but the heritable factors affecting these traits remain unknown. To elucidate these factors, we carried out a short-term artificial selection study on female caste ratio in the ant Monomorium pharaonis. Across three generations of bidirectional selection, we observed no response for caste ratio, but sex ratios rapidly became more female-biased in the two replicate high selection lines and less female-biased in the two replicate low selection lines. We hypothesized that this rapid divergence for sex ratio was caused by changes in the frequency of infection by the heritable bacterial endosymbiont Wolbachia, because the initial breeding stock varied for Wolbachia infection, and Wolbachia is known to cause female-biased sex ratios in other insects. Consistent with this hypothesis, the proportions of Wolbachia-infected colonies in the selection lines changed rapidly, mirroring the sex ratio changes. Moreover, the estimated effect of Wolbachia on sex ratio (~13% female bias) was similar in colonies before and during artificial selection, indicating that this Wolbachia effect is likely independent of the effects of artificial selection on other heritable factors. Our study provides evidence for the first case of endosymbiont sex ratio manipulation in a social insect. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.
Leyrat, Clémence; Caille, Agnès; Foucher, Yohann; Giraudeau, Bruno
2016-01-22
Despite randomization, baseline imbalance and confounding bias may occur in cluster randomized trials (CRTs). Covariate imbalance may jeopardize the validity of statistical inferences if it occurs on prognostic factors. Thus, diagnosing such an imbalance is essential so that the statistical analysis can be adjusted if required. We developed a tool based on the c-statistic of the propensity score (PS) model to detect global baseline covariate imbalance in CRTs and assess the risk of confounding bias. We performed a simulation study to assess the performance of the proposed tool and applied this method to analyze the data from 2 published CRTs. The proposed method had good performance for large sample sizes (n = 500 per arm) and when the number of unbalanced covariates was not too small relative to the total number of baseline covariates (≥40% of unbalanced covariates). We also provide a strategy for preselection of the covariates to be included in the PS model to enhance imbalance detection. The proposed tool could be useful in deciding whether covariate adjustment is required before performing statistical analyses of CRTs.
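A minimal sketch of the c-statistic diagnostic is given below: fit a propensity score model predicting trial arm from individual-level baseline covariates and take the area under the ROC curve. Clustering is ignored here for simplicity, and the data, covariate effect, and any decision cutoff are illustrative assumptions rather than the authors' tool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def ps_c_statistic(X, arm):
    """C-statistic of a propensity score model predicting trial arm from baseline covariates.

    A value near 0.5 suggests balanced arms; larger values flag global covariate imbalance.
    """
    ps_model = LogisticRegression(max_iter=1000).fit(X, arm)
    ps = ps_model.predict_proba(X)[:, 1]
    return roc_auc_score(arm, ps)

# Illustrative data: 2 x 500 participants, one covariate mildly imbalanced between arms
rng = np.random.default_rng(7)
arm = np.repeat([0, 1], 500)
X = rng.normal(size=(1000, 5))
X[arm == 1, 0] += 0.3                        # imbalance on the first covariate
print(round(ps_c_statistic(X, arm), 3))      # noticeably above 0.5
```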
Nonlinear vs. linear biasing in Trp-cage folding simulations
NASA Astrophysics Data System (ADS)
Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka
2015-03-01
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
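As a toy illustration of deriving linear and nonlinear low-dimensional embeddings that could serve as collective variables, the sketch below applies PCA and Isomap to synthetic coordinates; the fake trajectory and embedding settings are assumptions, and this is not the authors' metadynamics setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Toy "trajectory": 500 frames x 60 Cartesian coordinates (20 atoms), standing in for
# aligned conformations; in practice these would come from an MD trajectory.
rng = np.random.default_rng(8)
frames = rng.normal(size=(500, 60)).cumsum(axis=0)   # correlated, slowly drifting coordinates

linear_cv = PCA(n_components=2).fit_transform(frames)                        # linear embedding
nonlinear_cv = Isomap(n_neighbors=10, n_components=2).fit_transform(frames)  # nonlinear embedding

print(linear_cv.shape, nonlinear_cv.shape)   # both (500, 2): candidate collective variables
```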
A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield
Ringard, Justine; Seyler, Frederique; Linguet, Laurent
2017-01-01
Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that uses not the daily gauge data of the pixel to be corrected but the daily gauge data from surrounding pixels, which requires a spatial analysis. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using the relative RMSE (rRMSE) and relative bias (rBIAS) statistical errors. The results show that changing the analysis scale reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723
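A minimal sketch of empirical quantile mapping, the correction method named above, is given below; the synthetic gauge and satellite series, the quantile grid, and the error structure are illustrative assumptions, not the Guiana Shield implementation.

```python
import numpy as np

def quantile_mapping(sat_cal, gauge_cal, sat_to_correct):
    """Empirical quantile mapping: replace each satellite value by the gauge value at the
    same quantile, using CDFs built on a calibration sample (e.g. one hydroclimatic area)."""
    q = np.linspace(0.0, 1.0, 101)
    sat_q = np.quantile(sat_cal, q)
    gauge_q = np.quantile(gauge_cal, q)
    return np.interp(sat_to_correct, sat_q, gauge_q)

# Illustrative daily rainfall (mm): the satellite overestimates light rain and
# underestimates heavy rain relative to the gauges.
rng = np.random.default_rng(9)
gauge = rng.gamma(shape=0.6, scale=12.0, size=2000)
sat = 0.7 * gauge + 2.0 + rng.normal(0.0, 1.5, size=2000)
corrected = quantile_mapping(sat, gauge, sat)
p95 = lambda x: np.quantile(x, 0.95)
print(round(p95(sat) - p95(gauge), 2), round(p95(corrected) - p95(gauge), 2))  # tail bias before/after
```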