NASA Astrophysics Data System (ADS)
Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.
2018-01-01
Regional climate models (RCMs) are used to downscale coarse-resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from observed climatological data and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most use monthly statistics to derive correction factors, which may cause errors in rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggested that the proposed method corrects the daily bias in rainfall more effectively than monthly factors. Methods that adjusted wet-day frequencies, such as local intensity scaling, modified power transformation, and distribution mapping, performed better than those that did not. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India.
Hydrological simulations forced with the bias-corrected rainfall (from the distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those driven by the IMD rainfall. The results demonstrate that the method and time scale used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and, consequently, on the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors is preferable for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
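Distribution mapping with daily sliding-window correction factors can be sketched as an empirical quantile mapping pooled over a day-of-year window. This is an illustrative reconstruction, not the paper's exact formulation: the window width, the purely empirical quantiles, and the aligned-array layout are assumptions.

```python
import numpy as np

def daily_quantile_map(obs, mod, day_of_year, window=15):
    """Bias-correct modeled daily rainfall `mod` against observations
    `obs` (same length, aligned in time) by empirical distribution
    mapping, pooling all days within +/- `window` days of year."""
    obs, mod, day_of_year = map(np.asarray, (obs, mod, day_of_year))
    corrected = np.empty_like(mod, dtype=float)
    for i, x in enumerate(mod):
        # circular day-of-year distance selects the sliding window
        d = np.abs(day_of_year - day_of_year[i])
        sel = np.minimum(d, 365 - d) <= window
        # quantile of x in the modeled distribution for this window...
        q = (mod[sel] <= x).mean()
        # ...mapped onto the observed distribution
        corrected[i] = np.quantile(obs[sel], q)
    return corrected
```

Using the window around each calendar day, rather than one factor per month, is what lets the mapping track day-to-day seasonal variation.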
Evaluation of Bias Correction Method for Satellite-Based Rainfall Data
Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter
2016-01-01
With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high-resolution National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km/30 min resolution are aggregated to daily values to match in-situ observations for the period 2003-2010. The study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for applying bias correction, and to test the effectiveness of the bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SWs) of 3, 5, 7, 9, …, 31 days with the aim of assessing the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central, and backward window (FW, CW, and BW) schemes to assess the effect of time integration on accumulated rainfall. The accuracy of cumulative rainfall depth is assessed by the root mean squared error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. The reliability of the interpolation is assessed by cross-validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to produce bias-corrected CMORPH estimates. Findings are evaluated by RMSE, the correlation coefficient (r), and the standard deviation (SD). Results show the existence of bias in the CMORPH rainfall, and the 7-day SW approach performs best for its correction. The outcome of this study demonstrates the efficiency of our bias correction approach. PMID:27314363
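The sequential-window bias factor described above can be sketched as a ratio of gauge to satellite totals over non-overlapping windows, applied back multiplicatively to the daily satellite values. This is a minimal sketch under stated assumptions: the ratio form and the minimum-rain guard against division by near-zero totals are illustrative, not the paper's exact implementation.

```python
import numpy as np

def sequential_bias_factors(gauge, sat, window=7, min_rain=1.0):
    """Multiplicative bias factors over non-overlapping (sequential)
    windows: BF = sum(gauge) / sum(satellite). Windows with satellite
    totals below `min_rain` get a neutral factor of 1."""
    gauge, sat = np.asarray(gauge), np.asarray(sat)
    n = len(gauge) // window * window          # trim to whole windows
    g = gauge[:n].reshape(-1, window).sum(axis=1)
    s = sat[:n].reshape(-1, window).sum(axis=1)
    bf = np.where(s > min_rain, g / np.maximum(s, min_rain), 1.0)
    # apply each window's factor back to its daily satellite values
    corrected = (sat[:n].reshape(-1, window) * bf[:, None]).ravel()
    return bf, corrected
```

In the study, station-level factors like these are then interpolated into a bias factor map and multiplied onto the full CMORPH fields.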
Correction factors for self-selection when evaluating screening programmes.
Spix, Claudia; Berthold, Frank; Hero, Barbara; Michaelis, Jörg; Schilling, Freimut H
2016-03-01
In screening programmes there is a recognized bias introduced through participant self-selection (the healthy-screenee bias). Methods used to evaluate screening programmes include intention-to-screen, per-protocol, and the "post hoc" approach, in which, after screening has been introduced for everyone, the only evaluation option is to compare participants with non-participants. All of these methods are prone to self-selection bias. We present an overview of approaches to correct for this bias. We considered four methods to quantify and correct for self-selection bias. Simple calculations revealed that these corrections are in fact all identical and can be converted into each other. Based on this, correction factors for further situations and measures were derived. The application of these correction factors requires a number of assumptions. In the German Neuroblastoma Screening Study, used here as an example, no relevant reduction in mortality or stage 4 incidence due to screening was observed. The largest bias (in favour of screening) was observed when comparing participants with non-participants. Correcting for bias is particularly necessary when using the post hoc evaluation approach; however, in this situation not all of the required data are available, and external data or further assumptions may be required for estimation. © The Author(s) 2015.
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong
2010-01-01
This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
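A bias-corrected percentile interval of the kind compared in the article can be sketched for an arbitrary statistic. This is a generic BC interval (no acceleration constant, i.e. BC rather than BCa), not the article's factor-analysis-specific procedure; the resampling scheme and constants are standard bootstrap machinery, not taken from the source.

```python
import numpy as np
from statistics import NormalDist

def bc_percentile_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Bias-corrected (BC) percentile bootstrap interval for stat(data)."""
    rng = np.random.default_rng(seed)
    theta = stat(data)
    boots = np.array([stat(rng.choice(data, size=len(data), replace=True))
                      for _ in range(n_boot)])
    nd = NormalDist()
    # bias-correction constant from the share of bootstrap stats below theta
    z0 = nd.inv_cdf(float(np.clip((boots < theta).mean(), 1e-6, 1 - 1e-6)))
    # shift the percentile endpoints by twice the bias-correction constant
    lo = nd.cdf(2 * z0 + nd.inv_cdf(alpha / 2))
    hi = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha / 2))
    return float(np.quantile(boots, lo)), float(np.quantile(boots, hi))
```

When the bootstrap distribution is centered on the sample statistic, z0 is near zero and the interval reduces to the plain percentile interval.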
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2018-06-01
The realized stochastic volatility model has been introduced to estimate volatility more accurately by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter that accounts for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor, although their values are substantially different. Under the stochastic diffusion assumption for the return dynamics, we investigate the accuracy of the estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with those expected of Gaussian variables, the deviation from this expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove the bias at high frequencies.
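Realized variance at a given sampling frequency, and a Hansen-Lunde-style multiplicative correction factor, can be sketched as below. This is a simplified illustration: the paper's model estimates its bias-correction parameter inside the realized stochastic volatility model itself, not by this simple ratio.

```python
import numpy as np

def realized_variance(log_prices, step):
    """Realized variance from intraday log prices subsampled every
    `step` ticks; a larger `step` means a lower sampling frequency."""
    r = np.diff(np.asarray(log_prices)[::step])
    return float(np.sum(r ** 2))

def hansen_lunde_factor(daily_returns, daily_rv):
    """Multiplicative factor scaling realized variance so that, on
    average, it matches the variance of the daily returns."""
    r = np.asarray(daily_returns) - np.mean(daily_returns)
    return float(np.sum(r ** 2) / np.sum(daily_rv))
```

Comparing such factors across `step` values mirrors the paper's comparison of the bias-correction parameter across sampling frequencies.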
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in the measurement of a risk factor will introduce a downward bias in its estimated association with a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies, with emphasis on the selection of individuals for a repeated measurement, the assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is continuous. We also describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity, measured with the euglycaemic insulin clamp technique, and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model, assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. We also supply programs for estimating the number of individuals needed in the reliability study and for choosing its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
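The slope correction can be sketched with the classical reliability-ratio (attenuation) formula: the corrected slope is the observed slope divided by the fraction of the risk factor's variance that is true signal. This is a minimal sketch assuming one replicate measurement on a reliability subsample aligned with the start of the main-study sample; the article's software covers more general designs.

```python
import numpy as np

def deattenuated_slope(x, y, x_repeat):
    """Correct a simple linear regression slope for regression dilution.
    x: error-prone risk factor (main study); y: continuous outcome;
    x_repeat: replicate measurement of x for the first len(x_repeat)
    subjects (the reliability study)."""
    x, y, x_repeat = map(np.asarray, (x, y, x_repeat))
    slope_obs = np.polyfit(x, y, 1)[0]
    # measurement-error variance from paired replicates:
    # Var(x1 - x2) = 2 * sigma_e^2 when errors are independent
    x1 = x[:len(x_repeat)]
    err_var = np.var(x1 - x_repeat, ddof=1) / 2.0
    # reliability ratio = true variance / total variance
    lam = 1.0 - err_var / np.var(x, ddof=1)
    return slope_obs / lam
```

With a reliability ratio of 0.5, an observed slope of 1 deattenuates to 2, which is the kind of correction the article's tools compute.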
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
Lee, Wen-Chung
2014-02-05
The randomized controlled study is the gold-standard research method in biomedicine. In contrast, the validity of a (nonrandomized) observational study is often questioned because of unknown/unmeasured factors, which may have confounding and/or effect-modifying potential. In this paper, the author proposes a perturbation test to detect the bias of unmeasured factors and a perturbation adjustment to correct for such bias. The proposed method circumvents the problem of measuring unknowns by collecting the perturbations of unmeasured factors instead. Specifically, a perturbation is a variable that is readily available (or can be measured easily) and is potentially associated, though perhaps only very weakly, with unmeasured factors. The author conducted extensive computer simulations to provide a proof of concept. The simulations show that, as the number of perturbation variables from data mining increases, the power of the perturbation test increases progressively, up to nearly 100%. In addition, after the perturbation adjustment, the bias decreases progressively, down to nearly 0%. The data-mining perturbation analysis described here is recommended for use in detecting and correcting the bias of unmeasured factors in observational studies.
Bias-correction of PERSIANN-CDR Extreme Precipitation Estimates Over the United States
NASA Astrophysics Data System (ADS)
Faridzad, M.; Yang, T.; Hsu, K. L.; Sorooshian, S.
2017-12-01
Ground-based precipitation measurements can be sparse or even nonexistent over remote regions, which makes extreme event analysis difficult. PERSIANN-CDR (CDR), with 30+ years of daily rainfall information, provides an opportunity to study precipitation in regions where ground measurements are limited. In this study, the use of CDR annual extreme precipitation for frequency analysis of extreme events over sparsely gauged or ungauged basins is explored. The adjustment of CDR is implemented in two steps: (1) calculate a CDR bias correction factor at the limited gauge locations, based on a linear regression of gauge against CDR annual maximum precipitation; and (2) extend the bias correction factor to locations where gauges are not available. The correction factors are estimated at gauge sites over various catchments, elevation zones, and climate regions, and the results are generalized to ungauged sites based on regional and climatic similarity. Case studies were conducted on 20 basins with diverse climates and altitudes in the eastern and western US. Cross-validation reveals that bias correction factors estimated on limited calibration data can be extended to regions with similar characteristics. The adjusted CDR estimates also consistently outperform gauge interpolation at validation sites. The results suggest that bias-adjusted CDR has potential for frequency analysis of extreme events, especially in regions with limited gauge observations.
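Step (1) above can be sketched as a regression-based multiplicative factor. The zero-intercept form of the fit is an illustrative simplification of the paper's linear regression analysis, chosen so that the factor applies as a simple multiplier.

```python
import numpy as np

def cdr_correction_factor(gauge_annual_max, cdr_annual_max):
    """Multiplicative bias-correction factor for CDR annual maxima,
    from a zero-intercept least-squares fit of gauge on CDR maxima:
    factor = sum(x*y) / sum(x^2)."""
    x = np.asarray(cdr_annual_max, dtype=float)
    y = np.asarray(gauge_annual_max, dtype=float)
    return float(np.sum(x * y) / np.sum(x ** 2))
```

Step (2) then amounts to assigning each ungauged site the factor of the gauged region it most resembles in climate and elevation.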
O'Brien, D J; León-Vintró, L; McClean, B
2016-01-01
The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small-field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions, and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived from small-field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived from calculations with a Geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors, except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values, by 3.7% for a 1.1 cm diameter field and more for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita
2009-12-01
Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean = -0.0044, standard error = 0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared to both the MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. © 2009 Wiley-Liss, Inc.
Modeling bias and variation in the stochastic processes of small RNA sequencing
Etheridge, Alton; Sakhanenko, Nikita; Galas, David
2017-01-01
The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear-quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
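Two pieces of the model can be illustrated with toy code: applying per-sequence empirical bias factors to raw counts, and the linear-quadratic mean-variance relation. The renormalization to the original library size and the factor values are assumptions for illustration, not the paper's GAMLSS fitting procedure.

```python
import numpy as np

def apply_bias_factors(counts, bias_factors):
    """Divide raw sequence counts by per-sequence empirical ligase-bias
    factors, then rescale so the total library size is preserved."""
    counts = np.asarray(counts, dtype=float)
    adj = counts / np.asarray(bias_factors, dtype=float)
    return adj * counts.sum() / adj.sum()

def linear_quadratic_variance(mu, phi):
    """Mean-variance relation implied by the count model:
    Var = mu + phi * mu^2 (negative-binomial-like overdispersion)."""
    return mu + phi * mu ** 2
```

A sequence over-represented by a factor of 2 relative to another thus ends up with half its relative abundance after correction.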
Combinations of Earth Orientation Observations: SPACE94, COMB94, and POLE94
NASA Technical Reports Server (NTRS)
Gross, R. S.
1995-01-01
A Kalman filter has been used to combine all publicly available, independently determined measurements of the Earth's orientation taken by the modern space-geodetic techniques of very long baseline interferometry, satellite laser ranging, lunar laser ranging, and the global positioning system. Prior to combining the data, tidal terms were removed from the UT1 measurements, outlying data points were deleted, series-specific corrections were applied for bias and rate, and the stated uncertainties of the measurements were adjusted by multiplying them by series-specific scale factors. Values for these bias-rate corrections and uncertainty scale factors were determined by an iterative, round-robin procedure wherein each data set is compared, in turn, to a combination of all other data sets. When applied to the measurements, the bias-rate corrections thus determined make the data sets agree with each other in bias and rate, and the uncertainty scale factors thus determined make the residual of each series (when differenced with a combination of all others) have a reduced chi-square of one. The corrected and adjusted series are then placed within an IERS reference frame by aligning them with the IERS Earth orientation series EOP (IERS)90C04. The result of combining these corrected, adjusted, and aligned series is designated SPACE94 and spans October 6.0, 1976 to January 27.0, 1995 at daily intervals.
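The uncertainty-scale-factor step can be sketched directly from the reduced chi-square condition: scale the stated sigmas by the factor that makes the residual series (each series minus a combination of all others) have reduced chi-square of one. The degrees-of-freedom adjustment for the two removed parameters (bias and rate) is an assumption in this sketch.

```python
import numpy as np

def uncertainty_scale_factor(residuals, sigmas, n_params=2):
    """Scale factor s for the stated uncertainties such that the
    residuals' reduced chi-square equals one: s = sqrt(chi2 / dof)."""
    residuals, sigmas = np.asarray(residuals), np.asarray(sigmas)
    chi2 = np.sum((residuals / sigmas) ** 2)
    dof = len(residuals) - n_params
    return float(np.sqrt(chi2 / dof))
```

In the round-robin procedure, this factor is recomputed for each series in turn until the corrections and scale factors converge.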
Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5
NASA Astrophysics Data System (ADS)
Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.
2014-12-01
MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not properly account for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets, as well as high-resolution retrievals in the NYC area, illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption implied by using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method to obtain a bias correction of the MODIS AOD using multiple factors, including surface reflectivity at 2130 nm, sun-view geometrical factors, and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability of the approach to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes the urban pixels constitute the largest deviations from the operational model.
Good, Nicholas; Mölter, Anna; Peel, Jennifer L; Volckens, John
2017-07-01
The AE51 micro-Aethalometer (microAeth) is a popular and useful tool for assessing personal exposure to particulate black carbon (BC). However, few users of the AE51 are aware that its measurements are biased low (by up to 70%) due to the accumulation of BC on the filter substrate over time; previous studies of personal black carbon exposure are therefore likely to have suffered from this bias. Although methods to correct for bias in micro-Aethalometer measurements of particulate black carbon have been proposed, these methods have not been verified in the context of personal exposure assessment. Here, five Aethalometer loading correction equations based on published methods were evaluated. Laboratory-generated aerosols of varying black carbon content (ammonium sulfate, Aquadag, and NIST diesel particulate matter) were used to assess the performance of these methods. Filters from a personal exposure assessment study were also analyzed to determine how the correction methods performed on real-world samples. Standard correction equations produced correction factors with root mean square errors of 0.10 to 0.13 and mean bias within ±0.10. An optimized correction equation is also presented, along with sampling recommendations for minimizing bias when assessing personal exposure to BC using the AE51 micro-Aethalometer.
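A typical loading correction of the kind evaluated here has the Virkkula form BC_corr = (1 + k·ATN)·BC_raw, where ATN is the filter attenuation reported by the instrument. The k value below is illustrative, not the paper's optimized coefficient.

```python
def loading_corrected_bc(bc_raw, atn, k=0.004):
    """Virkkula-style filter-loading correction for micro-Aethalometer
    black carbon: scale the raw reading up as attenuation accumulates.
    `k` is an empirically fitted constant (illustrative value here)."""
    return (1.0 + k * atn) * bc_raw
```

As ATN grows over a sampling session, the uncorrected reading is increasingly biased low, which is why the correction grows linearly with ATN.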
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude.
Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large, in order to test the robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
Assessment of bias in US waterfowl harvest estimates
Padding, Paul I.; Royle, J. Andrew
2012-01-01
Context. North American waterfowl managers have long suspected that waterfowl harvest estimates derived from national harvest surveys in the USA are biased high. Survey bias can be evaluated by comparing survey results with like estimates from independent sources. Aims. We used band-recovery data to assess the magnitude of apparent bias in duck and goose harvest estimates, using mallards (Anas platyrhynchos) and Canada geese (Branta canadensis) as representatives of ducks and geese, respectively. Methods. We compared the number of reported mallard and Canada goose band recoveries, adjusted for band reporting rates, with the estimated harvests of banded mallards and Canada geese from the national harvest surveys. We used the results of those comparisons to develop correction factors that can be applied to annual duck and goose harvest estimates from the national harvest survey. Key results. National harvest survey estimates of banded mallards harvested annually averaged 1.37 times greater than those calculated from band-recovery data, whereas Canada goose harvest estimates averaged 1.50 or 1.63 times greater than comparable band-recovery estimates, depending on the harvest survey methodology used. Conclusions. Duck harvest estimates produced by the national harvest survey from 1971 to 2010 should be reduced by a factor of 0.73 (95% CI = 0.71-0.75) to correct for apparent bias. Survey-specific correction factors of 0.67 (95% CI = 0.65-0.69) and 0.61 (95% CI = 0.59-0.64) should be applied to the goose harvest estimates for 1971-2001 (duck-stamp-based survey) and 1999-2010 (HIP-based survey), respectively. Implications. Although this apparent bias likely has not influenced waterfowl harvest management policy in the USA, it does have negative impacts on some applications of harvest estimates, such as indirect estimation of population size. For those types of analyses, we recommend applying the appropriate correction factor to harvest estimates.
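Applying the survey-specific correction factors reported above is a one-line multiplication; the function name and interface below are assumed, but the factor values come directly from the study.

```python
def corrected_harvest(estimate, species, survey="HIP"):
    """Apply the study's bias-correction factors to national harvest
    estimates: 0.73 for ducks; 0.67 (duck-stamp survey, 1971-2001) or
    0.61 (HIP survey, 1999-2010) for geese."""
    if species == "duck":
        return 0.73 * estimate
    factor = 0.61 if survey == "HIP" else 0.67
    return factor * estimate
```

As the abstract notes, corrections like these matter mainly for downstream uses such as indirect population-size estimation, not for harvest policy itself.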
Bolte, John F B
2016-09-01
Personal exposure measurements of radio frequency electromagnetic fields are important for epidemiological studies and for developing prediction models. Minimizing biases and uncertainties and handling spatial and temporal variability are important aspects of these measurements. This paper reviews the lessons learnt from testing the different types of exposimeters and from personal exposure measurement surveys performed between 2005 and 2015. Applying them will improve the comparability and ranking of exposure levels for different microenvironments, activities or (groups of) people, such that epidemiological studies are better capable of finding potential weak correlations with health effects. Over 20 papers have been published on how to prevent biases and minimize uncertainties due to: mechanical errors; the design of hardware and software filters; anisotropy; and the influence of the body. A number of biases can be corrected for by determining multiplicative correction factors. In addition, a good protocol on how to wear the exposimeter, a sufficiently small sampling interval, and a sufficiently long measurement duration will minimize biases. Corrections are possible for: non-detects (through the detection limit), erroneous manufacturer calibration, and temporal drift. Corrections not deemed necessary, because no significant biases have been observed, concern linearity in response and resolution. Corrections difficult to perform after measurements are those for: modulation/duty-cycle sensitivity; out-of-band response (also known as cross-talk); and temperature and humidity sensitivity. Corrections not possible to perform after measurements are those for: detection of multiple signals in one band; flatness of response within a frequency band; and anisotropy to waves of different elevation angles.
An analysis of 20 microenvironmental surveys showed that early studies using exposimeters with logarithmic detectors overestimated exposure to signals with bursts, such as in uplink signals from mobile phones and WiFi appliances. Further, the possible corrections for biases have not been fully applied. The main findings are that if the biases are not corrected for, the actual exposure will on average be underestimated. Copyright © 2016 Elsevier Ltd. All rights reserved.
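Two of the post hoc corrections mentioned in the review are easy to sketch: substituting non-detects using the detection limit and applying a multiplicative calibration correction factor. The DL/√2 substitution rule and all numbers below are illustrative assumptions, not values from the paper:

```python
import math

def correct_sample(value, detection_limit, calib_factor):
    """Substitute non-detects (None) with DL/sqrt(2), then apply a
    multiplicative calibration correction factor."""
    v = detection_limit / math.sqrt(2) if value is None else value
    return v * calib_factor

readings = [0.12, None, 0.30]   # field strengths in V/m; None marks a non-detect
corrected = [correct_sample(r, detection_limit=0.05, calib_factor=1.1)
             for r in readings]
```

Corrections for temporal drift would enter the same way, as an additional multiplicative factor per measurement campaign.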
Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.
Ripple, Dean C; Hu, Zhishang
2016-03-01
Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.
Bias correction for rainrate retrievals from satellite passive microwave sensors
NASA Technical Reports Server (NTRS)
Short, David A.
1990-01-01
Rainrates retrieved from past and present satellite-borne microwave sensors are affected by a fundamental remote sensing problem. Sensor fields-of-view are typically large enough to encompass substantial rainrate variability, whereas the retrieval algorithms, based on radiative transfer calculations, show a non-linear relationship between rainrate and microwave brightness temperature. Retrieved rainrates are systematically too low. A statistical model of the bias problem shows that bias correction factors depend on the probability distribution of instantaneous rainrate and on the average thickness of the rain layer.
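The systematic underestimate follows from averaging over a nonlinear relation, and can be reproduced with a toy saturating Tb(R) model. The functional form and constants below are illustrative, not from the paper: inverting the field-of-view-averaged brightness temperature yields less rain than averaging the true rainrates.

```python
import math

def tb(rainrate):
    """Toy saturating brightness-temperature relation Tb(R) (illustrative constants)."""
    return 300.0 - 150.0 * math.exp(-0.3 * rainrate)

def retrieve(tb_value):
    """Invert the toy relation to get R(Tb)."""
    return -math.log((300.0 - tb_value) / 150.0) / 0.3

field = [0.0, 2.0, 10.0]                        # rainrates (mm/h) within one field of view
true_mean = sum(field) / len(field)             # 4.0 mm/h
fov_tb = sum(tb(r) for r in field) / len(field) # sensor sees the averaged Tb
retrieved = retrieve(fov_tb)                    # ~2.1 mm/h: biased low
```

The bias grows with sub-field-of-view rainrate variability, which is why the correction factors depend on the rainrate probability distribution.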
Vos, Janet R; Hsu, Li; Brohet, Richard M; Mourits, Marian J E; de Vries, Jakob; Malone, Kathleen E; Oosterwijk, Jan C; de Bock, Geertruida H
2015-08-10
Recommendations for treating patients who carry a BRCA1/2 mutation are mainly based on cumulative lifetime risks (CLTRs) of breast cancer determined from retrospective cohorts. These risks vary widely (27% to 88%), and it is important to understand why. We analyzed the effects of methods of risk estimation and bias correction and of population factors on CLTRs in this retrospective clinical cohort of BRCA1/2 carriers. The following methods to estimate the breast cancer risk of BRCA1/2 carriers were identified from the literature: Kaplan-Meier, frailty, and modified segregation analyses with bias correction consisting of including or excluding index patients combined with including or excluding first-degree relatives (FDRs) or different conditional likelihoods. These were applied to clinical data of BRCA1/2 families derived from our family cancer clinic for whom a simulation was also performed to evaluate the methods. CLTRs and 95% CIs were estimated and compared with the reference CLTRs. CLTRs ranged from 35% to 83% for BRCA1 and 41% to 86% for BRCA2 carriers at age 70 years (width of 95% CIs: 10% to 35% and 13% to 46%, respectively). Relative bias varied from -38% to +16%. Bias correction with inclusion of index patients and untested FDRs gave the smallest bias: +2% (SD, 2%) in BRCA1 and +0.9% (SD, 3.6%) in BRCA2. Much of the variation in breast cancer CLTRs in retrospective clinical BRCA1/2 cohorts is due to the bias-correction method, whereas a smaller part is due to population differences. Kaplan-Meier analyses with bias correction that includes index patients and a proportion of untested FDRs provide suitable CLTRs for carriers counseled in the clinic. © 2015 by American Society of Clinical Oncology.
Automation bias: empirical results assessing influencing factors.
Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C
2014-05-01
To investigate the rate of automation bias - the propensity of people to over-rely on automated advice - and the factors associated with it. Tested factors were attitudinal (trust and confidence), non-attitudinal (decision support experience and clinical experience), and environmental (task difficulty). The paradigm of simulated decision support advice within a prescribing context was used. The study employed a within-participant before-after design, whereby 26 UK NHS General Practitioners were shown 20 hypothetical prescribing scenarios with prevalidated correct and incorrect answers - advice was incorrect in 6 scenarios. They were asked to prescribe for each case, followed by being shown simulated advice. Participants were then asked whether they wished to change their prescription, and the post-advice prescription was recorded. Rate of overall decision switching was captured. Automation bias was measured by negative consultations - correct to incorrect prescription switching. Participants changed prescriptions in 22.5% of scenarios. The pre-advice accuracy rate of the clinicians was 50.38%, which improved to 58.27% post-advice. The CDSS improved the decision accuracy in 13.1% of prescribing cases. The rate of automation bias, as measured by decision switches from correct pre-advice to incorrect post-advice, was 5.2% of all cases - a net improvement of 8%. More immediate factors such as trust in the specific CDSS, decision confidence, and task difficulty influenced the rate of decision switching. Lower clinical experience was associated with more decision switching. Age, DSS experience and trust in CDSS generally were not significantly associated with decision switching. This study adds to the literature surrounding automation bias in terms of its potential frequency and influencing factors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Intercalibration of research survey vessels on Lake Erie
Tyson, J.T.; Johnson, T.B.; Knight, C.T.; Bur, M.T.
2006-01-01
Fish abundance indices obtained from annual research trawl surveys are an integral part of fisheries stock assessment and management in the Great Lakes. It is difficult, however, to administer trawl surveys using a single vessel-gear combination owing to the large size of these systems, the jurisdictional boundaries that bisect the Great Lakes, and changes in vessels as a result of fleet replacement. When trawl surveys are administered by multiple vessel-gear combinations, systematic error may be introduced in combining catch-per-unit-effort (CPUE) data across vessels. This bias is associated with relative differences in catchability among vessel-gear combinations. In Lake Erie, five different research vessels conduct seasonal trawl surveys in the western half of the lake. To eliminate this systematic bias, the Lake Erie agencies conducted a side-by-side trawling experiment in 2003 to develop correction factors for CPUE data associated with different vessel-gear combinations. Correcting for systematic bias in CPUE data should lead to more accurate and comparable estimates of species density and biomass. We estimated correction factors for the 10 most commonly collected species age-groups for each vessel during the experiment. Most of the correction factors (70%) ranged from 0.5 to 2.0, indicating that the systematic bias associated with different vessel-gear combinations was not large. Differences in CPUE were most evident for vessels using different sampling gears, although significant differences also existed for vessels using the same gears. These results suggest that standardizing gear is important for multiple-vessel surveys, but there will still be significant differences in catchability stemming from vessel effects, and agencies must correct for this. With standardized estimates of CPUE, the Lake Erie agencies will have the ability to directly compare and combine time series for species abundance. © Copyright by the American Fisheries Society 2006.
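The vessel correction factors here are, in essence, ratios of paired catch rates from the side-by-side tows, used to put one vessel's CPUE series on another's scale. A minimal sketch (the numbers are hypothetical, not from the study):

```python
def correction_factor(cpue_reference, cpue_vessel):
    """Ratio converting a vessel's CPUE onto the reference vessel's scale,
    estimated from side-by-side paired tows."""
    return cpue_reference / cpue_vessel

# Mean side-by-side CPUE for one species age-group (hypothetical values)
cf = correction_factor(cpue_reference=120.0, cpue_vessel=80.0)   # 1.5

# Rescale that vessel's survey time series onto the reference scale
standardized = [cf * c for c in [75.0, 90.0]]
```

Factors between 0.5 and 2.0, as most were in the experiment, correspond to at most a two-fold catchability difference between vessel-gear combinations.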
Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng
2018-03-05
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
EMC Global Climate And Weather Modeling Branch Personnel
Comparison statistics, which include: NCEP raw and bias-corrected ensemble domain-averaged bias; NCEP raw and bias-corrected ensemble domain-averaged bias reduction (percent); CMC raw and bias-corrected control forecast domain-averaged bias; and CMC raw and bias-corrected control forecast domain-averaged bias reduction.
Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media
Christensen, S.; Cooley, R.L.
2002-01-01
Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.
Towards process-informed bias correction of climate change simulations
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Shepherd, Theodore G.; Widmann, Martin; Zappa, Giuseppe; Walton, Daniel; Gutiérrez, José M.; Hagemann, Stefan; Richter, Ingo; Soares, Pedro M. M.; Hall, Alex; Mearns, Linda O.
2017-11-01
Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.
Attenuation correction for the large non-human primate brain imaging using microPET.
Naidoo-Variawa, S; Lehnert, W; Kassiou, M; Banati, R; Meikle, S R
2010-04-21
Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (Papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a 57Co transmission point source with a 4% energy window. The optimal energy window for a 68Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for 57Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [18F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single pass 57Co (4% energy window) or 68Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.
Adjusting for partial verification or workup bias in meta-analyses of diagnostic accuracy studies.
de Groot, Joris A H; Dendukuri, Nandini; Janssen, Kristel J M; Reitsma, Johannes B; Brophy, James; Joseph, Lawrence; Bossuyt, Patrick M M; Moons, Karel G M
2012-04-15
A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.
Ward, Zachary J.; Long, Michael W.; Resch, Stephen C.; Gortmaker, Steven L.; Cradock, Angie L.; Giles, Catherine; Hsiao, Amber; Wang, Y. Claire
2016-01-01
Background State-level estimates from the Centers for Disease Control and Prevention (CDC) underestimate the obesity epidemic because they use self-reported height and weight. We describe a novel bias-correction method and produce corrected state-level estimates of obesity and severe obesity. Methods Using non-parametric statistical matching, we adjusted self-reported data from the Behavioral Risk Factor Surveillance System (BRFSS) 2013 (n = 386,795) using measured data from the National Health and Nutrition Examination Survey (NHANES) (n = 16,924). We validated our national estimates against NHANES and estimated bias-corrected state-specific prevalence of obesity (BMI≥30) and severe obesity (BMI≥35). We compared these results with previous adjustment methods. Results Compared to NHANES, self-reported BRFSS data underestimated national prevalence of obesity by 16% (28.67% vs 34.01%), and severe obesity by 23% (11.03% vs 14.26%). Our method was not significantly different from NHANES for obesity or severe obesity, while previous methods underestimated both. Only four states had a corrected obesity prevalence below 30%, with four exceeding 40%; in contrast, most states were below 30% in CDC maps. Conclusions Twelve million adults with obesity (including 6.7 million with severe obesity) were misclassified by CDC state-level estimates. Previous bias-correction methods also resulted in underestimates. Accurate state-level estimates are necessary to plan for resources to address the obesity epidemic. PMID:26954566
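The paper's non-parametric statistical matching can be illustrated with a simplified quantile-matching stand-in: a self-reported value is mapped to the value at the same empirical quantile of the measured distribution. This is a sketch of the general idea, not the authors' exact procedure, and all BMI values are hypothetical:

```python
def quantile_map(value, reported_sorted, measured_sorted):
    """Map a self-reported value to the measured distribution by matching
    empirical quantiles (simplified stand-in for statistical matching)."""
    rank = sum(1 for r in reported_sorted if r <= value)
    idx = round(rank / len(reported_sorted) * len(measured_sorted)) - 1
    idx = min(len(measured_sorted) - 1, max(0, idx))
    return measured_sorted[idx]

reported = sorted([24.0, 26.5, 28.0, 29.5, 31.0])   # self-reported BMI (hypothetical)
measured = sorted([25.0, 27.5, 29.5, 31.0, 33.0])   # measured BMI (hypothetical)
corrected = quantile_map(29.5, reported, measured)  # same quantile, measured scale
```

Because measured BMI runs higher than self-reported BMI at every quantile here, the mapping shifts values upward, mirroring the direction of the bias correction in the study.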
On the Performance of T2∗ Correction Methods for Quantification of Hepatic Fat Content
Reeder, Scott B.; Bice, Emily K.; Yu, Huanzhou; Hernando, Diego; Pineda, Angel R.
2014-01-01
Nonalcoholic fatty liver disease is the most prevalent chronic liver disease in Western societies. MRI can quantify liver fat, the hallmark feature of nonalcoholic fatty liver disease, so long as multiple confounding factors including T2∗ decay are addressed. Recently developed MRI methods that correct for T2∗ to improve the accuracy of fat quantification either assume a common T2∗ (single-T2∗) for better stability and noise performance or independently estimate the T2∗ for water and fat (dual-T2∗) for reduced bias, but with noise performance penalty. In this study, the tradeoff between bias and variance for different T2∗ correction methods is analyzed using the Cramér-Rao bound analysis for biased estimators and is validated using Monte Carlo experiments. A noise performance metric for estimation of fat fraction is proposed. Cramér-Rao bound analysis for biased estimators was used to compute the metric at different echo combinations. Optimization was performed for six echoes and typical T2∗ values. This analysis showed that all methods have better noise performance with very short first echo times and echo spacing of ∼π/2 for single-T2∗ correction, and ∼2π/3 for dual-T2∗ correction. Interestingly, when an echo spacing and first echo shift of ∼π/2 are used, methods without T2∗ correction have less than 5% bias in the estimates of fat fraction. PMID:21661045
Thomas, Austen C; Jarman, Simon N; Haman, Katherine H; Trites, Andrew W; Deagle, Bruce E
2014-08-01
Ecologists are increasingly interested in quantifying consumer diets based on food DNA in dietary samples and high-throughput sequencing of marker genes. It is tempting to assume that food DNA sequence proportions recovered from diet samples are representative of the consumer's diet proportions, despite the fact that captive feeding studies do not support that assumption. Here, we examine the idea of sequencing control materials of known composition along with dietary samples in order to correct for technical biases introduced during amplicon sequencing and biological biases such as variable gene copy number. Using the Ion Torrent PGM, we sequenced prey DNA amplified from scats of captive harbour seals (Phoca vitulina) fed a constant diet including three fish species in known proportions. Alongside, we sequenced a prey tissue mix matching the seals' diet to generate tissue correction factors (TCFs). TCFs improved the diet estimates (based on sequence proportions) for all species and reduced the average estimate error from 28 ± 15% (uncorrected) to 14 ± 9% (TCF-corrected). The experimental design also allowed us to infer the magnitude of prey-specific digestion biases and calculate digestion correction factors (DCFs). The DCFs were compared with possible proxies for differential digestion (e.g. fish protein%, fish lipid%) revealing a strong relationship between the DCFs and percent lipid of the fish prey, suggesting prey-specific corrections based on lipid content would produce accurate diet estimates in this study system. These findings demonstrate the value of parallel sequencing of food tissue mixtures in diet studies and offer new directions for future research in quantitative DNA diet analysis. © 2013 John Wiley & Sons Ltd.
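Tissue correction factors of this kind amount to per-species rescaling of the observed sequence proportions, renormalized to sum to one. A sketch with hypothetical species names and proportions (the TCF definition as known-tissue proportion divided by observed control proportion is a plausible reading, not quoted from the paper):

```python
def apply_tcfs(seq_props, tcfs):
    """Rescale observed sequence proportions by per-species tissue
    correction factors, then renormalize so proportions sum to one."""
    scaled = {sp: seq_props[sp] * tcfs[sp] for sp in seq_props}
    total = sum(scaled.values())
    return {sp: v / total for sp, v in scaled.items()}

# TCF = known tissue proportion / observed proportion in the tissue-mix control
tcfs = {"herring": 0.50 / 0.40, "salmon": 0.25 / 0.35, "pollock": 0.25 / 0.25}

# Correct the sequence proportions observed in a scat sample
diet = apply_tcfs({"herring": 0.45, "salmon": 0.30, "pollock": 0.25}, tcfs)
```

Digestion correction factors would be applied the same way, as a second per-species rescaling before renormalization.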
Zhu, Lei; Jacob, Daniel J.; Kim, Patrick S.; Fisher, Jenny A.; Yu, Karen; Travis, Katherine R.; Mickley, Loretta J.; Yantosca, Robert M.; Sulprizio, Melissa P.; De Smedt, Isabelle; Abad, Gonzalo Gonzalez; Chance, Kelly; Li, Can; Ferrare, Richard; Fried, Alan; Hair, Johnathan W.; Hanisco, Thomas F.; Richter, Dirk; Scarino, Amy Jo; Walega, James; Weibring, Petter; Wolfe, Glenn M.
2018-01-01
Formaldehyde (HCHO) column data from satellites are widely used as a proxy for emissions of volatile organic compounds (VOCs) but validation of the data has been extremely limited. Here we use highly accurate HCHO aircraft observations from the NASA SEAC4RS campaign over the Southeast US in August–September 2013 to validate and intercompare six retrievals of HCHO columns from four different satellite instruments (OMI, GOME2A, GOME2B and OMPS) and three different research groups. The GEOS-Chem chemical transport model is used as a common intercomparison platform. All retrievals feature a HCHO maximum over Arkansas and Louisiana, consistent with the aircraft observations and reflecting high emissions of biogenic isoprene. The retrievals are also interconsistent in their spatial variability over the Southeast US (r=0.4–0.8 on a 0.5°×0.5° grid) and in their day-to-day variability (r=0.5–0.8). However, all retrievals are biased low in the mean by 20–51%, which would lead to corresponding bias in estimates of isoprene emissions from the satellite data. The smallest bias is for OMI-BIRA, which has high corrected slant columns relative to the other retrievals and low scattering weights in its air mass factor (AMF) calculation. OMI-BIRA has systematic error in its assumed vertical HCHO shape profiles for the AMF calculation and correcting this would eliminate its bias relative to the SEAC4RS data. Our results support the use of satellite HCHO data as a quantitative proxy for isoprene emission after correction of the low mean bias. There is no evident pattern in the bias, suggesting that a uniform correction factor may be applied to the data until better understanding is achieved. PMID:29619044
Background information: bias reduction = ( |domain-averaged ensemble mean bias| − |domain-averaged bias-corrected ensemble mean bias| ) / |domain-averaged ensemble mean bias|.
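The bias-reduction statistic can be sketched as follows; normalizing by the raw (uncorrected) bias is an assumption about the intended denominator, since the source text is truncated:

```python
def bias_reduction_pct(raw_bias, corrected_bias):
    """Percent reduction in magnitude of the domain-averaged bias after
    bias correction (denominator choice is an assumption)."""
    return 100.0 * (abs(raw_bias) - abs(corrected_bias)) / abs(raw_bias)

# Hypothetical domain-averaged biases before and after correction (same units)
pct = bias_reduction_pct(raw_bias=-1.8, corrected_bias=-0.45)   # 75% reduction
```

A negative result would indicate that the correction increased the magnitude of the domain-averaged bias.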
Timing group delay and differential code bias corrections for BeiDou positioning
NASA Astrophysics Data System (ADS)
Guo, Fei; Zhang, Xiaohong; Wang, Jinling
2015-05-01
This article first clarifies the relationship between the parameters of timing group delay (TGD) and differential code bias (DCB) for BDS, and demonstrates the equivalence of TGD and DCB correction models, combining theory with practice. The TGD/DCB correction models have been extended to various occasions for BDS positioning, and such models have been evaluated with real triple-frequency datasets. To test the effectiveness of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out for BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays and carrier phase ambiguities is investigated comprehensively. Comparative analyses show that unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the estimates of PPP are subject to varying degrees of influence. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency, and they are much more sensitive to the differential code biases, particularly for the B2B3 combination. For PPP, the uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier phase ambiguities, resulting in a much longer convergence time. Even though the influence of the differential code biases can be mitigated over time and comparable positioning accuracy can be achieved after convergence, it is suggested that the differential code biases be handled properly, since this is vital for PPP convergence and integer ambiguity resolution.
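For a single-frequency user, the broadcast TGD enters as a simple range correction on the pseudorange, P_corr = P − c·TGD. A sketch of the B1I single-frequency case (the pseudorange and TGD values are hypothetical; dual-frequency combinations use a frequency-dependent scaling of the TGDs not shown here):

```python
C = 299_792_458.0  # speed of light, m/s

def apply_tgd(pseudorange_m, tgd_s):
    """Correct a single-frequency (B1I) pseudorange with the broadcast
    timing group delay: P_corr = P - c * TGD."""
    return pseudorange_m - C * tgd_s

# A TGD of a few nanoseconds corresponds to a metre-level range correction
p_corr = apply_tgd(pseudorange_m=22_345_678.0, tgd_s=4.6566e-9)
```

Leaving this metre-level term uncorrected is consistent with the factor-of-two SPP degradation reported in the abstract.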
The cost of adherence mismeasurement in serious mental illness: a claims-based analysis.
Shafrin, Jason; Forma, Felicia; Scherer, Ethan; Hatch, Ainslie; Vytlacil, Edward; Lakdawalla, Darius
2017-05-01
To quantify how adherence mismeasurement affects the estimated impact of adherence on inpatient costs among patients with serious mental illness (SMI). Proportion of days covered (PDC) is a common claims-based measure of medication adherence. Because PDC does not measure medication ingestion, however, it may inaccurately measure adherence. We derived a formula to correct the bias that occurs in adherence-utilization studies resulting from errors in claims-based measures of adherence. We conducted a literature review to identify the correlation between gold-standard and claims-based adherence measures. We derived a bias-correction methodology to address claims-based medication adherence measurement error. We then applied this methodology to a case study of patients with SMI who initiated atypical antipsychotics in 2 large claims databases. Our literature review identified 6 studies of interest. The 4 most relevant ones measured correlations between 0.38 and 0.91. Our preferred estimate implies that the effect of adherence on inpatient spending estimated from claims data would understate the true effect by a factor of 5.3, if there were no other sources of bias. Although our procedure corrects for measurement error, such error also may amplify or mitigate other potential biases. For instance, if adherent patients are healthier than nonadherent ones, measurement error makes the resulting bias worse. On the other hand, if adherent patients are sicker, measurement error mitigates the other bias. Measurement error due to claims-based adherence measures is worth addressing, alongside other more widely emphasized sources of bias in inference.
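One plausible reading of the factor-of-5.3 result is the classical errors-in-variables attenuation: if the claims-based adherence measure correlates with true adherence at r, the naive regression coefficient is attenuated by roughly r², so the correction factor is 1/r². This interpretation and the value of r below are assumptions for illustration, not stated in the abstract:

```python
def attenuation_correction(beta_naive, r):
    """Rescale a naive regression coefficient for classical measurement
    error, assuming reliability of the mismeasured regressor ~ r**2."""
    return beta_naive / (r ** 2)

# With r ~ 0.435 (within the 0.38-0.91 range from the literature review),
# 1 / r**2 is about 5.3, matching the understatement factor in the abstract
factor = 1 / 0.435 ** 2
```

As the abstract notes, this correction addresses measurement error only; selection effects (healthier or sicker adherent patients) would shift the estimate separately.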
An advanced method to assess the diet of free-ranging large carnivores based on scats.
Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P; Jago, Mark; Hofer, Heribert
2012-01-01
The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores.
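The exponential model for correction factor 1 (consumed mass per scat rising to an asymptote at large prey size) can be fitted with a simple search; a sketch on synthetic data, where the model form follows the abstract but the data, parameter values, and grid ranges are hypothetical:

```python
import math

def model(prey_mass, a, b):
    """Consumed prey mass to excrete one scat: a * (1 - exp(-b * m)),
    saturating at asymptote a for large prey body mass m."""
    return a * (1.0 - math.exp(-b * prey_mass))

def fit(masses, per_scat):
    """Coarse grid search minimizing squared error (stand-in for a proper
    nonlinear least-squares fit)."""
    best = None
    for a in [2.0 + 0.1 * i for i in range(60)]:        # candidate asymptotes, kg
        for b in [0.005 * j for j in range(1, 100)]:    # candidate rate constants
            sse = sum((model(m, a, b) - y) ** 2 for m, y in zip(masses, per_scat))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

masses = [1.0, 5.0, 20.0, 60.0, 200.0]           # prey body mass, kg
observed = [model(m, 4.0, 0.05) for m in masses] # synthetic feeding-trial data
a_hat, b_hat = fit(masses, observed)             # recovers a ~ 4.0, b ~ 0.05
```

Unlike a linear regression, this form levels off for large prey, which is what makes it biologically more meaningful for the cheetah feeding data.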
An Advanced Method to Assess the Diet of Free-Ranging Large Carnivores Based on Scats
Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P.; Jago, Mark; Hofer, Heribert
2012-01-01
Background The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Methodology/Principal Findings Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Conclusion/Significance Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores. PMID:22715373
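The asymptotic regression for correction factor 1 can be sketched in a few lines. The feeding-trial numbers below are invented for illustration (not the study's cheetah data), and the model form a·(1 − exp(−b·m)) is one plausible reading of a curve that "reaches an asymptote at large prey sizes":

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical feeding-trial data: prey body mass (kg) and consumed
# prey mass per excreted scat (kg). Values are for illustration only.
prey_mass = np.array([1, 5, 10, 20, 40, 80, 120], dtype=float)
mass_per_scat = np.array([0.4, 1.1, 1.6, 2.1, 2.5, 2.7, 2.8])

def asymptotic(m, a, b):
    """Exponential model: mass per scat rises toward an asymptote a."""
    return a * (1.0 - np.exp(-b * m))

(a, b), _ = curve_fit(asymptotic, prey_mass, mass_per_scat, p0=(3.0, 0.05))

def correction_factor_1(m):
    """Consumed prey mass per scat for a prey species of body mass m."""
    return asymptotic(m, a, b)
```

A linear fit, by contrast, would keep growing without bound for very large prey, which is the biologically implausible behaviour the authors replace.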
Fourcade, Yoan; Engler, Jan O.; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one “virtual” derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases. PMID:24818607
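The systematic sampling of records that performed best can be sketched as grid-based spatial thinning: keep at most one occurrence per grid cell. Coordinates and cell size below are illustrative assumptions, not the study's data:

```python
import numpy as np

def systematic_sample(lon, lat, cell_size):
    """Thin occurrence records to at most one per grid cell, a simple
    form of systematic sampling to reduce spatial sampling bias."""
    lon = np.asarray(lon)
    lat = np.asarray(lat)
    cells = {}
    for i, (x, y) in enumerate(zip(lon, lat)):
        key = (int(np.floor(x / cell_size)), int(np.floor(y / cell_size)))
        cells.setdefault(key, i)  # keep the first record seen in each cell
    return sorted(cells.values())

# Hypothetical, heavily clustered records: many points near (0, 0),
# mimicking unequal sampling effort across the study area.
rng = np.random.default_rng(0)
lon = np.concatenate([rng.normal(0, 0.1, 50), rng.uniform(-5, 5, 10)])
lat = np.concatenate([rng.normal(0, 0.1, 50), rng.uniform(-5, 5, 10)])
keep = systematic_sample(lon, lat, cell_size=1.0)
```

The oversampled cluster collapses to a handful of records while sparsely sampled cells each keep their single record.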
Quality Controlled Radiosonde Profile from MC3E
Toto, Tami; Jensen, Michael
2014-11-13
The sonde-adjust VAP produces data that correct documented biases in radiosonde humidity measurements. Unique fields contained within this datastream include the smoothed original relative humidity, the dry-bias-corrected relative humidity, and the final corrected relative humidity. The smoothed RH field refines the relative humidity from integers (the resolution of the instrument) to fractions of a percent. This profile is then used to calculate the dry-bias-corrected field. The final correction fixes a time-lag problem, using the dry-bias-corrected field as input to the algorithm. In addition to the dry bias, solar heating is another correction encompassed in the final corrected relative humidity field. Additional corrections were made to soundings at the extended facility sites (S0*) as necessary: erroneous surface elevation was corrected (and propagated up through the rest of the sounding) for S03, S04 and S05, and erroneous surface pressure was corrected at Chanute (S02).
How does bias correction of RCM precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Vaze, J.; Evans, J. P.
2014-09-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the difference between the tested methods is small in the modelling experiments here (and as reported in the literature), mainly because of the substantial corrections required and inconsistent errors over time (non-stationarity). The errors remaining in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome the limitations of the RCM in simulating precipitation sequences, which affect runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
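Empirical quantile mapping, one of the reviewed corrections, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation (which also handles wet-day frequency and a two-state gamma variant):

```python
import numpy as np

def quantile_map(model_hist, obs, model_fut):
    """Empirical quantile mapping: replace each model value by the
    observed value at the same empirical quantile, interpolating
    between the sorted climatologies."""
    q = np.linspace(0, 1, 101)
    mq = np.quantile(model_hist, q)   # model quantile function
    oq = np.quantile(obs, q)          # observed quantile function
    # rank each value within the historical model distribution,
    # then read off the matching observed quantile
    ranks = np.interp(model_fut, mq, q)
    return np.interp(ranks, q, oq)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 4.0, 5000)       # synthetic observed daily rainfall
model = obs * 0.7 + 1.0               # synthetic biased model rainfall
corrected = quantile_map(model, obs, model)
```

After mapping, the corrected series reproduces the observed distribution, though (as the paper notes) errors in the simulated precipitation sequence itself remain uncorrected.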
Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren
2014-10-20
This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias in cell number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both of the bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised.
Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions
NASA Astrophysics Data System (ADS)
Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.
2010-12-01
Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only the observed temporal variability on a point-by-point basis, not the spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving the spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs to preserve both the long-term temporal mean and variance of the precipitation data, and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated that preserved the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall.
The spatiotemporal variability of the spatio-temporally bias-corrected GCMs was evaluated against gridded observations, and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic response of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions was compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
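The core idea of generating spatial random fields with a prescribed correlation structure and a target spatial mean can be sketched via Cholesky factorisation of an exponential covariance model. The grid, variogram parameters, and correlation model below are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def spatial_field(coords, mean, sill, range_, rng):
    """Draw a Gaussian random field with an exponential correlation
    structure via Cholesky factorisation, then shift it so the field
    mean matches a target (e.g. the coarse-scale GCM daily rainfall)."""
    # pairwise distances between grid points
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sill * np.exp(-d / range_)                  # exponential covariance
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(coords)))
    z = mean + L @ rng.normal(size=len(coords))     # correlated draw
    return z - z.mean() + mean                      # enforce the target mean

rng = np.random.default_rng(6)
xy = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
field = spatial_field(xy, mean=5.0, sill=1.0, range_=3.0, rng=rng)
```

Each daily draw preserves the prescribed spatial correlation while its spatial mean is pinned to the coarse-scale value, which is the essence of the method described above.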
Background information: bias reduction = ( |domain-averaged ensemble mean bias| − |domain-averaged bias-corrected ensemble mean bias| ) / |domain-averaged bias-corrected ensemble mean bias| (NAEFS Products, EMC Ensemble Products, NCEP, National Weather Service)
Comparing State SAT Scores: Problems, Biases, and Corrections.
ERIC Educational Resources Information Center
Gohmann, Stephen F.
1988-01-01
One method to correct for selection bias in comparing Scholastic Aptitude Test (SAT) scores among states is presented, which is a modification of J. J. Heckman's Selection Bias Correction (1976, 1979). Empirical results suggest that sample selection bias is present in SAT score regressions. (SLD)
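Heckman's two-step selection correction, the basis of the modification discussed here, can be sketched on synthetic data: a probit participation equation followed by OLS on the selected sample with the inverse Mills ratio as an added regressor. All variable names and parameter values are invented for illustration:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 4000
z = rng.normal(size=n)                     # covariate driving participation
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T
took_test = (0.5 + 1.0 * z + u) > 0        # selection equation (who takes SAT)
x = rng.normal(size=n)
score = 2.0 + 1.5 * x + e                  # outcome equation

# Step 1: probit of participation, fitted by maximum likelihood
Z = np.column_stack([np.ones(n), z])
def negll(g):
    p = norm.cdf(Z @ g).clip(1e-10, 1 - 1e-10)
    return -(took_test * np.log(p) + (~took_test) * np.log(1 - p)).sum()
g = minimize(negll, np.zeros(2)).x

# Step 2: OLS on the selected sample, with the inverse Mills ratio
idx = Z @ g
mills = norm.pdf(idx) / norm.cdf(idx)
sel = took_test
Xc = np.column_stack([np.ones(sel.sum()), x[sel], mills[sel]])
beta = np.linalg.lstsq(Xc, score[sel], rcond=None)[0]
```

The coefficient on the Mills ratio estimates the selection effect (here the error correlation, 0.6), while the slope on x is recovered without selection bias.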
Dawani, Narendar; Nisar, Nighat; Khan, Nazeer; Syed, Shahbano; Tanweer, Navara
2012-12-27
Dental caries is highly prevalent and a significant public health problem among children throughout the world. Epidemiological data regarding the prevalence of dental caries amongst Pakistani pre-school children are very limited. The objective of this study was to determine the frequency of dental caries among pre-school children of Saddar Town, Karachi, Pakistan and the factors related to caries. A cross-sectional study of 1000 preschool children was conducted in Saddar Town, Karachi. Two-stage cluster sampling was used to select the sample. In the first stage, eight clusters were selected randomly from a total of 11 clusters. In the second stage, preschools were identified within the eight selected clusters, and children in the 3- to 6-year age group were assessed for dental caries. Caries prevalence was 51%, with a mean dmft score of 2.08 (±2.97), of which decayed teeth constituted 1.95. The mean dmft of males was 2.3 (±3.08) and of females was 1.90 (±2.90). The mean dmft of 3-, 4-, 5- and 6-year-olds was 1.65, 2.11, 2.16 and 3.11, respectively. A significant association was found between dental caries and the following variables: age group of 4 years (p-value < 0.029, RR = 1.248, 95% bias-corrected CI 0.029-0.437) and 5 years (p-value < 0.009, RR = 1.545, 95% bias-corrected CI 0.047-0.739), presence of dental plaque (p-value < 0.003, RR = 0.744, 95% bias-corrected CI (-0.433)-(-0.169)), poor oral hygiene (p-value < 0.000, RR = 0.661, 95% bias-corrected CI (-0.532)-(-0.284)), and consumption of non-sweetened milk (p-value < 0.049, RR = 1.232, 95% bias-corrected CI 0.061-0.367). Half of the preschoolers had dental caries, coupled with a high prevalence of unmet dental treatment needs. An association between caries experience and age of child, consumption of non-sweetened milk, dental plaque and poor oral hygiene was established.
NASA Technical Reports Server (NTRS)
Zhu, Lei; Jacob, Daniel J.; Kim, Patrick S.; Fisher, Jenny A.; Yu, Karen; Travis, Katherine R.; Mickley, Loretta J.; Yantosca, Robert M.; Sulprizio, Melissa P.; De Smedt, Isabelle;
2016-01-01
Formaldehyde (HCHO) column data from satellites are widely used as a proxy for emissions of volatile organic compounds (VOCs), but validation of the data has been extremely limited. Here we use highly accurate HCHO aircraft observations from the NASA SEAC4RS (Studies of Emissions, Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys) campaign over the southeast US in August-September 2013 to validate and intercompare six retrievals of HCHO columns from four different satellite instruments (OMI (Ozone Monitoring Instrument), GOME-2A and GOME-2B (Global Ozone Monitoring Experiment 2), and OMPS (Ozone Mapping and Profiler Suite)) and three different research groups. The GEOS (Goddard Earth Observing System)-Chem chemical transport model is used as a common intercomparison platform. All retrievals feature a HCHO maximum over Arkansas and Louisiana, consistent with the aircraft observations and reflecting high emissions of biogenic isoprene. The retrievals are also interconsistent in their spatial variability over the southeast US (r = 0.4 to 0.8 on a 0.5° × 0.5° grid) and in their day-to-day variability (r = 0.5 to 0.8). However, all retrievals are biased low in the mean by 20 to 51 percent, which would lead to a corresponding bias in estimates of isoprene emissions from the satellite data. The smallest bias is for OMI-BIRA (Ozone Monitoring Instrument - Belgian Institute for Space Aeronomy), which has high corrected slant columns relative to the other retrievals and low scattering weights in its air mass factor (AMF) calculation. OMI-BIRA has systematic error in its assumed vertical HCHO shape profiles for the AMF calculation, and correcting this would eliminate its bias relative to the SEAC4RS data. Our results support the use of satellite HCHO data as a quantitative proxy for isoprene emission after correction of the low mean bias.
There is no evident pattern in the bias, suggesting that a uniform correction factor may be applied to the data until better understanding is achieved.
Malkyarenko, Dariya I; Chenevert, Thomas L
2014-12-01
To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along the superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. Empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets, both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-averaged trace image correction was adequate only for the isotropic medium. Empiric scalar adjustment of an independent gradient nonlinearity model adequately described the DWI bias for a clinical scanner. The observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
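The effect of a b-value scale error on ADC, and its correction, can be sketched with a simplified two-point model. This is not the study's full 3D per-direction correction; the nonlinearity values below are illustrative assumptions:

```python
import numpy as np

def corrected_adc(s0, sb, b_nominal, c_map):
    """Two-point ADC with gradient-nonlinearity correction.
    c_map is the spatially varying ratio of actual to nominal b-value
    (1.0 at isocenter); values here are illustrative only."""
    adc_apparent = np.log(s0 / sb) / b_nominal
    return adc_apparent / c_map

# Synthetic phantom: true ADC of ice-water everywhere, but the
# effective b-value is inflated away from isocenter.
true_adc = 1.1e-3                          # mm^2/s
b = 1000.0                                 # nominal b-value, s/mm^2
c_map = np.array([1.00, 1.05, 1.20])       # isocenter -> increasing offset
sb = 1.0 * np.exp(-c_map * b * true_adc)   # signals the scanner measures
adc_biased = np.log(1.0 / sb) / b          # naive fit ignores the nonlinearity
adc_fixed = corrected_adc(1.0, sb, b, c_map)
```

Dividing the apparent ADC by the b-value scale factor removes the offset-dependent bias, mirroring the "corrected b-values" route described in the abstract.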
How does bias correction of regional climate model precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Wang, B.; Vaze, J.; Evans, J. P.
2015-02-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the methods are small in the modelling experiments here (and as reported in the literature), mainly due to the substantial corrections required and inconsistent errors over time (non-stationarity). The errors in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome limitations of the RCM in simulating precipitation sequence, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied on hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes for these three bias correction methods through split-sample and cross-validation approaches; the split-sample approach mimics the application to future climate scenarios, and the cross-validation framework was developed with particular focus on new extremes. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying
2011-01-01
Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular tissue (dense tissue). A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy-C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM; the generated bias field is then smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to images corrected by N3 alone, by FCM alone, and by another method, coherent local intensity clustering (CLIC). The segmentation quality based on the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them.
In the first reading session, the radiologist found a (N3+FCM > N3 > FCM) ranking in 17 breasts, (N3+FCM > N3 = FCM) in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3+FCM > FCM) in 2 breasts. The results of the second reading session were similar. Each pairwise Wilcoxon signed-rank test was significant, showing N3+FCM superior to both N3 and FCM, and N3 superior to FCM. The performance of the new N3+FCM algorithm was comparable to that of CLIC, showing equivalent quality in 57/60 breasts. Choosing an appropriate bias field correction method is a very important preprocessing step to allow an accurate segmentation of fibroglandular tissues based on breast MRI for quantitative measurement of breast density. Both the proposed combined N3+FCM algorithm and CLIC yield satisfactory results.
Junk, J; Ulber, B; Vidal, S; Eickermann, M
2015-11-01
Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of the agricultural production is the infestation level by pests. Based on long-term field surveys and meteorological observations, a calibration of an existing model describing the migration of the pest insect Ceutorhynchus napi was possible. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only the results derived from the bias-corrected output of the regional climate models showed satisfying results. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.
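The linear correction applied to air temperature can be sketched as a mean-and-variance rescaling against observations. This is a minimal illustration on synthetic series; the study's exact formulation may differ:

```python
import numpy as np

def linear_correction(model, obs_mean, obs_std):
    """Linear bias correction for air temperature: rescale the model
    series so its mean and standard deviation match the observations."""
    return obs_mean + (model - model.mean()) * (obs_std / model.std())

rng = np.random.default_rng(3)
obs = rng.normal(15.0, 6.0, 1000)      # synthetic observed temperature (°C)
model = rng.normal(12.5, 8.0, 1000)    # synthetic biased RCM output
corrected = linear_correction(model, obs.mean(), obs.std())
```

Precipitation, being skewed and bounded at zero, is instead handled with quantile mapping in the study, since a linear rescaling can produce negative rainfall.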
An improved level set method for brain MR images segmentation and bias correction.
Chen, Yunjie; Zhang, Jianwei; Macione, Jim
2009-10-01
Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term of the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.
NASA Astrophysics Data System (ADS)
Passow, Christian; Donner, Reik
2017-04-01
Quantile mapping (QM) is an established concept that allows one to correct systematic biases in multiple quantiles of the distribution of a climatic observable. It shows remarkable results in correcting biases in historical simulations against observational data and outperforms simpler correction methods that adjust only the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend-preserving versions of QM were introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM-based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend-preserving approach to bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles at the same resolution as the given data and to include trends or other dependencies. We demonstrate the performance of the new linear regression quantile mapping (RQM) method in correcting biases of temperature and precipitation products from historical runs (1959 - 2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: A.J. Cannon, S.R. Sobie, T.Q. Murdock: Bias Correction of GCM Precipitation by Quantile Mapping - How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6938, 2015.
A.J. Cannon: Multivariate Bias Correction of Climate Model Outputs - Matching Marginal Distributions and Inter-variable Dependence Structure. Journal of Climate, 29, 7045, 2016.
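The basic empirical QM idea underlying all of these variants can be sketched in a few lines of numpy. This is an illustrative implementation, not the authors' RQM code; the gamma-distributed "observations" and the additive model bias are invented for the example.

```python
import numpy as np

def quantile_map(model, obs, values):
    """Empirical quantile mapping: map `values` from the model's
    distribution onto the observed distribution."""
    quantiles = np.linspace(0, 1, 101)
    model_q = np.quantile(model, quantiles)   # model inverse CDF at fixed quantiles
    obs_q = np.quantile(obs, quantiles)       # observed inverse CDF
    # Locate each value's non-exceedance probability under the model,
    # then read off the observed value at that same probability.
    probs = np.interp(values, model_q, quantiles)
    return np.interp(probs, quantiles, obs_q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 5000)            # synthetic "observed" rainfall
model = rng.gamma(2.0, 3.0, 5000) + 2.0    # synthetic biased "model" rainfall
corrected = quantile_map(model, obs, model)
```

DQM and QDM extend this by removing or preserving the modeled trend around the mapping, and the proposed RQM would replace the fixed empirical quantiles with quantile-regression estimates that can depend on time.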
Stability and bias of classification rates in biological applications of discriminant analysis
Williams, B.K.; Titus, K.; Hines, J.E.
1990-01-01
We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that the stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
Sved, John A; Cameron, Emilie C; Gilchrist, A Stuart
2013-01-01
There is a substantial literature on the use of linkage disequilibrium (LD) to estimate effective population size using unlinked loci. The Ne estimates are extremely sensitive to the sampling process, and there is currently no theory to cope with the possible biases. We derive formulae for the analysis of idealised populations mating at random with multi-allelic (microsatellite) loci. The 'Burrows composite index' is introduced in a novel way with a 'composite haplotype table'. We show that in a sample of diploid size S, the mean value of χ² or r² from the composite haplotype table is biased by a factor of 1 − 1/(2S − 1)², rather than the usual factor 1 + 1/(2S − 1) for a conventional haplotype table. But analysis of population data using these formulae leads to Ne estimates that are unrealistically low. We provide theory and simulation to show that this bias towards low Ne estimates is due to null alleles, and introduce a randomised permutation correction to compensate for the bias. We also consider the effect of introducing a within-locus disequilibrium factor to r², and find that this factor leads to a bias in the Ne estimate. However this bias can be overcome using the same randomised permutation correction, to yield an altered r² with lower variance than the original r², and one that is also insensitive to null alleles. The resulting formulae are used to provide Ne estimates on 40 samples of the Queensland fruit fly, Bactrocera tryoni, from populations with widely divergent Ne expectations. Linkage relationships are known for most of the microsatellite loci in this species. We find that there is little difference in the estimated Ne values from using known unlinked loci as compared to using all loci, which is important for conservation studies where linkage relationships are unknown.
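The two sampling-bias factors quoted above are simple enough to encode directly. The helpers below are a sketch (the function names are ours, not the paper's) of how a mean composite r² could be de-biased before being converted into an Ne estimate.

```python
def composite_bias_factor(S):
    """Multiplicative bias of the mean chi^2/r^2 from a composite haplotype
    table built from S diploid individuals, as stated in the abstract."""
    return 1.0 - 1.0 / (2 * S - 1) ** 2

def conventional_bias_factor(S):
    """Corresponding factor for a conventional haplotype table of 2S gametes."""
    return 1.0 + 1.0 / (2 * S - 1)

def debias_composite_r2(r2_mean, S):
    # divide out the sampling bias before converting r^2 into an Ne estimate
    return r2_mean / composite_bias_factor(S)
```

Note the contrast: the composite-table factor is slightly below one while the conventional-table factor is above one, so the two tables need corrections in opposite directions.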
Eliminating bias in rainfall estimates from microwave links due to antenna wetting
NASA Astrophysics Data System (ADS)
Fencl, Martin; Rieckermann, Jörg; Bareš, Vojtěch
2014-05-01
Commercial microwave links (MWLs) are point-to-point radio systems which are widely used in telecommunication networks. They operate at frequencies where the transmitted power is mainly disturbed by precipitation. Thus, signal attenuation from MWLs can be used to estimate path-averaged rain rates, which is conceptually very promising, since MWLs cover about 20% of the surface area. Unfortunately, MWL rainfall estimates are often positively biased due to additional attenuation caused by antenna wetting. To correct MWL observations a posteriori and reduce the wet antenna effect (WAE), both empirically and physically based models have been suggested. However, it is challenging to calibrate these models, because the wet antenna attenuation depends both on the MWL properties (frequency, type of antennas, shielding, etc.) and on different climatic factors (temperature, dew point, wind velocity and direction, etc.). Instead, it seems straightforward to keep antennas dry by shielding them. In this investigation we compare the effectiveness of antenna shielding to that of model-based corrections in reducing the WAE. The experimental setup, located in Dübendorf, Switzerland, consisted of a 1.85 km long commercial dual-polarization microwave link at 38 GHz and five optical disdrometers. The MWL was operated without shielding from March to October 2011 and with shielding from October 2011 to July 2012. This unique experimental design made it possible to identify the attenuation due to antenna wetting, which can be computed as the difference between the measured and the theoretical attenuation. The theoretical path-averaged attenuation was calculated from the path-averaged drop size distribution. During the unshielded periods, the total bias caused by the WAE was 0.74 dB, which was reduced by shielding to 0.39 dB for the horizontal polarization (vertical: reduction from 0.96 dB to 0.44 dB). Interestingly, the model-based correction (Schleiss et al. 
2013) was more effective, because it reduced the bias of the unshielded periods to 0.07 dB for the horizontal polarization (vertical: 0.06 dB). Applying the same model-based correction to the shielded periods reduces the bias even further, to -0.03 dB and -0.01 dB, respectively. This indicates that additional attenuation could also be caused by other effects, such as reflection of sidelobes from wet surfaces and other environmental factors. Further, model-based corrections may not correctly capture the nature of the WAE, but rather provide only an empirical correction. This claim is supported by the fact that a detailed analysis of particular events reveals that the performance of both antenna shielding and the model-based correction differs substantially from event to event. Further investigation based on direct observation of antenna wetting and other environmental variables needs to be performed to identify the nature of the attenuation bias more properly. Schleiss, M., J. Rieckermann, and A. Berne, 2013: Quantification and modeling of wet-antenna attenuation for commercial microwave links. IEEE Geosci. Remote Sens. Lett., 10.1109/LGRS.2012.2236074.
A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield
Ringard, Justine; Seyler, Frederique; Linguet, Laurent
2017-01-01
Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that relies not on the daily gauge data of the pixel to be corrected, but on the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that groups precipitation data with the same temporal distributions. The second step uses the quantile mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE (rRMSE) and relative bias (rBIAS) statistical errors. The results show that the change of analysis scale reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723
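A minimal numpy/scipy sketch of the two-step idea: cluster pixels on their temporal pattern, then fit a quantile map per cluster on gauged pixels only and apply it to every pixel in that cluster, gauged or not. All data here are synthetic, and k-means on standardised series is only a stand-in for the paper's hydroclimatic classification.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
n_pix, n_days = 30, 365
sat = rng.gamma(2.0, 2.0, (n_pix, n_days))   # synthetic daily SPP rainfall
gauge = sat * 0.8                            # pretend truth: SPP 25% too wet
has_gauge = np.arange(n_pix) % 2 == 0        # only half the pixels are gauged

# Step 1: hydroclimatic areas from each pixel's standardised temporal pattern
z = (sat - sat.mean(1, keepdims=True)) / sat.std(1, keepdims=True)
_, labels = kmeans2(z, 3, minit="++", seed=2)

# Step 2: per-area quantile mapping trained on gauged pixels only
q = np.linspace(0, 1, 101)
corrected = np.empty_like(sat)
for c in np.unique(labels):
    in_c = labels == c
    train = in_c & has_gauge
    if not train.any():                      # fall back if an area has no gauge
        train = in_c
    sat_q = np.quantile(sat[train], q)
    gauge_q = np.quantile(gauge[train], q)
    probs = np.interp(sat[in_c], sat_q, q)
    corrected[in_c] = np.interp(probs, q, gauge_q)
```

Because the gauges were generated as a pure 0.8 scaling of the SPP here, the per-cluster quantile maps recover roughly that factor for every pixel, including the ungauged ones.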
HESS Opinions "Should we apply bias correction to global and regional climate model data?"
NASA Astrophysics Data System (ADS)
Ehret, U.; Zehe, E.; Wulfmeyer, V.; Warrach-Sagi, K.; Liebert, J.
2012-04-01
Despite considerable progress in recent years, the output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem bias correction (BC), i.e. the correction of model output towards observations in a post-processing step for its subsequent application in climate change impact studies, has now become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Bias correction largely neglects feedback mechanisms, and it is unclear whether bias correction methods are time-invariant under climate change conditions. Applying bias correction increases the agreement of Climate Model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this masks rather than reduces uncertainty, which may lead to avoidable forejudging of end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction, and propose ways to cope with biased output of Circulation Models in the short term and to reduce the bias in the long term.
The most promising strategy for improved future Global and Regional Circulation Model simulations is the increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating the entire uncertainty range associated with climate change predictions openly and hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological community and end users of climate change impact studies.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2018-02-01
Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
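A toy version of this two-step scheme can be sketched with scikit-learn's FastICA. The mixing matrix, the gamma-distributed sources, and the plain mean/variance adjustment used for the univariate step are all invented for the illustration; the published method applies more careful univariate bias correction to the independent signals.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_t, n_grid = 2000, 5
mix = rng.normal(size=(n_grid, n_grid))                  # spatial mixing
obs = rng.gamma(2.0, 1.0, (n_t, n_grid)) @ mix.T         # "observed" correlated field
gcm = obs * 1.5 + 2.0 + rng.normal(0.0, 0.1, obs.shape)  # biased "model" field

# Step 1: transform to independent signals, correct each one, back-transform
ica = FastICA(n_components=n_grid, whiten="unit-variance", random_state=0)
s_obs = ica.fit_transform(obs)
s_gcm = ica.transform(gcm)
s_corr = (s_gcm - s_gcm.mean(0)) / s_gcm.std(0) * s_obs.std(0) + s_obs.mean(0)
spatial = ica.inverse_transform(s_corr)

# Step 2: conventional grid-by-grid correction of the spatially adjusted field
corrected = ((spatial - spatial.mean(0)) / spatial.std(0)
             * obs.std(0) + obs.mean(0))
```

Step 2 pins the per-grid statistics to the observations; the gain from step 1 is that the cross-grid dependence of the corrected field is inherited from the observed mixing rather than from the raw model.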
A Bayesian approach to truncated data sets: An application to Malmquist bias in Supernova Cosmology
NASA Astrophysics Data System (ADS)
March, Marisa Cristina
2018-01-01
A problem commonly encountered in statistical analysis of data is that of truncated data sets. A truncated data set is one in which a number of data points are completely missing from a sample; this is in contrast to a censored sample, in which partial information is missing from some data points. In astrophysics this problem is commonly seen in a magnitude-limited survey, where the survey is incomplete at fainter magnitudes, that is, certain faint objects are simply not observed. The effect of this `missing data' is manifested as Malmquist bias and can result in biased parameter inference if it is not accounted for. In Frequentist methodologies the Malmquist bias is often corrected for by analysing many simulations and computing the appropriate correction factors. One problem with this methodology is that the corrections are model dependent. In this poster we derive a Bayesian methodology for accounting for truncated data sets in problems of parameter inference and model selection. We first show the methodology for a simple Gaussian linear model and then go on to show how to account for a truncated data set in cosmological parameter inference with a magnitude-limited type Ia supernova survey.
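The core of the truncation correction, in likelihood form, is that each observed point's density must be renormalised by the probability of making it into the sample. Below is a minimal frequentist sketch with scipy (a Bayesian version would put priors on mu and sigma and sample the same likelihood); the Gaussian toy data and the truncation limit are invented for the example.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
mu_true, sigma_true, lim = 0.0, 1.0, 0.5
full = rng.normal(mu_true, sigma_true, 20000)
obs = full[full < lim]          # magnitude-limited: "faint" objects unobserved

def neg_log_like(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # truncated-normal likelihood: density divided by P(point is observed)
    return -(stats.norm.logpdf(obs, mu, sigma)
             - stats.norm.logcdf(lim, mu, sigma)).sum()

start = [obs.mean(), np.log(obs.std())]
fit = optimize.minimize(neg_log_like, start, method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```

The naive sample mean of `obs` is pulled well below the true value (the Malmquist-like bias), while the truncation-aware likelihood recovers both parameters.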
NASA Astrophysics Data System (ADS)
Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.
2012-11-01
Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). 
We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
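The contrast between the two correction styles can be sketched with a toy daily series. The coefficients below are invented for illustration (the real method uses calibrated wind- and temperature-dependent catch ratios for solid and liquid precipitation); the point is only that a single mean factor conserves the period total while placing the correction on the wrong days.

```python
import numpy as np

def catch_correction(precip, wind, temp):
    # Illustrative coefficients only: undercatch grows with wind speed
    # and is much worse for solid precipitation (temp below 0 degC).
    solid = temp < 0.0
    factor = np.where(solid, 1.0 + 0.10 * wind, 1.0 + 0.02 * wind)
    return precip * factor

rng = np.random.default_rng(7)
days = 90
precip = rng.gamma(1.5, 2.0, days)   # gauge-measured daily precipitation
wind = rng.gamma(2.0, 2.0, days)     # daily wind speed (m/s)
temp = rng.normal(0.0, 4.0, days)    # daily temperature (degC)

daily = catch_correction(precip, wind, temp)   # TSV-style daily correction
mean_factor = daily.sum() / precip.sum()       # HMM-style single mean factor
monthly = precip * mean_factor

# both conserve the period total; only the daily correction puts the extra
# precipitation on the cold, windy days where the undercatch occurred
```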
Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models
ERIC Educational Resources Information Center
Raykov, Tenko
2005-01-01
A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…
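The bootstrap bias-correction relation the abstract alludes to, namely subtracting the estimated bias (the mean of the resampled estimates minus the sample estimate) from the sample estimate, can be sketched generically. The plug-in variance below stands in for the noncentrality parameter purely for illustration.

```python
import numpy as np

def bootstrap_bias_correct(data, estimator, n_boot=2000, seed=0):
    """Generic bootstrap bias correction: theta - (mean(theta*) - theta)."""
    rng = np.random.default_rng(seed)
    theta = estimator(data)
    boots = np.array([estimator(rng.choice(data, data.size, replace=True))
                      for _ in range(n_boot)])
    return theta - (boots.mean() - theta)

def plug_in_var(d):
    return d.var()                 # divides by n, so biased downward

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 30)       # small sample, true variance = 4
corrected = bootstrap_bias_correct(x, plug_in_var)
```

For the plug-in variance the bootstrap detects the downward bias and pushes the estimate up toward the unbiased (n − 1) version; the same recipe applies to any scalar estimator.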
NASA Astrophysics Data System (ADS)
Mehrotra, Rajeshwar; Sharma, Ashish
2012-12-01
The quality of the absolute estimates of general circulation models (GCMs) calls into question the direct use of GCM outputs for climate change impact assessment studies, particularly at regional scales. Statistical correction of GCM output is often necessary when significant systematic biases occur between the modeled output and observations. A common procedure is to correct the GCM output by removing the systematic biases in low-order moments relative to observations or to reanalysis data at daily, monthly, or seasonal timescales. In this paper, we present an extension of a recently published nested bias correction (NBC) technique to correct for biases in the low- as well as higher-order moments of the GCM-derived variables across multiple selected timescales. The proposed recursive nested bias correction (RNBC) approach offers an improved basis for applying bias correction at multiple timescales over the original NBC procedure. The method ensures that the bias-corrected series exhibits improvements that are consistently spread over all of the timescales considered. Different variations of the approach, from the standard NBC to the more complex recursive alternatives, are tested to assess their impacts on a range of GCM-simulated atmospheric variables of interest in downscaling applications related to hydrology and water resources. Results of the study suggest that RNBCs with three to five iterations are the most effective in removing distributional and persistence-related biases across the timescales considered.
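A stripped-down, non-recursive illustration of the nesting idea: correct the daily statistics first, then the statistics of the monthly means, and rescale each month's daily values to reproduce its corrected monthly mean. The synthetic gamma series and the plain mean/standard-deviation adjustment are stand-ins; the actual NBC also matches lag-1 autocorrelations, and the RNBC iterates this cycle several times.

```python
import numpy as np

def standardise_to(x, target_mean, target_std):
    return (x - x.mean()) / x.std() * target_std + target_mean

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 2.0, 3600)                 # "observed" daily series
gcm = rng.gamma(2.0, 2.0, 3600) * 1.3 + 1.0     # biased "model" daily series

# nest: correct daily statistics first ...
daily = standardise_to(gcm, obs.mean(), obs.std())
# ... then correct the statistics of the monthly (30-day) means ...
obs_m = obs.reshape(-1, 30).mean(1)
mod_m = daily.reshape(-1, 30).mean(1)
mod_m_corr = standardise_to(mod_m, obs_m.mean(), obs_m.std())
# ... and rescale each month's days to reproduce its corrected monthly mean
nested = (daily.reshape(-1, 30) * (mod_m_corr / mod_m)[:, None]).ravel()
```

After the monthly pass, the daily-scale statistics drift slightly, which is exactly why the recursive variant repeats the daily and monthly corrections until both scales are consistent.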
2012-01-01
Background Dental caries is highly prevalent and a significant public health problem among children throughout the world. Epidemiological data regarding the prevalence of dental caries amongst Pakistani pre-school children is very limited. The objective of this study is to determine the frequency of dental caries among pre-school children of Saddar Town, Karachi, Pakistan and the factors related to caries. Methods A cross-sectional study of 1000 preschool children was conducted in Saddar town, Karachi. Two-stage cluster sampling was used to select the sample. At the first stage, eight clusters were selected randomly from a total of 11 clusters. At the second stage, from the eight selected clusters, preschools were identified and children in the 3- to 6-year age group were assessed for dental caries. Results Caries prevalence was 51%, with a mean dmft score of 2.08 (±2.97), of which decayed teeth constituted 1.95. The mean dmft of males was 2.3 (±3.08) and of females was 1.90 (±2.90). The mean dmft of 3-, 4-, 5- and 6-year-olds was 1.65, 2.11, 2.16 and 3.11, respectively. A significant association was found between dental caries and the following variables: age group of 4 years (p ≤ 0.029, RR = 1.248, 95% bias-corrected CI 0.029-0.437) and 5 years (p ≤ 0.009, RR = 1.545, 95% bias-corrected CI 0.047-0.739), presence of dental plaque (p ≤ 0.003, RR = 0.744, 95% bias-corrected CI −0.433 to −0.169), poor oral hygiene (p ≤ 0.000, RR = 0.661, 95% bias-corrected CI −0.532 to −0.284), as well as consumption of non-sweetened milk (p ≤ 0.049, RR = 1.232, 95% bias-corrected CI 0.061-0.367). Conclusion Half of the preschoolers had dental caries, coupled with a high prevalence of unmet dental treatment needs. Associations between caries experience and age of child, consumption of non-sweetened milk, dental plaque and poor oral hygiene were established. PMID:23270546
A retrieval-based approach to eliminating hindsight bias.
Van Boekel, Martin; Varma, Keisha; Varma, Sashank
2017-03-01
Individuals exhibit hindsight bias when they are unable to recall their original responses to novel questions after correct answers are provided to them. Prior studies have eliminated hindsight bias by modifying the conditions under which original judgments or correct answers are encoded. Here, we explored whether hindsight bias can be eliminated by manipulating the conditions that hold at retrieval. Our retrieval-based approach predicts that if the conditions at retrieval enable sufficient discrimination of memory representations of original judgments from memory representations of correct answers, then hindsight bias will be reduced or eliminated. Experiment 1 used the standard memory design to replicate the hindsight bias effect in middle-school students. Experiments 2 and 3 modified the retrieval phase of this design, instructing participants beforehand that they would be recalling both their original judgments and the correct answers. As predicted, this enabled participants to form compound retrieval cues that discriminated original judgment traces from correct answer traces, and it eliminated hindsight bias. Experiment 4 found that when participants were not instructed beforehand that they would be making both recalls, they did not form discriminating retrieval cues, and hindsight bias returned. These experiments delineate the retrieval conditions that produce, and fail to produce, hindsight bias.
Benchmarking by HbA1c in a national diabetes quality register--does measurement bias matter?
Carlsen, Siri; Thue, Geir; Cooper, John Graham; Røraas, Thomas; Gøransson, Lasse Gunnar; Løvaas, Karianne; Sandberg, Sverre
2015-08-01
Bias in HbA1c measurement could give a wrong impression of the standard of care when benchmarking diabetes care. The aim of this study was to evaluate how measurement bias in HbA1c results may influence the benchmarking process performed by a national diabetes register. Using data from 2012 from the Norwegian Diabetes Register for Adults, we included HbA1c results from 3584 patients with type 1 diabetes attending 13 hospital clinics, and 1366 patients with type 2 diabetes attending 18 GP offices. Correction factors for HbA1c were obtained by comparing the results of the hospital laboratories'/GP offices' external quality assurance (EQA) scheme with the target value from a reference method. Compared with the uncorrected yearly median HbA1c values for hospital clinics and GP offices, the EQA-corrected HbA1c values were within ±0.2% (2 mmol/mol) for all but one hospital clinic, whose value was reduced by 0.4% (4 mmol/mol). Three hospital clinics reduced the proportion of patients with poor glycemic control, one by 9% and two by 4%. For most participants in our study, correcting for measurement bias had little effect on the yearly median HbA1c value or the percentage of patients achieving glycemic goals. However, at three hospital clinics correcting for measurement bias had an important effect on HbA1c benchmarking results, especially with regard to the percentages of patients achieving glycemic targets. The analytical quality of HbA1c should be taken into account when comparing benchmarking results.
Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.
2012-01-01
Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. 
Because the data sets imputed under our model can be investigated in additional subsequent analyses, our method will be useful for preparing data for applications in diverse contexts in population genetics and molecular ecology. PMID:22851645
Estimation and correction of visibility bias in aerial surveys of wintering ducks
Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.
2008-01-01
Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1-100 decoys), and interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36-42%, and associated standard errors increased 38-55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
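The correction structure described, weighting each count by its estimated detection probability and by the counting bias, amounts to a Horvitz-Thompson style estimator. The logistic coefficients and the counts below are invented for the sketch; only the 78% counting-bias figure is taken from the abstract.

```python
import numpy as np

def p_detect(group_size, forested):
    # hypothetical fitted sightability model (not the paper's coefficients):
    # detection improves with group size and is worse in forested wetlands
    logit = 0.5 + 0.03 * group_size - 1.2 * forested
    return 1.0 / (1.0 + np.exp(-logit))

count_bias = 0.78   # observers count ~78% of birds in groups they do detect

counts = np.array([40, 12, 75, 5, 60])   # raw counts of detected groups
size = counts
forest = np.array([0, 1, 0, 1, 1])       # 1 = forested cover, 0 = open water

# Horvitz-Thompson style correction: inflate each raw count by the inverse
# of its detection probability and by the counting bias
corrected = counts / (p_detect(size, forest) * count_bias)
estimate = corrected.sum()
```

The bootstrap variance step in the paper would then resample transects and re-run exactly this correction on each resample.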
NASA Astrophysics Data System (ADS)
Worqlul, Abeyou W.; Ayana, Essayas K.; Maathuis, Ben H. P.; MacAlister, Charlotte; Philpot, William D.; Osorio Leyton, Javier M.; Steenhuis, Tammo S.
2018-01-01
In many developing countries and remote areas of important ecosystems, good quality precipitation data are neither available nor readily accessible. Satellite observations and processing algorithms are being extensively used to produce satellite rainfall products (SREs). Nevertheless, these products are prone to systematic errors and need extensive validation before they can be used for streamflow simulations. In this study, we investigated and corrected the bias of Multi-Sensor Precipitation Estimate-Geostationary (MPEG) data. The corrected MPEG dataset was used as input to the semi-distributed hydrological model Hydrologiska Byråns Vattenbalansavdelning (HBV) for simulation of the discharge of the Gilgel Abay and Gumara watersheds in the Upper Blue Nile basin, Ethiopia. The results indicated that the MPEG satellite rainfall captured 81% and 78% of the gauged rainfall variability, with a consistent bias of underestimating the gauged rainfall by 60%. A linear bias correction significantly reduced the bias while maintaining the coefficient of correlation. For both watersheds, the flow simulated using the bias-corrected MPEG SRE was comparable to that obtained with gauged rainfall. The study indicated the potential of the MPEG SRE in water budget studies after applying a linear bias correction.
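A linear bias correction of the kind applied to the MPEG data can be sketched as a least-squares scale factor: it removes the multiplicative underestimation while leaving the correlation coefficient untouched. The synthetic series and the 0.4 scale are invented; only the idea of a correlation-preserving linear correction comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)
gauge = rng.gamma(2.0, 5.0, 1000)             # synthetic gauged rainfall
sre = 0.4 * gauge + rng.normal(0, 1, 1000)    # satellite estimate, ~60% too low
sre = np.clip(sre, 0.0, None)                 # rainfall cannot be negative

# least-squares scale factor through the origin: removes the multiplicative
# bias, and any positive rescaling preserves the correlation exactly
factor = (gauge * sre).sum() / (sre ** 2).sum()
corrected = factor * sre
```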
Bias correction of satellite-based rainfall data
NASA Astrophysics Data System (ADS)
Bhattacharya, Biswa; Solomatine, Dimitri
2015-04-01
Limited availability of hydro-meteorological data in many catchments restricts the possibility of reliable hydrological analyses, especially for near-real-time predictions. However, the variety of satellite-based and meteorological model products for rainfall provides new opportunities. Oftentimes the accuracy of these rainfall products, when compared to rain gauge measurements, is not impressive. The systematic differences of these rainfall products from gauge observations can be partially compensated by adopting a bias (error) correction. Many such methods correct the satellite-based rainfall data by comparing their mean value to the mean value of rain gauge data. Refined approaches may first identify a suitable timescale at which the different data products are better comparable and then employ a bias correction at that timescale. More elegant methods use quantile-to-quantile bias correction, which, however, assumes that the available (often limited) sample size is sufficient for comparing probabilities of the different rainfall products. Analysis of rainfall data and understanding of the process of its generation reveal that the bias in different rainfall data varies in space and time. The time aspect is sometimes taken into account by considering seasonality. In this research we have adopted a bias correction approach that takes into account the variation of rainfall in space and time. A clustering-based approach is employed in which every new data point (e.g. of the Tropical Rainfall Measuring Mission (TRMM)) is first assigned to a specific cluster of that data product; then, by identifying the corresponding cluster of gauge data, the bias correction specific to that cluster is adopted. The presented approach considers the space-time variation of rainfall, and as a result the corrected data are more realistic. Keywords: bias correction, rainfall, TRMM, satellite rainfall
A Dynamical Downscaling Approach with GCM Bias Corrections and Spectral Nudging
NASA Astrophysics Data System (ADS)
Xu, Z.; Yang, Z.
2013-12-01
To reduce the biases in regional climate downscaling simulations, a dynamical downscaling approach with GCM bias corrections and spectral nudging is developed and assessed over North America. Regional climate simulations are performed with the Weather Research and Forecasting (WRF) model embedded in the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). To reduce the GCM biases, the GCM climatological means and the variances of interannual variations are adjusted against the National Centers for Environmental Prediction-NCAR global reanalysis products (NNRP) before they are used to drive WRF, following our previous method. In this study, we further introduce spectral nudging to reduce the RCM-based biases. Two sets of WRF experiments are performed, with and without spectral nudging. All WRF experiments are otherwise identical except that the initial and lateral boundary conditions are derived from the NNRP, the original GCM output, and the bias-corrected GCM output, respectively. The GCM-driven RCM simulations with bias corrections and spectral nudging (IDDng) are compared with those without spectral nudging (IDD) and with North American Regional Reanalysis (NARR) data to assess the additional reduction in RCM biases relative to the IDD approach. The results show that spectral nudging introduces the effect of GCM bias correction into the RCM domain, thereby minimizing the climate drift resulting from the RCM biases. The GCM bias corrections and spectral nudging significantly improve the downscaled mean climate and extreme temperature simulations. Our results suggest that both GCM bias correction and spectral nudging are necessary to reduce the error of the downscaled climate; applying only one of them does not guarantee a better downscaling simulation. The new dynamical downscaling method can be applied to regional projection of future climate or downscaling of GCM sensitivity simulations. (Table captions: "Annual mean RMSEs", computed over the verification region from monthly mean data over 1981-2010; "Experimental design".)
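The mean-and-variance adjustment of the GCM fields described above can be sketched in a few lines: shift the GCM climatological mean to the reanalysis mean and rescale the interannual variability. The toy series and target statistics below are illustrative, not values from the study.

```python
# Mean/variance bias adjustment of a GCM series against reanalysis targets:
# x' = m_rnl + (x - m_gcm) * s_rnl / s_gcm

import statistics

def adjust(gcm_series, rnl_mean, rnl_std):
    """Adjust a GCM time series to reanalysis mean and standard deviation."""
    m = statistics.mean(gcm_series)
    s = statistics.stdev(gcm_series)
    return [rnl_mean + (x - m) * rnl_std / s for x in gcm_series]

gcm = [14.0, 15.0, 16.0, 17.0, 18.0]   # e.g. annual-mean temperature (deg C)
corrected = adjust(gcm, rnl_mean=15.0, rnl_std=statistics.stdev(gcm) * 0.5)
print([round(x, 2) for x in corrected])
```

In the paper's setting this adjustment is applied field by field to the GCM output before it supplies WRF's initial and lateral boundary conditions.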
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
2017-01-01
This article offers an approach to examining differential item functioning (DIF) under its item response theory (IRT) treatment in the framework of confirmatory factor analysis (CFA). The approach is based on integrating IRT- and CFA-based testing of DIF and using bias-corrected bootstrap confidence intervals, with syntax code provided in Mplus.
Explanation of Two Anomalous Results in Statistical Mediation Analysis
ERIC Educational Resources Information Center
Fritz, Matthew S.; Taylor, Aaron B.; MacKinnon, David P.
2012-01-01
Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special…
Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island
NASA Astrophysics Data System (ADS)
Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.
2018-04-01
Rainfall is an element of climate that strongly influences the agricultural sector. Rainfall pattern and distribution largely determine the sustainability of agricultural activities. Therefore, information on rainfall is very useful for the agriculture sector and for farmers in anticipating possible extreme events, which often cause failures of agricultural production. This research aims to identify the biases in seasonal rainfall forecast products from ECMWF (European Centre for Medium-Range Weather Forecasts) and to build a transfer function that corrects the distribution biases, forming a new prediction model, using a quantile mapping approach. We apply this approach to the case of Bali Island and find that correcting the systematic biases of the model gives better results: the new prediction model improves upon the uncorrected forecasts. We find that, in general, the bias correction approach performs better during the rainy season than during the dry season.
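An empirical quantile-mapping transfer function of the kind described above can be sketched as follows: each forecast value is mapped to the observed value at the same empirical quantile. The short toy climatologies are illustrative only, not ECMWF or Bali data.

```python
# Empirical quantile mapping: map a model value to the observation at the
# same empirical quantile of the training climatologies.

from bisect import bisect_left

def quantile_map(value, model_sorted, obs_sorted):
    """Return the observed value at the model value's empirical quantile."""
    n = len(model_sorted)
    rank = bisect_left(model_sorted, value)
    q = min(rank, n - 1) / (n - 1)            # empirical quantile in [0, 1]
    idx = round(q * (len(obs_sorted) - 1))
    return obs_sorted[idx]

model = sorted([0.0, 2.0, 4.0, 8.0, 16.0])    # forecast climatology (mm)
obs = sorted([0.0, 1.0, 3.0, 9.0, 20.0])      # observed climatology (mm)
print([quantile_map(v, model, obs) for v in model])
```

Operational implementations typically interpolate between quantiles and fit the mapping separately per season, consistent with the seasonal differences the abstract reports.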
The Impact of Assimilation of GPM Clear Sky Radiance on HWRF Hurricane Track and Intensity Forecasts
NASA Astrophysics Data System (ADS)
Yu, C. L.; Pu, Z.
2016-12-01
The impact of GPM Microwave Imager (GMI) clear-sky radiances on hurricane forecasting is examined by ingesting GMI Level 1C recalibrated brightness temperatures into the NCEP Gridpoint Statistical Interpolation (GSI)-based ensemble-variational hybrid data assimilation system for the operational Hurricane Weather Research and Forecasting (HWRF) system. The GMI clear-sky radiances are compared with Community Radiative Transfer Model (CRTM) simulated radiances to closely examine the quality of the radiance observations. The quality check indicates the presence of bias in various channels. A static bias correction scheme, in which the bias correction coefficients for GMI data are estimated by regression on a sufficiently large sample of data representative of the observational bias in the regions of concern, is used to correct the observational bias in GMI clear-sky radiances. Forecast results with and without assimilation of GMI radiances are compared using hurricane cases from recent hurricane seasons (e.g., Hurricane Joaquin in 2015). Diagnostics of the data assimilation results show that the bias correction coefficients obtained from the regression method can correct the inherent biases in GMI radiance data, significantly reducing observational residuals. The removal of biases also allows more data to pass GSI quality control and hence to be assimilated into the model. Forecast results for Hurricane Joaquin demonstrate that the quality of the analysis from the data assimilation is sensitive to the bias correction, with positive impacts on the hurricane track forecast when systematic biases are removed from the radiance data. Details will be presented at the symposium.
NASA Astrophysics Data System (ADS)
Pérez-Ràfols, Ignasi; Font-Ribera, Andreu; Miralda-Escudé, Jordi; Blomqvist, Michael; Bird, Simeon; Busca, Nicolás; du Mas des Bourboux, Hélion; Mas-Ribas, Lluís; Noterdaeme, Pasquier; Petitjean, Patrick; Rich, James; Schneider, Donald P.
2018-01-01
We present a measurement of the damped Ly α absorber (DLA) mean bias from the cross-correlation of DLAs and the Ly α forest, updating earlier results of Font-Ribera et al. (2012) with the final Baryon Oscillations Spectroscopic Survey data release and an improved method to address continuum fitting corrections. Our cross-correlation is well fitted by linear theory with the standard ΛCDM model, with a DLA bias of bDLA = 1.99 ± 0.11; a more conservative analysis, which removes DLAs in the Ly β forest and uses only the cross-correlation at r > 10 h-1 Mpc, yields bDLA = 2.00 ± 0.19. This assumes the cosmological model from Planck Collaboration (2016) and the Ly α forest bias factors of Bautista et al. (2017) and includes only statistical errors obtained from bootstrap analysis. The main systematic errors arise from possible impurities and selection effects in the DLA catalogue and from uncertainties in the determination of the Ly α forest bias factors and a correction for effects of high column density absorbers. We find no dependence of the DLA bias on column density or redshift. The measured bias value corresponds to a host halo mass ∼4 × 1011 h-1 M⊙ if all DLAs were hosted in haloes of a similar mass. In a realistic model where host haloes over a broad mass range have a DLA cross-section Σ (M_h) ∝ M_h^{α } down to Mh > Mmin = 108.5 h-1 M⊙, we find that α > 1 is required to have bDLA > 1.7, implying a steeper relation or higher value of Mmin than is generally predicted in numerical simulations of galaxy formation.
Accounting for Selection Bias in Studies of Acute Cardiac Events.
Banack, Hailey R; Harper, Sam; Kaufman, Jay S
2018-06-01
In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability of censoring weights are a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse probability of censoring weighted model. In unweighted multivariable analyses the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the inverse probability of censoring weighted analyses, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Estimates from the inverse probability of censoring weighted model were substantially greater than unweighted, adjusted estimates across all risk factor categories. These results show the magnitude of selection bias due to pre-hospital mortality and its effect on estimates of the association between CVD risk factors and mortality. Moreover, the results highlight the utility of using this method to address a common form of bias in cardiovascular research. Copyright © 2018 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
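The weighting idea above can be sketched simply: subjects who survive to hospital are up-weighted by the inverse of their estimated probability of being observed, so the analysis sample stands in for the full cohort. The survival probabilities here are assumed inputs for illustration, not estimates from ARIC data.

```python
# Inverse-probability-of-censoring weighting (IPCW), minimal sketch.

def ipcw_risk(events, observed, p_uncensored):
    """Weighted event risk among observed (uncensored) subjects."""
    num = den = 0.0
    for e, o, p in zip(events, observed, p_uncensored):
        if o:                    # subject survived to hospital (observed)
            w = 1.0 / p          # inverse probability weight
            num += w * e
            den += w
    return num / den

events       = [1, 0, 1, 0]          # death during follow-up
observed     = [1, 1, 1, 0]          # survived to hospital
p_uncensored = [0.9, 0.9, 0.5, 0.5]  # assumed P(observed | covariates)
print(round(ipcw_risk(events, observed, p_uncensored), 3))
```

In practice the probabilities would come from a fitted model (e.g. logistic regression of pre-hospital survival on the covariates), and the weighted outcome model would replace the simple risk computed here.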
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael
2013-02-01
Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity - which is affected by climatic factors - affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and applications. The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data.
Correcting for temperature-related trapping bias is straightforward and enables population estimates to be more comparable. It may thus improve data interpretation in ecological, conservation and monitoring studies, and assist in better management and conservation of habitats and ecosystem services. Nevertheless, field ecologists should remain vigilant for other sources of bias.
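The exponential correction above amounts to standardizing each catch to a reference temperature. A minimal sketch, using the mean rate for maximum temperature reported in the abstract (r = 0.0863 per °C); the catch numbers and reference temperature are illustrative.

```python
# Temperature-bias correction of pitfall catches under the exponential
# model: catch scales as exp(r * T), so a catch at temperature T is
# standardized to T_ref by multiplying with exp(-r * (T - T_ref)).

import math

def correct_catch(catch, temp, t_ref, r=0.0863):
    """Adjust a pitfall catch to its expected value at t_ref."""
    return catch * math.exp(-r * (temp - t_ref))

# A catch of 50 beetles on a 25 degC day, standardized to 15 degC:
print(round(correct_catch(50, temp=25.0, t_ref=15.0), 1))
```

The same function with the minimum-temperature rate (r = 0.0497) would give a milder adjustment; which rate applies depends on the temperature record available for a given study.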
A groundwater data assimilation application study in the Heihe mid-reach
NASA Astrophysics Data System (ADS)
Ragettli, S.; Marti, B. S.; Wolfgang, K.; Li, N.
2017-12-01
The present work focuses on modelling of the groundwater flow in the mid-reach of the endorheic river Heihe in the Zhangye oasis (Gansu province) in arid north-west China. In order to optimise the water resources management in the oasis, reliable forecasts of groundwater level development under different management options and environmental boundary conditions have to be produced. To this end, groundwater flow is modelled with Modflow and coupled to an Ensemble Kalman Filter programmed in Matlab. The model is updated with monthly time steps, featuring perturbed boundary conditions to account for uncertainty in model forcing. Constant biases between model and observations have been corrected prior to updating and compared to model runs without bias correction. Different options for data assimilation (states and/or parameters), updating frequency, and measures against filter inbreeding (damping factor, covariance inflation, spatial localization) have been tested against each other. Results show a high dependency of the Ensemble Kalman filter performance on the selection of observations for data assimilation. For the present regional model, bias correction is necessary for a good filter performance. A combination of spatial localization and covariance inflation is further advisable to reduce filter inbreeding problems. Best performance is achieved if parameter updates are not large, an indication of good prior model calibration. Asynchronous updating of parameter values once every five years (with data of the past five years) combined with synchronous updating of the groundwater levels is better suited for this groundwater system, with constant or slowly changing parameter values, than synchronous updating of both groundwater levels and parameters at every time step applying a damping factor. The filter is not able to correct time lags of signals.
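The analysis step of an ensemble Kalman filter of the kind coupled to the groundwater model above, including the damping factor mentioned in the abstract, can be sketched for a scalar state observed directly (H = 1). The head values and observation are illustrative, not Heihe data.

```python
# Minimal ensemble Kalman filter analysis step for a scalar, directly
# observed state: each member is pulled toward the observation by a
# damped Kalman gain computed from the ensemble spread.

import statistics

def enkf_update(ensemble, obs, obs_var, damping=1.0):
    """Return the analysis ensemble after assimilating one observation."""
    var = statistics.variance(ensemble)      # forecast error variance
    gain = var / (var + obs_var)             # Kalman gain for H = 1
    return [x + damping * gain * (obs - x) for x in ensemble]

heads = [10.0, 11.0, 12.0, 13.0]             # forecast groundwater heads (m)
analysis = enkf_update(heads, obs=14.0, obs_var=1.0, damping=0.5)
print([round(h, 3) for h in analysis])
```

A full implementation would carry a state vector per member, perturb the observations, and apply localization and inflation as the study describes; this sketch shows only the damped update itself.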
The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.
2017-11-27
This work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and software settings such as subset size and spacing.
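The geometric idea above can be illustrated with toy 2-D gradient vectors: when the image gradients are poorly aligned with the motion direction, the mean |cos θ| drops and the uncertainty must be inflated. The reciprocal-of-alignment factor below is an illustrative reading of the paper's integral formulation, not its exact expression.

```python
# Alignment between image gradients and motion direction, and an
# uncertainty-inflation factor that grows as alignment decreases.

import math

def mean_alignment(gradients, motion):
    """Mean |cos(angle)| between image gradients and the motion direction."""
    mx, my = motion
    mnorm = math.hypot(mx, my)
    total = 0.0
    for gx, gy in gradients:
        gnorm = math.hypot(gx, gy)
        total += abs(gx * mx + gy * my) / (gnorm * mnorm)
    return total / len(gradients)

grads = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # toy subset gradients
c = mean_alignment(grads, motion=(1.0, 0.0))
inflation = 1.0 / c      # larger when gradients are orthogonal to motion
print(round(inflation, 3))
```

In the degenerate case where all gradients are orthogonal to the motion, the alignment goes to zero and the motion component is unobservable, which is precisely the ill-posedness the paper analyzes.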
Analysis and correction of gradient nonlinearity bias in ADC measurements
Malyarenko, Dariya I.; Ross, Brian D.; Chenevert, Thomas L.
2014-03-01
Gradient nonlinearity of MRI systems leads to spatially dependent b-values and consequently high non-uniformity errors (10-20%) in apparent diffusion coefficient (ADC) measurements over clinically relevant fields of view. This work seeks a practical correction procedure that effectively reduces the observed ADC bias for media of arbitrary anisotropy in the fewest measurements. An all-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. The correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Spatial dependence of the nonlinearity correction terms accounts for the bulk (75-95%) of the ADC bias for FA = 0.3-0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic media is achieved with non-lab-based diffusion gradients. Copyright © 2013 Wiley Periodicals, Inc.
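The core of the correction logic above is that gradient nonlinearity makes the actual b-value vary in space, so an ADC computed with the nominal b-value is biased by the ratio of actual to nominal b. A minimal voxel-wise sketch, with illustrative corrector values rather than a measured gradient-nonlinearity map:

```python
# Voxel-wise ADC bias correction: since ADC_meas = ln(S0/S) / b_nominal
# while ADC_true = ln(S0/S) / b_actual(r), dividing the measured map by
# the spatial corrector c(r) = b_actual(r) / b_nominal removes the bias.

def correct_adc(adc_measured, corrector):
    """Divide the measured ADC map by the spatial b-value corrector."""
    return [a / c for a, c in zip(adc_measured, corrector)]

adc_map = [1.0e-3, 1.0e-3, 1.0e-3]   # measured ADC (mm^2/s), uniform phantom
c_map   = [1.00, 1.10, 0.90]         # illustrative corrector at each voxel
corrected = correct_adc(adc_map, c_map)
```

In the paper's method the corrector comes from the gradient nonlinearity tensor rotated into the diffusion gradient frame and approximated by the b-matrix Euclidean norm; this sketch shows only the final voxel-wise division.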
Impact of bias-corrected reanalysis-derived lateral boundary conditions on WRF simulations
NASA Astrophysics Data System (ADS)
Moalafhi, Ditiro Benson; Sharma, Ashish; Evans, Jason Peter; Mehrotra, Rajeshwar; Rocheta, Eytan
2017-08-01
Lateral and lower boundary conditions derived from a suitable global reanalysis data set form the basis for deriving a dynamically consistent finer-resolution downscaled product for climate and hydrological assessment studies. A problem with this, however, is that systematic biases have been noted in the global reanalysis data sets that form these boundaries, biases which can be carried into the downscaled simulations, thereby reducing their accuracy or efficacy. In this work, three Weather Research and Forecasting (WRF) model downscaling experiments are undertaken to investigate the impact of bias correcting the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim (ERA-I) reanalysis atmospheric temperature and relative humidity using Atmospheric Infrared Sounder (AIRS) satellite data. The downscaling is performed over a domain centered over southern Africa between the years 2003 and 2012. The sample mean, as well as the mean and standard deviation at each grid cell for each variable, are used for bias correction. The resultant WRF simulations of near-surface temperature and precipitation are evaluated seasonally and annually against global gridded observational data sets and compared with the ERA-I reanalysis driving field. The study reveals inconsistencies between the impact of the bias correction prior to downscaling and the resultant model simulations after downscaling. Mean and standard deviation bias-corrected WRF simulations are, however, found to be marginally better than mean-only bias-corrected WRF simulations and raw ERA-I reanalysis-driven WRF simulations. Performances, however, differ when assessing different attributes in the downscaled field. This raises questions about the efficacy of the correction procedures adopted.
Near-Surface PM2.5 Concentrations Derived from Satellites, Simulation and Ground Monitors
NASA Astrophysics Data System (ADS)
van Donkelaar, A.; Martin, R.; Hsu, N. Y. C.; Kahn, R. A.; Levy, R. C.; Lyapustin, A.; Sayer, A. M.; Brauer, M.
2015-12-01
Exposure to fine particulate matter (PM2.5) is globally associated with 3.2 million premature deaths annually. Satellite retrievals of total column aerosol optical depth (AOD) from instruments such as MODIS, MISR and SeaWiFS are related to PM2.5 through local aerosol vertical profiles and optical properties. A globally applicable and geophysically-based AOD to PM2.5 relationship can be calculated from chemical transport model (CTM) simulations. This approach, while effective, ignores the wealth of ground monitoring data that exist in some regions of the world. We therefore use ground monitors to develop a geographically weighted regression (GWR) that predicts the residual bias in geophysically-based satellite-derived PM2.5. Predictors such as the AOD to PM2.5 relationship resolution, land cover type, and chemical composition are used to predict this bias, which can then be used to improve the initial PM2.5 estimates. This approach not only allows for direct bias correction, but also provides insight into factors biasing the initial CTM-derived AOD to PM2.5 relationship. Over North America, we find significant improvement in bias-corrected PM2.5 (r2=0.82 versus r2=0.62), with evidence that fine-scale variability in surface elevation and urban factors are major sources of error in the CTM-derived relationships. Agreement remains high (r2=0.78) even when a large fraction of ground monitors (70%) are withheld from the GWR, suggesting this technique may add value in regions with even sparse ground monitoring networks, and potentially worldwide.
RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.
Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang
2017-01-03
The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias on the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements.
Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang
2017-10-27
This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a proposed code correction model by estimating the BeiDou-satellite-induced biases as linear piece-wise functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm.
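The linear piece-wise modeling of elevation-dependent code bias described above can be sketched as interpolation between bias estimates at fixed elevation nodes. The node elevations and bias values below are illustrative, not the published BeiDou estimates.

```python
# Elevation-dependent code bias as a linear piece-wise function:
# interpolate the bias between node elevations, then subtract it from
# the raw pseudorange.

from bisect import bisect_right

NODES = [0.0, 30.0, 60.0, 90.0]     # elevation angle nodes (deg)
BIASES = [0.0, -0.3, -0.6, -0.8]    # illustrative code bias at each node (m)

def code_bias(elev):
    """Linearly interpolate the code bias at a given elevation angle."""
    i = min(bisect_right(NODES, elev), len(NODES) - 1)
    if i == 0:
        return BIASES[0]
    x0, x1 = NODES[i - 1], NODES[i]
    y0, y1 = BIASES[i - 1], BIASES[i]
    return y0 + (y1 - y0) * (elev - x0) / (x1 - x0)

def corrected_pseudorange(pseudorange, elev):
    """Apply the code bias correction to a raw pseudorange (m)."""
    return pseudorange - code_bias(elev)

print(round(code_bias(45.0), 3))
```

In the study this piece-wise model is fitted per satellite group and per frequency, and combined with a grid model for the near-field biases; the sketch covers only the elevation-dependent part.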
Derivation and correction of the Tsu-Esaki tunneling current formula
NASA Astrophysics Data System (ADS)
Bandara, K. M. S. V.; Coon, D. D.
1989-07-01
The theoretical basis of the Tsu-Esaki tunneling current formula [Appl. Phys. Lett. 22, 562 (1973)] is examined in detail and corrections are found. The starting point is an independent particle picture with fully antisymmetrized N-electron wave functions. Unitarity is used to resolve an orthonormality issue raised in earlier work. A new set of mutually consistent equations is derived for bias voltage, tunneling current, and electron densities in the emitter and collector. Corrections include a previously noted kinematic factor and a modification of emitter and collector Fermi levels. The magnitude of the corrections is illustrated numerically for the case of a resonant tunneling current-voltage characteristic.
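For orientation, the commonly quoted form of the uncorrected Tsu-Esaki current density, as it appears in the standard literature (quoted from the usual textbook form, not from this paper's corrected derivation), is:

```latex
J \;=\; \frac{e\, m^{*} k_{B} T}{2\pi^{2}\hbar^{3}}
\int_{0}^{\infty} \mathcal{T}(E_{z})\,
\ln\!\left[
  \frac{1 + \exp\!\big((E_{F} - E_{z})/k_{B}T\big)}
       {1 + \exp\!\big((E_{F} - E_{z} - eV)/k_{B}T\big)}
\right] \mathrm{d}E_{z}
```

where \(\mathcal{T}(E_{z})\) is the transmission coefficient at longitudinal energy \(E_{z}\), \(m^{*}\) the effective mass, \(E_{F}\) the emitter Fermi level, and \(V\) the bias voltage. The corrections derived in the work above (the kinematic factor and the modified emitter and collector Fermi levels) modify this expression.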
Malyarenko, Dariya I; Wilmes, Lisa J; Arlinghaus, Lori R; Jacobs, Michael A; Huang, Wei; Helmer, Karl G; Taouli, Bachir; Yankeelov, Thomas E; Newitt, David; Chenevert, Thomas L
2016-12-01
Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, -35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI.
NASA Astrophysics Data System (ADS)
Johnstone, Samuel; Hourigan, Jeremy; Gallagher, Christopher
2013-05-01
Heterogeneous concentrations of α-producing nuclides in apatite have been recognized through a variety of methods. The presence of zonation in apatite complicates both traditional α-ejection corrections and diffusive models, both of which operate under the assumption of homogeneous concentrations. In this work we develop a method for measuring radial concentration profiles of 238U and 232Th in apatite by laser ablation ICP-MS depth profiling. We then focus on one application of this method, removing bias introduced by applying inappropriate α-ejection corrections. Formal treatment of laser ablation ICP-MS depth profile calibration for apatite includes construction and calibration of matrix-matched standards and quantification of rates of elemental fractionation. From this we conclude that matrix-matched standards provide more robust monitors of fractionation rate and concentrations than doped silicate glass standards. We apply laser ablation ICP-MS depth profiling to apatites from three unknown populations and small, intact crystals of Durango fluorapatite. Accurate and reproducible Durango apatite dates suggest that prolonged exposure to laser drilling does not impact cooling ages. Intracrystalline concentrations vary by at least a factor of 2 in the majority of the samples analyzed, but concentration variation only exceeds 5x in 5 grains and 10x in 1 out of the 63 grains analyzed. Modeling of synthetic concentration profiles suggests that for concentration variations of 2x and 10x individual homogeneous versus zonation dependent α-ejection corrections could lead to age bias of >5% and >20%, respectively. However, models based on measured concentration profiles only generated biases exceeding 5% in 13 of the 63 cases modeled. Application of zonation dependent α-ejection corrections did not significantly reduce the age dispersion present in any of the populations studied. 
This suggests that factors beyond homogeneous α-ejection corrections are the dominant source of overdispersion in apatite (U-Th)/He cooling ages.
Marcos, Raül; Llasat, Ma Carmen; Quintana-Seguí, Pere; Turco, Marco
2018-01-01
In this paper, we have compared different bias correction methodologies to assess whether they could be advantageous for improving the performance of a seasonal prediction model for volume anomalies in the Boadella reservoir (northwestern Mediterranean). The bias correction adjustments have been applied on precipitation and temperature from the European Centre for Medium-Range Weather Forecasts System 4 (S4). We have used three bias correction strategies: two linear (mean bias correction, BC, and linear regression, LR) and one non-linear (Model Output Statistics analogs, MOS-analog). The results have been compared with climatology and persistence. The volume-anomaly model is a previously computed Multiple Linear Regression that ingests precipitation, temperature and in-flow anomaly data to simulate monthly volume anomalies. The potential utility for end-users has been assessed using economic value curve areas. We have studied the S4 hindcast period 1981-2010 for each month of the year and up to seven months ahead considering an ensemble of 15 members. We have shown that the MOS-analog and LR bias corrections can improve the original S4. The application to volume anomalies points towards the possibility of introducing bias correction methods as a tool to improve water resource seasonal forecasts in an end-user context of climate services. In particular, the MOS-analog approach generally gives better results than the other approaches in late autumn and early winter.
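The two linear strategies named above (BC and LR) can be sketched in a few lines. This is an illustrative Python sketch, not the authors' code; the function names and the least-squares fit over the hindcast period are assumptions:

```python
def mean_bias_correction(forecasts, obs_clim_mean, model_clim_mean):
    """Additive mean bias correction (BC): shift every forecast by the
    climatological mean error."""
    shift = obs_clim_mean - model_clim_mean
    return [f + shift for f in forecasts]

def fit_linear_regression(model_hindcast, observations):
    """Least-squares fit of observations on model output over the hindcast
    period, for the linear regression (LR) strategy."""
    n = len(model_hindcast)
    mx = sum(model_hindcast) / n
    my = sum(observations) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(model_hindcast, observations))
    sxx = sum((x - mx) ** 2 for x in model_hindcast)
    slope = sxy / sxx
    return slope, my - slope * mx

def linear_regression_correction(forecasts, slope, intercept):
    """Map raw forecasts through the fitted regression."""
    return [intercept + slope * f for f in forecasts]
```

The non-linear MOS-analog strategy would instead search the hindcast archive for the historical forecasts most similar to the current one and use their observed outcomes.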
NASA Astrophysics Data System (ADS)
Prasanna, V.
2018-01-01
This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulation results with respect to observation is discussed in detail. The non-linear statistical bias correction is a suitable bias correction method for climate change data because it is simple and does not add up artificial uncertainties to the impact assessment of climate change scenarios for climate change application studies (agricultural production changes) in the future. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the changing magnitude in future scenarios, varying from one model to the other. Two types of bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved bias correction method, a cumulative distribution function (CDF; Weibull distribution function)-based quantile-mapping algorithm. This study shows that the percentile-based quantile mapping method gives results similar to the CDF (Weibull)-based quantile mapping method, and both the methods are comparable. The bias correction is applied on temperature and precipitation variables for present climate and future projected data to make use of it in a simple statistical model to understand the future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project the changes in the agricultural yield over India from RCP4.5 and RCP8.5 scenarios. 
The results revealed a better convergence of model projections in the bias corrected data compared to the uncorrected data. The study can be extended to localized regional domains aimed at understanding the changes in agricultural productivity in the future with an agro-economy model or a simple statistical model. The statistical model indicated that the total food grain yield will increase over the Indian region in the future: by approximately 50 kg/ha under the RCP4.5 scenario and by approximately 90 kg/ha under the RCP8.5 scenario, from 2001 until the end of 2100. While there are many studies using bias correction techniques, this study applies bias correction to future climate scenario data from CMIP5 models and uses it with crop statistics to project future crop yield changes over the Indian region.
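The percentile-based quantile-mapping algorithm described above can be illustrated with an empirical-rank version. This is a hedged Python sketch, not the paper's implementation; the function name and the equal-sample-size simplification are assumptions:

```python
def quantile_map(x, model_clim, obs_clim):
    """Percentile-based quantile mapping (sketch): replace a model value with
    the observed value at the same empirical rank. Assumes the model and
    observed climatology samples have the same length."""
    m = sorted(model_clim)
    o = sorted(obs_clim)
    rank = sum(v <= x for v in m)           # empirical rank of x in model climatology
    idx = max(0, min(rank - 1, len(o) - 1)) # clamp to valid observed index
    return o[idx]
```

The CDF-based variant in the study would instead fit a Weibull distribution to each sample and map through the fitted CDFs, which behaves better in the extrapolated tails of future projections.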
NASA Astrophysics Data System (ADS)
Sorba, Robert; Sawicki, Marcin
2018-05-01
We perform spatially resolved, pixel-by-pixel Spectral Energy Distribution (SED) fitting on galaxies up to z ∼ 2.5 in the Hubble eXtreme Deep Field (XDF). Comparing stellar mass estimates from spatially resolved and spatially unresolved photometry we find that unresolved masses can be systematically underestimated by factors of up to 5. The ratio of the unresolved to resolved mass measurement depends on the galaxy's specific star formation rate (sSFR): at low sSFRs the bias is small, but above sSFR ∼ 10^-9.5 yr^-1 the discrepancy increases rapidly such that galaxies with sSFRs ∼ 10^-8 yr^-1 have unresolved mass estimates of only one-half to one-fifth of the resolved value. This result indicates that stellar masses estimated from spatially unresolved data sets need to be systematically corrected, in some cases by large amounts, and we provide an analytic prescription for applying this correction. We show that correcting stellar mass measurements for this bias changes the normalization and slope of the star-forming main sequence and reduces its intrinsic width; most dramatically, correcting for the mass bias increases the stellar mass density of the Universe at high redshift and can resolve the long-standing discrepancy between the directly measured cosmic SFR density at z ≳ 1 and that inferred from stellar mass densities (`the missing mass problem').
A new dynamical downscaling approach with GCM bias corrections and spectral nudging
NASA Astrophysics Data System (ADS)
Xu, Zhongfeng; Yang, Zong-Liang
2015-04-01
To improve confidence in regional projections of future climate, a new dynamical downscaling (NDD) approach with both general circulation model (GCM) bias corrections and spectral nudging is developed and assessed over North America. GCM biases are corrected by adjusting GCM climatological means and variances based on reanalysis data before the GCM output is used to drive a regional climate model (RCM). Spectral nudging is also applied to constrain RCM-based biases. Three sets of RCM experiments are integrated over a 31 year period. In the first set of experiments, the model configurations are identical except that the initial and lateral boundary conditions are derived from either the original GCM output, the bias-corrected GCM output, or the reanalysis data. The second set of experiments is the same as the first set except spectral nudging is applied. The third set of experiments includes two sensitivity runs with both GCM bias corrections and nudging where the nudging strength is progressively reduced. All RCM simulations are assessed against the North American Regional Reanalysis. The results show that NDD significantly improves the downscaled mean climate and climate variability relative to other GCM-driven RCM downscaling approaches in terms of climatological mean air temperature, geopotential height, wind vectors, and surface air temperature variability. In the NDD approach, spectral nudging introduces the effects of GCM bias corrections throughout the RCM domain rather than just limiting them to the initial and lateral boundary conditions, thereby minimizing climate drifts resulting from both the GCM and RCM biases.
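The GCM bias-correction step described here, adjusting climatological means and variances toward reanalysis, can be sketched as follows. This is a minimal illustration for a single grid-point series; a real implementation would stratify by season and apply the correction field-by-field (the function name is an assumption):

```python
from statistics import mean, pstdev

def correct_mean_variance(gcm_series, reanalysis_series):
    """Rescale a GCM series so its climatological mean and standard deviation
    match the reanalysis: x' = mean_rea + (x - mean_gcm) * (std_rea / std_gcm)."""
    mg, mr = mean(gcm_series), mean(reanalysis_series)
    sg, sr = pstdev(gcm_series), pstdev(reanalysis_series)
    return [mr + (x - mg) * (sr / sg) for x in gcm_series]
```

In the NDD approach the corrected fields then serve as initial and lateral boundary conditions for the RCM, with spectral nudging spreading their effect through the interior of the domain.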
Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao
2016-06-01
An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local optimum problem encountered with traditional particle swarm optimization in the process of estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and does not fall into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
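The abstract does not give the exact premature-convergence indicator or update rule, so the following Python sketch is purely illustrative: a normalized fitness spread serves as the indicator, and a linear rule maps it onto the inertia-weight range. Both formulas are hypothetical stand-ins, not the paper's:

```python
def premature_degree(fitnesses):
    """Swarm-diversity indicator in [0, 1]: 0 means all particles share the
    same fitness (premature convergence). Assumes positive fitness values;
    this exact formula is a hypothetical choice."""
    f_min, f_max = min(fitnesses), max(fitnesses)
    return 0.0 if f_max == f_min else (f_max - f_min) / f_max

def adaptive_inertia(w_max, w_min, degree):
    """Raise inertia when diversity is low, to push particles back toward
    global exploration; lower it when the swarm is still diverse, to refine.
    A hypothetical linear rule."""
    return w_max - (w_max - w_min) * degree
```

A converged swarm (degree near 0) thus gets the maximum inertia weight, counteracting the tendency of standard PSO to stall in a local optimum of the Legendre-polynomial parameter space.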
Fletcher, E; Carmichael, O; Decarli, C
2012-01-01
We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.
Fletcher, E.; Carmichael, O.; DeCarli, C.
2013-01-01
We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843
NASA Astrophysics Data System (ADS)
Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.
2018-05-01
Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. 
Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.
Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method
NASA Astrophysics Data System (ADS)
Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil
2014-05-01
Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR) led by UN ESCWA, CORDEX RCM projections for the Middle East and North Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates showing pronounced dry and wet seasons. Results show that the regional climate models simulate too low temperatures and often have a displaced rainfall band compared to WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the analysis of the influence of the dry-frequency bias (i.e. climate models simulating too few rain days) on the bias-corrected projections and on the modification of the climate change signal by the DBS method.
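The quantile-mapping core of DBS can be illustrated with a parametric sketch. DBS typically fits gamma distributions to precipitation; to keep this sketch dependency-free it substitutes one-parameter exponential fits, an assumption under which the mapping collapses to simple linear scaling (a gamma fit would also reshape higher moments):

```python
import math

def exp_quantile_map(x, model_mean, obs_mean):
    """Parametric quantile mapping: push x through the fitted model CDF, then
    through the inverse of the fitted observed CDF. Exponential fits stand in
    for the gamma distributions DBS would normally use."""
    p = 1.0 - math.exp(-x / model_mean)     # model CDF evaluated at x
    return -obs_mean * math.log(1.0 - p)    # observed inverse CDF at p
```

Not shown is the conditional temperature correction, which in DBS depends on the wet/dry state of the climate model on each day.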
Causes of model dry and warm bias over central U.S. and impact on climate projections.
Lin, Yanluan; Dong, Wenhao; Zhang, Minghua; Xie, Yuanyu; Xue, Wei; Huang, Jianbin; Luo, Yong
2017-10-12
Climate models show a conspicuous summer warm and dry bias over the central United States. Using results from 19 climate models in the Coupled Model Intercomparison Project Phase 5 (CMIP5), we report a persistent dependence of the warm bias on the dry bias, with the precipitation deficit leading the warm bias over this region. The precipitation deficit is associated with the widespread failure of models in capturing strong rainfall events in summer over the central U.S. A robust linear relationship between the projected warming and the present-day warm bias enables us to empirically correct future temperature projections. By the end of the 21st century under the RCP8.5 scenario, the corrections substantially narrow the intermodel spread of the projections and reduce the projected temperature by 2.5 K, resulting mainly from the removal of the warm bias. Instead of a sharp decrease, after this correction the projected precipitation is nearly neutral for all scenarios.

Climate models repeatedly show a warm and dry bias over the central United States, but the origin of this bias remains unclear. Here the authors associate this bias with precipitation deficits in models; after applying a correction, projected precipitation in this region shows no significant changes.
Bias correction of daily satellite precipitation data using genetic algorithm
NASA Astrophysics Data System (ADS)
Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.
2018-05-01
Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) was produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process aimed to reduce the bias of CHIRP. However, biases of CHIRPS in statistical moments and quantile values remained high during the wet season over Java Island. This paper presents a bias correction scheme to adjust the statistical moments of CHIRP using observed precipitation data. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results were evaluated across different seasons and elevation levels. The experimental results revealed that the scheme robustly reduced the bias in variance (around 100% reduction) and led to reductions in the first and second quantile biases. However, the bias in the third quantile was only reduced during dry months. Across elevation levels, the performance of the bias correction process differed significantly only in the skewness indicator.
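The nonlinear power transformation at the heart of this scheme maps precipitation as a·P^b, with the genetic algorithm searching for parameters that make the corrected series match observed statistics. A minimal Python sketch, with a grid search standing in for the GA; the function names and the mean/variance matching criterion are assumptions:

```python
from statistics import mean, pvariance

def fit_power_transform(sat, obs, b_grid=None):
    """Fit a*P**b: for each candidate exponent b, choose a so the corrected
    mean matches the observed mean, then keep the b whose corrected variance
    is closest to the observed variance. A crude stand-in for the GA search."""
    if b_grid is None:
        b_grid = [0.5 + 0.05 * i for i in range(31)]  # b in 0.5 .. 2.0
    best = None
    for b in b_grid:
        powered = [p ** b for p in sat]
        a = mean(obs) / mean(powered)          # match the mean exactly
        corrected = [a * v for v in powered]
        err = abs(pvariance(corrected) - pvariance(obs))
        if best is None or err < best[0]:
            best = (err, a, b)
    return best[1], best[2]
```

A real GA explores (a, b) jointly with crossover and mutation, which matters when the fitness surface has multiple local minima.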
Effect of Malmquist bias on correlation studies with IRAS data base
NASA Technical Reports Server (NTRS)
Verter, Frances
1993-01-01
The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. The linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B) as well as ratios of these quantities. The linear correlations are corrected for Malmquist bias using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method invented here in which each galaxy observation is weighted by its sampling volume. The results of the correlations and regressions are significantly changed in the anticipated sense that the corrected correlation confidences are lower and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias removes the nonlinear rise in luminosity that has caused some authors to hypothesize additional components of FIR emission.
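Both corrections amount to weighted statistics: correlations weighted by the inverse of each galaxy's sampling volume, regressions by the volume itself. A dependency-free Python sketch; the Euclidean V_max ∝ d_max³ volume and the function names are illustrative assumptions, not the paper's formulas:

```python
def inverse_volume_weights(d_max_list):
    """1/V_max weights for correlation tests: a galaxy detectable out to
    d_max samples a volume proportional to d_max**3 (Euclidean assumption)."""
    return [1.0 / (d ** 3) for d in d_max_list]

def weighted_slope(x, y, w):
    """Weighted least-squares slope, usable with either weighting scheme."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    return num / den
```

For a flux-limited sample, down-weighting the intrinsically luminous galaxies visible over large volumes is what removes the spurious luminosity-luminosity trend.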
NASA Astrophysics Data System (ADS)
Beria, H.; Nanda, T., Sr.; Chatterjee, C.
2015-12-01
High resolution satellite precipitation products such as Tropical Rainfall Measuring Mission (TRMM), Climate Forecast System Reanalysis (CFSR), European Centre for Medium-Range Weather Forecasts (ECMWF), etc., offer a promising alternative to flood forecasting in data scarce regions. At the current state of the art, these products cannot be used in raw form for flood forecasting, even at smaller lead times. In the current study, these precipitation products are bias corrected using statistical techniques, such as additive and multiplicative bias corrections, and wavelet multi-resolution analysis (MRA), with the India Meteorological Department (IMD) gridded precipitation product obtained from gauge-based rainfall estimates. Neural network based rainfall-runoff modeling using these bias corrected products provides encouraging results for flood forecasting up to 48 hours lead time. We will present various statistical and graphical interpretations of catchment response to high rainfall events using both the raw and bias corrected precipitation products at different lead times.
Improved Correction of Misclassification Bias With Bootstrap Imputation.
van Walraven, Carl
2018-07-01
Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias and requires only code sensitivity and specificity, but may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. Prevalence of severe renal failure in 86 patient strata and its association with 43 covariates was determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
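The classic QBA prevalence correction, and its "invalid result" failure mode, follow directly from the sensitivity/specificity algebra: observed prevalence = sens·p + (1 − spec)·(1 − p), solved for the true prevalence p. A Python sketch (the function name and the None convention for invalid results are mine):

```python
def qba_corrected_prevalence(p_obs, sens, spec):
    """Solve the misclassification equation for true prevalence:
    p = (p_obs + spec - 1) / (sens + spec - 1).
    Returns None when the estimate falls outside [0, 1] - the invalid-result
    failure mode the study reports for QBA."""
    p = (p_obs + spec - 1.0) / (sens + spec - 1.0)
    return p if 0.0 <= p <= 1.0 else None
```

With the study's code accuracy (sensitivity 71.3%, specificity 96.2%), any stratum whose observed coded prevalence drops below 1 − spec = 3.8% yields a negative, hence invalid, corrected estimate.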
Detection rates of geckos in visual surveys: Turning confounding variables into useful knowledge
Lardner, Bjorn; Rodda, Gordon H.; Yackel Adams, Amy A.; Savidge, Julie A.; Reed, Robert N.
2016-01-01
Transect surveys without some means of estimating detection probabilities generate population size indices prone to bias because survey conditions differ in time and space. Knowing what causes such bias can help guide the collection of relevant survey covariates, correct the survey data, anticipate situations where bias might be unacceptably large, and elucidate the ecology of target species. We used negative binomial regression to evaluate confounding variables for gecko (primarily Hemidactylus frenatus and Lepidodactylus lugubris) counts on 220-m-long transects surveyed at night, primarily for snakes, on 9,475 occasions. Searchers differed in gecko detection rates by up to a factor of six. The worst and best headlamps differed by a factor of at least two. Strong winds had a negative effect potentially as large as those of searchers or headlamps. More geckos were seen during wet weather conditions, but the effect size was small. Compared with a detection nadir during waxing gibbous (nearly full) moons above the horizon, we saw 28% more geckos during waning crescent moons below the horizon. A sine function suggested that we saw 24% more geckos at the end of the wet season than at the end of the dry season. Fluctuations on a longer timescale also were verified. Disturbingly, corrected data exhibited strong short-term fluctuations that covariates apparently failed to capture. Although some biases can be addressed with measured covariates, others will be difficult to eliminate as a significant source of error in long-term monitoring programs.
Tang, Jian; Jiang, Xiaoliang
2017-01-01
Image segmentation has always been a considerable challenge in image analysis and understanding due to the intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is local entropy derived from a grey level distribution of local image. The means of this objective function have a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used. Therefore, our model can estimate the bias field more accurately. Finally, minimization of this energy function with a level set regularization term, image segmentation, and bias field estimation can be achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2017-07-01
Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages especially in the simulation of particularly low and high downscaled precipitation amounts.
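The disaggregation step and the rank-based refinement sketched in this abstract can be illustrated as follows. This is an interpretation of the abstract, not the authors' code; the field layout and function names are assumptions:

```python
def bcsd_disaggregate(coarse_value, anomaly_field):
    """BCSD spatial disaggregation: spread a bias-corrected coarse value over
    fine cells with a multiplicative anomaly field (the ratio of fine-scale
    climatology to coarse climatology)."""
    return [coarse_value * a for a in anomaly_field]

def rank_anomaly_field(coarse_value, coarse_clim, anomaly_fields):
    """Rank-based variant (as sketched in the abstract): instead of one mean
    anomaly field, pick the historical anomaly field whose climatological rank
    matches the rank of the current coarse value, preserving the temporal
    variation that a mean field masks."""
    rank = sum(v <= coarse_value for v in coarse_clim)
    idx = min(rank, len(anomaly_fields) - 1)
    return anomaly_fields[idx]
```

A mass-preserving scaling correction, as the study recommends, would afterwards rescale the fine cells so their mean reproduces the coarse value exactly.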
NASA Technical Reports Server (NTRS)
Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard
2013-01-01
Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8° spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreement among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
Müller, Jörg M; Furniss, Tilman
2013-11-30
The often-reported low informant agreement about child psychopathology between multiple informants has led to various suggestions about how to address discrepant ratings. Among the factors discussed as potentially lowering agreement are informant credibility, reliability, and psychopathology, the last of which is of interest in this paper. We tested three different models, namely the accuracy model, the distortion model, and an integrated, so-called combined model, which conceptualize parental ratings used to assess child psychopathology. The data comprise ratings of child psychopathology from multiple informants (mother, therapist and kindergarten teacher) and ratings of maternal psychopathology. The children were patients in a preschool psychiatry unit (N=247). The results from structural equation modeling show that maternal ratings of child psychopathology were biased by maternal psychopathology (distortion model). On this statistical background, we suggest a method to adjust biased maternal ratings. We illustrate the maternal bias by comparing the ratings of mothers to expert ratings (combined kindergarten teacher and therapist ratings) and show that the correction equation increases the agreement between maternal and expert ratings. We conclude that this approach may help to reduce misclassification of preschool children as 'clinical' on the basis of biased maternal ratings.
NASA Astrophysics Data System (ADS)
Rushi, B. R.; Ellenburg, W. L.; Adams, E. C.; Flores, A.; Limaye, A. S.; Valdés-Pineda, R.; Roy, T.; Valdés, J. B.; Mithieu, F.; Omondi, S.
2017-12-01
SERVIR, a joint NASA-USAID initiative, works to build capacity in Earth observation technologies in developing countries for improved environmental decision making in the areas of weather and climate, water and disasters, food security, and land use/land cover. SERVIR partners with leading regional organizations in Eastern and Southern Africa, the Hindu Kush-Himalaya, the Mekong region, and West Africa to achieve its objectives. SERVIR develops hydrological applications to address specific needs articulated by key stakeholders, and daily rainfall estimates are a vital input for these applications. Satellite precipitation products (SPPs) are subject to systematic biases that need to be corrected before they can be used for any hydrologic application such as real-time or seasonal forecasting. SERVIR and the SWAAT team at the University of Arizona have co-developed an open-source, user-friendly tool implementing rainfall bias correction approaches for SPPs. The bias correction tools were developed based on Linear Scaling and Quantile Mapping techniques. A set of SPPs, such as PERSIANN-CCS, TMPA-RT, and CMORPH, are bias corrected using Climate Hazards Group InfraRed Precipitation with Station (CHIRPS) data, which incorporates ground-based precipitation observations. The tool also contains a component to improve the monthly mean of CHIRPS using precipitation products of the Global Surface Summary of the Day (GSOD) database developed by the National Climatic Data Center (NCDC). The tool takes input from the command line, which makes it applicable on any operating platform without prior programming skills. This presentation will focus on this bias-correction tool for SPPs, including application scenarios.
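Of the two techniques named, linear scaling is the simpler: per calendar month, rescale the satellite estimate so its mean matches the reference. A minimal sketch on synthetic data (not the SERVIR/SWAAT tool itself; names and numbers are invented):

```python
import numpy as np

def linear_scaling(sat, ref, months):
    """Multiplicative linear scaling: for each calendar month, rescale the
    satellite rainfall so its monthly mean matches the reference
    (e.g., CHIRPS)."""
    sat, ref = np.asarray(sat, float), np.asarray(ref, float)
    corrected = sat.copy()
    for m in np.unique(months):
        idx = months == m
        factor = ref[idx].mean() / max(sat[idx].mean(), 1e-9)
        corrected[idx] = sat[idx] * factor
    return corrected
```

By construction, the corrected product reproduces the reference monthly means exactly, while daily variability (and its errors) is only rescaled, which is why quantile mapping is usually offered alongside it.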
Centrality categorization for Rp(d)+A in high-energy collisions
NASA Astrophysics Data System (ADS)
Adare, A.; Aidala, C.; Ajitanand, N. N.; Akiba, Y.; Al-Bataineh, H.; Alexander, J.; Angerami, A.; Aoki, K.; Apadula, N.; Aramaki, Y.; Atomssa, E. T.; Averbeck, R.; Awes, T. C.; Azmoun, B.; Babintsev, V.; Bai, M.; Baksay, G.; Baksay, L.; Barish, K. N.; Bassalleck, B.; Basye, A. T.; Bathe, S.; Baublis, V.; Baumann, C.; Bazilevsky, A.; Belikov, S.; Belmont, R.; Bennett, R.; Bhom, J. H.; Blau, D. S.; Bok, J. S.; Boyle, K.; Brooks, M. L.; Buesching, H.; Bumazhnov, V.; Bunce, G.; Butsyk, S.; Campbell, S.; Caringi, A.; Chen, C.-H.; Chi, C. Y.; Chiu, M.; Choi, I. J.; Choi, J. B.; Choudhury, R. K.; Christiansen, P.; Chujo, T.; Chung, P.; Chvala, O.; Cianciolo, V.; Citron, Z.; Cole, B. A.; Conesa Del Valle, Z.; Connors, M.; Csanád, M.; Csörgő, T.; Dahms, T.; Dairaku, S.; Danchev, I.; Das, K.; Datta, A.; David, G.; Dayananda, M. K.; Denisov, A.; Deshpande, A.; Desmond, E. J.; Dharmawardane, K. V.; Dietzsch, O.; Dion, A.; Donadelli, M.; Drapier, O.; Drees, A.; Drees, K. A.; Durham, J. M.; Durum, A.; Dutta, D.; D'Orazio, L.; Edwards, S.; Efremenko, Y. V.; Ellinghaus, F.; Engelmore, T.; Enokizono, A.; En'yo, H.; Esumi, S.; Fadem, B.; Fields, D. E.; Finger, M.; Finger, M.; Fleuret, F.; Fokin, S. L.; Fraenkel, Z.; Frantz, J. E.; Franz, A.; Frawley, A. D.; Fujiwara, K.; Fukao, Y.; Fusayasu, T.; Garishvili, I.; Glenn, A.; Gong, H.; Gonin, M.; Goto, Y.; Granier de Cassagnac, R.; Grau, N.; Greene, S. V.; Grim, G.; Grosse Perdekamp, M.; Gunji, T.; Gustafsson, H.-Å.; Haggerty, J. S.; Hahn, K. I.; Hamagaki, H.; Hamblen, J.; Han, R.; Hanks, J.; Haslum, E.; Hayano, R.; He, X.; Heffner, M.; Hemmick, T. K.; Hester, T.; Hill, J. C.; Hohlmann, M.; Holzmann, W.; Homma, K.; Hong, B.; Horaguchi, T.; Hornback, D.; Huang, S.; Ichihara, T.; Ichimiya, R.; Ikeda, Y.; Imai, K.; Inaba, M.; Isenhower, D.; Ishihara, M.; Issah, M.; Ivanischev, D.; Iwanaga, Y.; Jacak, B. V.; Jia, J.; Jiang, X.; Jin, J.; Johnson, B. M.; Jones, T.; Joo, K. S.; Jouan, D.; Jumper, D. S.; Kajihara, F.; Kamin, J.; Kang, J. 
H.; Kapustinsky, J.; Karatsu, K.; Kasai, M.; Kawall, D.; Kawashima, M.; Kazantsev, A. V.; Kempel, T.; Khanzadeev, A.; Kijima, K. M.; Kikuchi, J.; Kim, A.; Kim, B. I.; Kim, D. J.; Kim, E.-J.; Kim, Y.-J.; Kinney, E.; Kiss, Á.; Kistenev, E.; Kleinjan, D.; Kochenda, L.; Komkov, B.; Konno, M.; Koster, J.; Král, A.; Kravitz, A.; Kunde, G. J.; Kurita, K.; Kurosawa, M.; Kwon, Y.; Kyle, G. S.; Lacey, R.; Lai, Y. S.; Lajoie, J. G.; Lebedev, A.; Lee, D. M.; Lee, J.; Lee, K. B.; Lee, K. S.; Leitch, M. J.; Leite, M. A. L.; Li, X.; Lichtenwalner, P.; Liebing, P.; Linden Levy, L. A.; Liška, T.; Liu, H.; Liu, M. X.; Love, B.; Lynch, D.; Maguire, C. F.; Makdisi, Y. I.; Malik, M. D.; Manko, V. I.; Mannel, E.; Mao, Y.; Masui, H.; Matathias, F.; McCumber, M.; McGaughey, P. L.; McGlinchey, D.; Means, N.; Meredith, B.; Miake, Y.; Mibe, T.; Mignerey, A. C.; Miki, K.; Milov, A.; Mitchell, J. T.; Mohanty, A. K.; Moon, H. J.; Morino, Y.; Morreale, A.; Morrison, D. P.; Moukhanova, T. V.; Murakami, T.; Murata, J.; Nagamiya, S.; Nagle, J. L.; Naglis, M.; Nagy, M. I.; Nakagawa, I.; Nakamiya, Y.; Nakamura, K. R.; Nakamura, T.; Nakano, K.; Nam, S.; Newby, J.; Nguyen, M.; Nihashi, M.; Nouicer, R.; Nyanin, A. S.; Oakley, C.; O'Brien, E.; Oda, S. X.; Ogilvie, C. A.; Oka, M.; Okada, K.; Onuki, Y.; Orjuela Koop, J. D.; Oskarsson, A.; Ouchida, M.; Ozawa, K.; Pak, R.; Pantuev, V.; Papavassiliou, V.; Park, I. H.; Park, S. K.; Park, W. J.; Pate, S. F.; Pei, H.; Peng, J.-C.; Pereira, H.; Perepelitsa, D.; Peressounko, D. Yu.; Petti, R.; Pinkenburg, C.; Pisani, R. P.; Proissl, M.; Purschke, M. L.; Qu, H.; Rak, J.; Ravinovich, I.; Read, K. F.; Rembeczki, S.; Reygers, K.; Riabov, V.; Riabov, Y.; Richardson, E.; Roach, D.; Roche, G.; Rolnick, S. D.; Rosati, M.; Rosen, C. A.; Rosendahl, S. S. 
E.; Ružička, P.; Sahlmueller, B.; Saito, N.; Sakaguchi, T.; Sakashita, K.; Samsonov, V.; Sano, S.; Sato, T.; Sawada, S.; Sedgwick, K.; Seele, J.; Seidl, R.; Seto, R.; Sharma, D.; Shein, I.; Shibata, T.-A.; Shigaki, K.; Shimomura, M.; Shoji, K.; Shukla, P.; Sickles, A.; Silva, C. L.; Silvermyr, D.; Silvestre, C.; Sim, K. S.; Singh, B. K.; Singh, C. P.; Singh, V.; Slunečka, M.; Soltz, R. A.; Sondheim, W. E.; Sorensen, S. P.; Sourikova, I. V.; Stankus, P. W.; Stenlund, E.; Stoll, S. P.; Sugitate, T.; Sukhanov, A.; Sziklai, J.; Takagui, E. M.; Taketani, A.; Tanabe, R.; Tanaka, Y.; Taneja, S.; Tanida, K.; Tannenbaum, M. J.; Tarafdar, S.; Taranenko, A.; Themann, H.; Thomas, D.; Thomas, T. L.; Togawa, M.; Toia, A.; Tomášek, L.; Torii, H.; Towell, R. S.; Tserruya, I.; Tsuchimoto, Y.; Vale, C.; Valle, H.; van Hecke, H. W.; Vazquez-Zambrano, E.; Veicht, A.; Velkovska, J.; Vértesi, R.; Virius, M.; Vrba, V.; Vznuzdaev, E.; Wang, X. R.; Watanabe, D.; Watanabe, K.; Watanabe, Y.; Wei, F.; Wei, R.; Wessels, J.; White, S. N.; Winter, D.; Woody, C. L.; Wright, R. M.; Wysocki, M.; Yamaguchi, Y. L.; Yamaura, K.; Yang, R.; Yanovich, A.; Ying, J.; Yokkaichi, S.; You, Z.; Young, G. R.; Younus, I.; Yushmanov, I. E.; Zajc, W. A.; Zhou, S.; Phenix Collaboration
2014-09-01
High-energy proton- and deuteron-nucleus collisions provide an excellent tool for studying a wide array of physics effects, including modifications of parton distribution functions in nuclei, gluon saturation, and color neutralization and hadronization in a nuclear environment, among others. All of these effects are expected to have a significant dependence on the size of the nuclear target and the impact parameter of the collision, also known as the collision centrality. In this article, we detail a method for determining centrality classes in p(d)+A collisions via cuts on the multiplicity at backward rapidity (i.e., the nucleus-going direction) and for determining systematic uncertainties in this procedure. For d+Au collisions at √sNN = 200 GeV we find that the connection to geometry is confirmed by measuring the fraction of events in which a neutron from the deuteron does not interact with the nucleus. As an application, we consider the nuclear modification factors Rp(d)+A, for which there is a bias in the measured centrality-dependent yields owing to autocorrelations between the process of interest and the backward-rapidity multiplicity. We determine the bias-correction factors within this framework. This method is further tested using the HIJING Monte Carlo generator. We find that for d+Au collisions at √sNN = 200 GeV, these bias corrections are small and vary by less than 5% (10%) up to pT = 10 (20) GeV/c. In contrast, for p+Pb collisions at √sNN = 5.02 TeV we find that these bias factors are an order of magnitude larger and strongly pT dependent, likely attributable to the larger effect of multiparton interactions.
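The percentile-cut idea can be sketched generically (this is not the PHENIX analysis code; the bin edges and the multiplicity model below are invented for illustration):

```python
import numpy as np

def centrality_labels(mult, fractions=(0.2, 0.4, 0.6, 1.0)):
    """Assign each event a centrality class from its backward-rapidity
    multiplicity: class 0 = top 20% of the multiplicity distribution
    (most central), class 3 = bottom 40% (most peripheral)."""
    mult = np.asarray(mult, float)
    ranks = mult.argsort().argsort() / (len(mult) - 1.0)  # 0 = lowest mult
    centrality = 1.0 - ranks                              # 0 = most central
    return np.minimum(np.digitize(centrality, fractions), len(fractions) - 1)
```

The real analysis then has to quantify how such cuts bias centrality-dependent yields, since the observable of interest and the multiplicity used for the cut are correlated event by event.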
Investigating bias in squared regression structure coefficients
Nimon, Kim F.; Zientek, Linda R.; Thompson, Bruce
2015-01-01
The importance of structure coefficients and analogs of regression weights for analysis within the general linear model (GLM) has been well documented. The purpose of this study was to investigate bias in squared structure coefficients in the context of multiple regression and to determine whether a formula that had been shown to correct for bias in squared Pearson correlation coefficients and coefficients of determination could be used to correct for bias in squared regression structure coefficients. Using data from a Monte Carlo simulation, this study found that squared regression structure coefficients corrected with Pratt's formula produced less biased estimates and may be more accurate and stable estimates of population squared regression structure coefficients than uncorrected estimates. While our findings are in line with prior literature that identified multicollinearity as a predictor of bias in squared regression structure coefficients but not in coefficients of determination, the findings from this study are unique in that the level of predictive power, number of predictors, and sample size were also observed to contribute to bias in squared regression structure coefficients. PMID:26217273
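The correction under investigation can be written down compactly. The sketch below uses Pratt's formula in the form usually cited for a squared (multiple) correlation from n cases and p predictors; whether the same form transfers to squared structure coefficients is precisely the question the study tests, so treat this as illustrative:

```python
def pratt_correction(r2, n, p):
    """Pratt's formula for correcting the positive bias of a squared
    (multiple) correlation R^2 estimated from n cases and p predictors."""
    r2, n, p = float(r2), float(n), float(p)
    return 1.0 - ((n - 3.0) * (1.0 - r2) / (n - p - 1.0)) * (
        1.0 + 2.0 * (1.0 - r2) / (n - p + 1.0))
```

The correction shrinks the sample value toward the population value, leaves a perfect R² of 1 untouched, and vanishes as n grows large.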
Mark L. Messonnier; John C. Bergstrom; Chrisopher M. Cornwell; R. Jeff Teasley; H. Ken Cordell
2000-01-01
Simple nonresponse and selection biases that may occur in survey research such as contingent valuation applications are discussed and tested. Correction mechanisms for these types of biases are demonstrated. Results indicate the importance of testing and correcting for unit and item nonresponse bias in contingent valuation survey data. When sample nonresponse and...
An empirical determination of the effects of sea state bias on Seasat altimetry
NASA Technical Reports Server (NTRS)
Born, G. H.; Richards, M. A.; Rosborough, G. W.
1982-01-01
A linear empirical model has been developed for the correction of sea state bias effects in Seasat altimeter altitude measurements, which are due to (1) electromagnetic bias, caused by the fact that ocean wave troughs reflect the altimeter signal more strongly than the crests, shifting the apparent mean sea level toward the wave troughs, and (2) an independent instrument-related bias resulting from the inability of height corrections applied in the ground processor to compensate for simplifying assumptions made for the processor aboard Seasat. After applying appropriate corrections to the altimetry data, an empirical model for the sea state bias is obtained by differencing the height and significant wave height measurements from coincident ground tracks and solving for the coefficient of a linear relationship between the height differences and the wave height differences that minimizes the residual height differences. In more than 50% of the 36 cases examined, the appropriate sea state bias correction was to subtract 7% of the value of significant wave height.
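The fitting step reduces to a one-parameter least-squares problem. A sketch with synthetic crossover differences (the 7% slope and the noise level here are invented illustration, not the Seasat values):

```python
import numpy as np

def sea_state_bias_coeff(dh, dswh):
    """No-intercept least squares for b in dh = b * dswh: height
    differences at coincident ground tracks regressed on significant-
    wave-height differences."""
    dh, dswh = np.asarray(dh, float), np.asarray(dswh, float)
    return float(dswh @ dh / (dswh @ dswh))

rng = np.random.default_rng(0)
dswh = rng.normal(0.0, 1.5, 400)                 # SWH differences (m)
dh = -0.07 * dswh + rng.normal(0.0, 0.03, 400)   # bias ~7% of SWH + noise
b = sea_state_bias_coeff(dh, dswh)               # recovers about -0.07
```

The corrected height is then h - b * swh, i.e. the fitted percentage of significant wave height is subtracted from each altitude measurement.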
Non-Gaussian bias: insights from discrete density peaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch
2013-09-01
Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case of standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) scheme in Taiyuan indicated that nonresponse bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing-data mechanism, this study proposes a two-stage method that handles both mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent-variable values were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation was conducted for the MAR values using the complete data; then a sample selection model was used to correct for NMAR in the multiply imputed data, and a multi-factor analysis model was established. Based on 1000 resampling runs, the best scheme for filling the randomly missing values at the observed missing proportion was the predictive mean matching (PMM) method. With this optimal scheme, the two-stage analysis was conducted. It was found that the influencing factors on annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication, and acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively address nonresponse bias and selection bias in the dependent variable of survey data.
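The MAR-filling stage can be illustrated with a bare-bones predictive mean matching routine. This is a sketch only: it uses a single nearest donor, whereas practical PMM implementations draw randomly among the k nearest donors and perturb the regression coefficients, and the sample-selection stage for NMAR values is omitted entirely.

```python
import numpy as np

def pmm_impute(y, x):
    """Predictive mean matching (simplest form): regress y on x over the
    complete cases, then fill each missing y with the observed y of the
    donor whose predicted value is closest."""
    miss = np.isnan(y)
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X[~miss], y[~miss], rcond=None)
    pred = X @ beta
    donors = np.flatnonzero(~miss)
    out = y.copy()
    for i in np.flatnonzero(miss):
        j = donors[np.argmin(np.abs(pred[donors] - pred[i]))]
        out[i] = y[j]
    return out
```

Because every imputed value is an actually observed donor value, PMM cannot produce impossible values (e.g., negative expenditures), which is one reason it is a popular default.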
Meta-analysis of the effect of road work zones on crash occurrence.
Theofilatos, Athanasios; Ziakopoulos, Apostolos; Papadimitriou, Eleonora; Yannis, George; Diamandouros, Konstantinos
2017-11-01
There is strong evidence that work zones pose increased risk of crashes and injuries. The two risk factors most commonly associated with increased crash frequencies are work zone duration and length. However, relevant research on the topic is relatively limited. For that reason, this paper presents formal meta-analyses of studies that have estimated the relationship between the number of crashes and work zone duration and length, in order to provide overall estimates of those effects on crash frequencies. All studies considered are crash prediction models with similar specifications. According to the meta-analyses, and after correcting for publication bias where appropriate, the summary estimates of the regression coefficients were 0.1703 for duration and 0.862 for length. The effect was significant for length but not for duration, although the overall estimate for duration was significant before the publication-bias correction. Separate meta-analyses of the studies examining both duration and length were also carried out to obtain rough estimates of the combined effects; these gave 0.953 for duration and 0.847 for length. As in previous meta-analyses, the effect of duration after correcting for publication bias was not significant, while the effect of length was significant at the 95% level. Meta-regression findings indicate that the main factors influencing the overall coefficient estimates are study year and region for duration, and study year and model specification for length. Copyright © 2017 Elsevier Ltd. All rights reserved.
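A formal meta-analysis of this kind pools the per-study regression coefficients with inverse-variance weights. A minimal random-effects (DerSimonian-Laird) sketch with invented inputs, not the paper's data:

```python
import numpy as np

def dersimonian_laird(beta, se):
    """Random-effects pooling of per-study coefficients beta with
    standard errors se (DerSimonian-Laird tau^2)."""
    beta, se = np.asarray(beta, float), np.asarray(se, float)
    w = 1.0 / se**2
    fixed = np.sum(w * beta) / np.sum(w)
    Q = np.sum(w * (beta - fixed)**2)          # heterogeneity statistic
    k = len(beta)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    wr = 1.0 / (se**2 + tau2)                  # random-effects weights
    pooled = np.sum(wr * beta) / np.sum(wr)
    return pooled, np.sqrt(1.0 / np.sum(wr))
```

Publication-bias corrections such as trim-and-fill would then adjust the study set before this pooling step.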
Analysis of Developmental Data: Comparison Among Alternative Methods
ERIC Educational Resources Information Center
Wilson, Ronald S.
1975-01-01
To examine the ability of the correction factor epsilon to counteract statistical bias in univariate analysis, an analysis of variance (adjusted by epsilon) and a multivariate analysis of variance were performed on the same data. The results indicated that univariate analysis is a fully protected design when used with epsilon. (JMB)
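For reference, the Greenhouse-Geisser form of the epsilon correction factor can be computed directly from the covariance matrix of the repeated measures. A generic sketch (not the study's code; the contrast construction is one standard choice):

```python
import numpy as np

def greenhouse_geisser_epsilon(S):
    """Greenhouse-Geisser epsilon from the k x k covariance matrix S of
    the repeated measures, via orthonormal contrasts orthogonal to the
    constant vector."""
    k = S.shape[0]
    A = np.column_stack([np.ones(k), np.eye(k)[:, : k - 1]])
    Q, _ = np.linalg.qr(A)
    C = Q[:, 1:].T                   # (k-1) x k, orthonormal rows, sum 0
    M = C @ S @ C.T
    return float(np.trace(M) ** 2 / ((k - 1) * np.sum(M * M)))
```

Epsilon equals 1 when sphericity holds (e.g., a spherical covariance matrix) and is bounded below by 1/(k-1); the ANOVA degrees of freedom are multiplied by it.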
Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling
NASA Astrophysics Data System (ADS)
Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang
2018-04-01
Digital Elevation Model (DEM) data are one of the most important factors determining the simulation accuracy of hydraulic models. However, currently available global topographic data face limitations for application in 2-D hydraulic modeling, mainly the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. First, we employ the global vegetation-corrected DEM (i.e., Bare-Earth DEM), developed from the SRTM DEM using both vegetation height and the SRTM vegetation signal. Then, a newly released DEM that removes both vegetation bias and random errors (i.e., Multi-Error Removed DEM) is employed to overcome the limitation of height errors. Finally, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficient spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of the Google Earth platform and remote sensing imagery; and (c) removing the positive biases of raised segments in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in the Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error Removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of the four DEMs, and favorable results have been obtained on the corrected DEM.
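Step (c), removing raised segments along a stream, has a particularly compact core if the profile is processed in downstream order: clamp every elevation to the running minimum. This running-minimum clamp is our illustrative reading of the step, not necessarily the authors' exact slope-based rule.

```python
import numpy as np

def remove_raised_segments(profile):
    """Clamp a downstream-ordered stream elevation profile to its running
    minimum so elevations never rise in the flow direction, restoring
    hydraulic connectivity."""
    return np.minimum.accumulate(np.asarray(profile, float))
```

For example, `remove_raised_segments([10, 9, 11, 8])` returns `[10, 9, 9, 8]`: the raised 11 m cell, which would block the flow path, is lowered to the upstream minimum of 9 m.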
Yao, Ning; Li, Yi; Li, Na; Yang, Daqing; Ayantobo, Olusola Olaitan
2018-10-15
The accuracy of gauge-measured precipitation (Pm) affects drought assessment, since drought severity changes when precipitation bias is corrected. This research investigates how drought severity changes as a result of bias-corrected precipitation (Pc), using the Erinc index Im and the standardized precipitation evapotranspiration index (SPEI). Daily and monthly Pc values at 552 sites in China were determined using daily Pm and wind speed and air temperature data over 1961-2015. Pc-based Im values were generally larger than Pm-based Im for most sub-regions in China. The increased Pc and Pc-based Im values indicated wetter climate conditions than previously reported for China. After precipitation bias correction, climate types changed at some sites, e.g., 20 sites from severe-arid to arid, and 11 sites from arid to semi-arid. However, the changes in SPEI were less pronounced, because the standardized index removes the effects of mean precipitation values. In conclusion, precipitation bias in different sub-regions of China changed the spatial and temporal characteristics of drought assessment. Copyright © 2018 Elsevier B.V. All rights reserved.
Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C
2011-09-01
Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
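The bias itself is easy to reproduce by simulation. The numbers below are invented for illustration; the snippet demonstrates the problem the paper addresses, not the authors' correction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 20000, 0.2
g = rng.binomial(2, 0.3, n).astype(float)     # additive genotype (0/1/2)
y = beta * g + rng.normal(0.0, 1.0, n)        # quantitative trait

def ols_slope(y, g):
    gc = g - g.mean()
    return float(gc @ (y - y.mean()) / (gc @ gc))

lo, hi = np.quantile(y, [0.1, 0.9])
extreme = (y <= lo) | (y >= hi)               # two-tail extreme selection
full_est = ols_slope(y, g)                    # close to the true 0.2
sel_est = ols_slope(y[extreme], g[extreme])   # markedly inflated
```

Regressing the trait on genotype within the extreme-selected sample inflates the slope severalfold, which is exactly why a correction such as the one presented here is needed.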
Streamflow Bias Correction for Climate Change Impact Studies: Harmless Correction or Wrecking Ball?
NASA Astrophysics Data System (ADS)
Nijssen, B.; Chegwidden, O.
2017-12-01
Projections of the hydrologic impacts of climate change rely on a modeling chain that includes estimates of future greenhouse gas emissions, global climate models, and hydrologic models. The resulting streamflow time series are used in turn as input to impact studies. While these flows can sometimes be used directly in these impact studies, many applications require additional post-processing to remove model errors. Water resources models and regulation studies are a prime example of this type of application. These models rely on specific flows and reservoir levels to trigger reservoir releases and diversions and do not function well if the unregulated streamflow inputs are significantly biased in time and/or amount. This post-processing step is typically referred to as bias-correction, even though this step corrects not just the mean but the entire distribution of flows. Various quantile-mapping approaches have been developed that adjust the modeled flows to match a reference distribution for some historic period. Simulations of future flows are then post-processed using this same mapping to remove hydrologic model errors. These streamflow bias-correction methods have received far less scrutiny than the downscaling and bias-correction methods that are used for climate model output, mostly because they are less widely used. However, some of these methods introduce large artifacts in the resulting flow series, in some cases severely distorting the climate change signal that is present in future flows. In this presentation, we discuss our experience with streamflow bias-correction methods as part of a climate change impact study in the Columbia River basin in the Pacific Northwest region of the United States. To support this discussion, we present a novel way to assess whether a streamflow bias-correction method is merely a harmless correction or is more akin to taking a wrecking ball to the climate change signal.
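One simple version of such an assessment: apply the historically trained mapping to both periods and compare the relative change signal before and after correction. The synthetic gamma-distributed "flows", the function and all numbers below are invented for illustration; with a purely multiplicative bias the signal survives almost intact, and cases where this comparison breaks down are the "wrecking ball" cases.

```python
import numpy as np

def quantile_map(sim_hist, ref, x):
    """Empirical quantile mapping trained on the historical period."""
    q = np.searchsorted(np.sort(sim_hist), x) / len(sim_hist)
    return np.quantile(ref, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(2)
ref      = rng.gamma(2.0, 50.0, 8000)       # reference daily flows
sim_hist = rng.gamma(2.0, 80.0, 8000)       # biased model, historical
sim_fut  = rng.gamma(2.0, 104.0, 8000)      # +30% mean-flow signal

raw_change = sim_fut.mean() / sim_hist.mean() - 1.0
bc_change = (quantile_map(sim_hist, ref, sim_fut).mean()
             / quantile_map(sim_hist, ref, sim_hist).mean() - 1.0)
```

A large gap between `raw_change` and `bc_change` would flag that the correction is distorting the climate change signal rather than merely removing model bias (here, only mild tail clipping occurs where future flows exceed the historical maximum).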
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make to, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage the application of online correction, as suggested by Danforth and Kalnay, for the mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast.
This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
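The core idea, using the time-mean analysis increment over the forecast interval as an estimate of the bias tendency and then applying it as a forcing term, fits in a toy scalar "model" (all numbers invented; a real implementation works on full 3-D state fields):

```python
import numpy as np

# Toy scalar "model" with a constant tendency error: the truth grows at
# rate a, the model at a + b (a, b, dt are illustrative numbers only).
a, b, dt = 0.5, 0.2, 0.25            # true rate, model bias, 6-hr step (days)
increments = []
x_truth = x_model = 1.0
for _ in range(40):                  # analysis cycles
    x_truth += a * dt                            # truth evolves
    x_fcst = x_model + (a + b) * dt              # biased 6-hr forecast
    increments.append(x_truth - x_fcst)          # analysis increment
    x_model = x_truth                            # analysis resets the state

bias_tendency = np.mean(increments) / dt         # estimates -b
corrected_rate = (a + b) + bias_tendency         # online-corrected tendency
```

Adding the mean-increment forcing recovers the true tendency exactly in this idealized, perfectly linear setting; in practice the estimate is averaged seasonally and diurnally, as the abstract describes.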
Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee
2013-01-01
Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold-standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fit for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, Pearson's r increased from 0.86 to 0.92 with the bias field correction. 
Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536
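The first of the two algorithms, standard FCM, is compact enough to sketch. This is a generic textbook implementation on 1-D intensities, not the study's code, and the bias-field-estimating CLIC step is not shown:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternate membership and centroid updates
    minimizing sum_ik u_ik^m ||x_i - v_k||^2."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float).reshape(len(X), -1)
    U = rng.dirichlet(np.ones(c), size=len(X))       # random memberships
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]     # weighted centroids
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
        U = (1.0 / d2) ** (1.0 / (m - 1))            # membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, V
```

A multiplicative bias field smoothly shifts intensities across the image, which is exactly what breaks this intensity-only clustering and motivates estimating the field jointly with the segmentation, as CLIC does.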
Correcting Memory Improves Accuracy of Predicted Task Duration
ERIC Educational Resources Information Center
Roy, Michael M.; Mitten, Scott T.; Christenfeld, Nicholas J. S.
2008-01-01
People are often inaccurate in predicting task duration. The memory bias explanation holds that this error is due to people having incorrect memories of how long previous tasks have taken, and these biased memories cause biased predictions. Therefore, the authors examined the effect on increasing predictive accuracy of correcting memory through…
NASA Astrophysics Data System (ADS)
Khaleghi, Mohammad Reza; Varvani, Javad
2018-02-01
The complex and variable nature of river sediment yield causes many problems in estimating long-term sediment yield and sediment input into reservoirs. Sediment rating curves (SRCs) are generally used to estimate the suspended sediment load of rivers and drainage watersheds. Since the regression equations of SRCs are obtained by logarithmic retransformation and contain few independent variables, they overestimate or underestimate the true sediment load of the rivers. To evaluate bias correction factors in the Kalshor and Kashafroud watersheds, seven hydrometric stations of this region with suitable upstream watersheds and spatial distribution were selected. Investigation of the accuracy index (the ratio of estimated to observed sediment yield) and the precision index of the FAO, Quasi-Maximum Likelihood Estimator (QMLE), smearing, and Minimum-Variance Unbiased Estimator (MVUE) bias correction factors with the LSD test showed that the FAO coefficient increases the estimation error at all of the stations. Application of MVUE to the linear and mean-load rating curves had no statistically significant effect. The QMLE and smearing factors increased the estimation error for the mean-load rating curve but had no effect on the linear rating curve estimates.
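Two of the factors compared have one-line definitions. A synthetic sketch (invented rating-curve coefficients; QMLE assumes lognormal errors, while Duan's smearing factor is the mean of the exponentiated natural-log residuals):

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 2000, 0.6
logQ = rng.uniform(0.0, 3.0, n)                          # ln water discharge
logQs = -1.0 + 1.5 * logQ + rng.normal(0.0, sigma, n)    # ln sediment load
Qs = np.exp(logQs)

# OLS rating curve in log space, then naive (biased-low) back-transform
A = np.column_stack([np.ones(n), logQ])
coef, *_ = np.linalg.lstsq(A, logQs, rcond=None)
resid = logQs - A @ coef
naive = np.exp(A @ coef)

qmle_factor = np.exp(resid.var(ddof=2) / 2.0)   # parametric correction
smear_factor = np.mean(np.exp(resid))           # Duan's smearing estimator
```

Multiplying the naive back-transformed loads by either factor compensates for the systematic underestimation introduced by retransforming a log-log regression.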
A multi-source precipitation approach to fill gaps over a radar precipitation field
NASA Astrophysics Data System (ADS)
Tesfagiorgis, K. B.; Mahani, S. E.; Khanbilvardi, R.
2012-12-01
Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to spatial limitations of radar and gauge products. The present work develops an approach to seamlessly blend satellite, radar, climatological and gauge precipitation products to fill gaps over ground-based radar precipitation fields. To mix different precipitation products, the biases of the products relative to one another must first be removed. For bias correction, the study used an ensemble-based method which aims to estimate spatially varying multiplicative biases in SPEs using a radar rainfall product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area. Spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. A weighted Successive Correction Method (SCM) is proposed to merge the error-corrected satellite and radar rainfall estimates. In addition to SCM, we use a Bayesian spatial method for merging the gap-free radar with rain gauges, climatological rainfall sources and SPEs. We demonstrate the method using the Hydro-Estimator (HE) SPE, the radar-based Stage-II product, the climatological product PRISM, and rain gauge data for several rain events from 2006 to 2008 over three different geographical locations of the United States. Results show that the SCM method, in combination with the Bayesian spatial model, produces a precipitation product in good agreement with independent measurements. The study implies that, using the available radar pixels surrounding the gap area together with rain gauge, PRISM and satellite products, a radar-like product is achievable over radar gap areas, which benefits the scientific community.
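The multiplicative bias estimation step described here can be sketched with synthetic collocated "radar" and "satellite" values; all numbers and names are illustrative assumptions, and the spatial interpolation of bias fields and the SCM merging are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic collocated rainy pixels: treat "radar" as the reference and give
# the "satellite" estimate a multiplicative bias of about 1.5 plus noise.
radar = rng.gamma(2.0, 5.0, 500)
satellite = 1.5 * radar * rng.lognormal(0.0, 0.2, radar.size)

# Randomly sample rainy pixels and compute per-pixel bias factors
# (reference / biased estimate), as in the sampling step described above.
idx = rng.choice(radar.size, 100, replace=False)
bias_factors = radar[idx] / satellite[idx]

# Apply the mean bias factor; in the full method a spatial field of factors
# (accounting for spatial variation and random error) replaces this scalar.
correction = bias_factors.mean()
satellite_corrected = satellite * correction
```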
Characteristics of Four-years of GOSAT/TANSO-FTS TIR V1.0 CO2 and CH4 Products
NASA Astrophysics Data System (ADS)
Saitoh, N.; Kimoto, S.; Sugimura, R.; Imasu, R.; Shiomi, K.; Kuze, A.; Kataoka, F.; Knuteson, R. O.; Machida, T.; Sawa, Y.; Matsuda, H.
2015-12-01
Greenhouse Gases Observing Satellite (GOSAT) was launched on 23 January 2009, and has continued to make global observations, including both nadir and off-nadir measurements, for more than six years since its launch. The thermal infrared (TIR) band of the Thermal and Near-infrared Sensor for Carbon Observation Fourier Transform Spectrometer (TANSO-FTS) on board GOSAT has observed CO2 and CH4 profiles. We have analyzed the four years of data from 2010 through 2013 of the latest released version of the TIR Level 2 (L2) CO2 and CH4 products (V1.0). Comparisons of the TIR upper-atmospheric CO2 product with CO2 data from Comprehensive Observation Network for Trace Gases by Airliner (CONTRAIL) aircraft measurements show that the growth rate estimated from the TIR CO2 data is slightly lower than that from the CONTRAIL data. Overall, the TIR V1.0 CO2 product has better quality in the upper troposphere and lower stratosphere than the a priori, judging from comparisons with the collocated aircraft data. In spring and summer, however, the quality of the TIR L2 CO2 products was slightly worse than in the other seasons, especially in the low and northern-mid latitudes. This is because the corresponding a priori had a larger bias, and the TIR Level 1B (L1B) radiance spectra might also have a larger bias, in the spring-summer seasons. Here, we have tested several types of correction methods to modify the L1B spectral bias, and then compared CO2 and CH4 concentrations retrieved after applying spectral bias correction factors with coincident CONTRAIL and HIAPER Pole-to-Pole Observation (HIPPO) aircraft data. The comparison results suggest that the L1B spectral bias correction factor should vary with wavelength. In addition, it should be expressed as a function of the on-board internal calibration blackbody temperatures, because these temperatures are weakly season-dependent; they were clearly lower in spring and summer.
Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich
2011-12-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
Bias correction of surface downwelling longwave and shortwave radiation for the EWEMBI dataset
NASA Astrophysics Data System (ADS)
Lange, Stefan
2018-05-01
Many meteorological forcing datasets include bias-corrected surface downwelling longwave and shortwave radiation (rlds and rsds). Methods used for such bias corrections range from multi-year monthly mean value scaling to quantile mapping at the daily timescale. An additional downscaling is necessary if the data to be corrected have a higher spatial resolution than the observational data used to determine the biases. This was the case when EartH2Observe (E2OBS; Calton et al., 2016) rlds and rsds were bias-corrected using more coarsely resolved Surface Radiation Budget (SRB; Stackhouse Jr. et al., 2011) data for the production of the meteorological forcing dataset EWEMBI (Lange, 2016). This article systematically compares various parametric quantile mapping methods designed specifically for this purpose, including those used for the production of EWEMBI rlds and rsds. The methods vary in the timescale at which they operate, in their way of accounting for physical upper radiation limits, and in their approach to bridging the spatial resolution gap between E2OBS and SRB. It is shown how temporal and spatial variability deflation related to bilinear interpolation and other deterministic downscaling approaches can be overcome by downscaling the target statistics of quantile mapping from the SRB to the E2OBS grid such that the sub-SRB-grid-scale spatial variability present in the original E2OBS data is retained. Cross validations at the daily and monthly timescales reveal that it is worthwhile to take empirical estimates of physical upper limits into account when adjusting either radiation component and that, overall, bias correction at the daily timescale is more effective than bias correction at the monthly timescale if sampling errors are taken into account.
Explanation of Two Anomalous Results in Statistical Mediation Analysis.
Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P
2012-01-01
Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
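The bias-corrected bootstrap test discussed here can be sketched as follows. The data generation, sample size, and path values a and b are invented for illustration, and this is a bare-bones version of the procedure rather than the authors' implementation.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)
n = 200

# Simulated mediation data: X -> M -> Y with paths a and b (values invented).
a_true, b_true = 0.4, 0.3
X = rng.normal(size=n)
M = a_true * X + rng.normal(size=n)
Y = b_true * M + rng.normal(size=n)

def indirect_effect(X, M, Y):
    """OLS estimate of the mediated effect a*b: a from regressing M on X,
    b from regressing Y on M while controlling for X."""
    a = np.polyfit(X, M, 1)[0]
    design = np.column_stack([np.ones_like(X), M, X])
    coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
    return a * coef[1]

est = indirect_effect(X, M, Y)

# Bootstrap distribution of the indirect effect.
B = 2000
boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(X[idx], M[idx], Y[idx])

# Bias-corrected (BC) interval: shift the percentile endpoints by z0, the
# normal quantile of the share of bootstrap estimates below the point estimate.
nd = NormalDist()
z0 = nd.inv_cdf(float(np.mean(boot < est)))
alpha = 0.05
lo_p = nd.cdf(2 * z0 + nd.inv_cdf(alpha / 2))
hi_p = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha / 2))
ci = (np.quantile(boot, lo_p), np.quantile(boot, hi_p))
```

The z0 shift is exactly the "bias correction" whose small-sample behavior the simulations above interrogate.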
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas
2018-03-01
The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here CDF-t has been applied over the period 1950-2099 combining Historical runs and climate change scenarios for six variables: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed, which are critical variables for agricultural purposes. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders. It includes simulated yield using a crop model simulating maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs using WATCH Forcing Data as the reference dataset. The impact of WFD, WFDEI, and also EWEMBI reference datasets has been also examined in detail. It is shown that CDF-t is very effective at removing the biases and reducing the high inter-GCM scattering. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact in terms of simulated maize yields. Projections of future yields over West Africa are quite different, depending on the bias-correction method used. However all these projections show a similar relative decreasing trend over the 21st century.
A New Source Biasing Approach in ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevill, Aaron M; Mosher, Scott W
2012-01-01
The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF-format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF-format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing.
A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source biasing method, including the application of w̄.
NASA Astrophysics Data System (ADS)
Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.
2015-12-01
Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and the non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme based on a large regional climate model ensemble generated by the distributed weather@home setup[1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties.
The present study represents a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
A brain MRI bias field correction method created in the Gaussian multi-scale space
NASA Astrophysics Data System (ADS)
Chen, Mingsheng; Qin, Mingxin
2017-07-01
A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias-field-corrected MR image is obtained after a gamma (γ) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software demonstrated the superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
Using a bias aware EnKF to account for unresolved structure in an unsaturated zone model
NASA Astrophysics Data System (ADS)
Erdal, D.; Neuweiler, I.; Wollschläger, U.
2014-01-01
When predicting flow in the unsaturated zone, any method for modeling the flow will have to define how, and to what level, the subsurface structure is resolved. In this paper, we use the Ensemble Kalman Filter to assimilate local soil water content observations from both a synthetic layered lysimeter and a real field experiment in layered soil in an unsaturated water flow model. We investigate the use of colored noise bias corrections to account for unresolved subsurface layering in a homogeneous model and compare this approach with a fully resolved model. In both models, we use a simplified model parameterization in the Ensemble Kalman Filter. The results show that the use of bias corrections can increase the predictive capability of a simplified homogeneous flow model if the bias corrections are applied to the model states. If correct knowledge of the layering structure is available, the fully resolved model performs best. However, if no, or erroneous, layering is used in the model, the use of a homogeneous model with bias corrections can be the better choice for modeling the behavior of the system.
The Role of Response Bias in Perceptual Learning
2015-01-01
Sensory judgments improve with practice. Such perceptual learning is often thought to reflect an increase in perceptual sensitivity. However, it may also represent a decrease in response bias, with unpracticed observers acting in part on a priori hunches rather than sensory evidence. To examine whether this is the case, 55 observers practiced making a basic auditory judgment (yes/no amplitude-modulation detection or forced-choice frequency/amplitude discrimination) over multiple days. With all tasks, bias was present initially, but decreased with practice. Notably, this was the case even on supposedly “bias-free” 2-alternative forced-choice tasks. In those tasks, observers did not favor the same response throughout (stationary bias), but did favor whichever response had been correct on previous trials (nonstationary bias). Means of correcting for bias are described. When applied, these showed that at least 13% of perceptual learning on a forced-choice task was due to reduction in bias. In other situations, changes in bias were shown to obscure the true extent of learning, with changes in estimated sensitivity increasing once bias was corrected for. The possible causes of bias and the implications for our understanding of perceptual learning are discussed. PMID:25867609
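Separating sensitivity from response bias in a yes/no task is commonly done with the signal detection indices d' and criterion c. The sketch below uses a standard log-linear correction for extreme proportions and invented trial counts; it is not the specific bias-correction procedure applied in the study.

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal detection indices for a yes/no task: sensitivity d' and
    response bias (criterion c). A log-linear correction (+0.5 to each
    cell) keeps the z-scores finite when a proportion hits 0 or 1."""
    nd = NormalDist()
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = nd.inv_cdf(hit_rate), nd.inv_cdf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# A hypothetical observer with decent sensitivity but a liberal bias
# (answers "yes" too readily, inflating both hits and false alarms).
d, c = dprime_and_criterion(hits=45, misses=5,
                            false_alarms=20, correct_rejections=30)
```

A negative c indicates a liberal criterion, a positive c a conservative one; an unbiased observer has c near zero regardless of d'.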
Calibration of weak-lensing shear in the Kilo-Degree Survey
NASA Astrophysics Data System (ADS)
Fenech Conti, I.; Herbonnet, R.; Hoekstra, H.; Merten, J.; Miller, L.; Viola, M.
2017-05-01
We describe and test the pipeline used to measure the weak-lensing shear signal from the Kilo-Degree Survey (KiDS). It includes a novel method of 'self-calibration' that partially corrects for the effect of noise bias. We also discuss the 'weight bias' that may arise in optimally weighted measurements, and present a scheme to mitigate that bias. To study the residual biases arising from both galaxy selection and shear measurement, and to derive an empirical correction to reduce the shear biases to ≲1 per cent, we create a suite of simulated images whose properties are close to those of the KiDS survey observations. We find that the use of 'self-calibration' reduces the additive and multiplicative shear biases significantly, although further correction via a calibration scheme is required, which also corrects for a dependence of the bias on galaxy properties. We find that the calibration relation itself is biased by the use of noisy, measured galaxy properties, which may limit the final accuracy that can be achieved. We assess the accuracy of the calibration in the tomographic bins used for the KiDS cosmic shear analysis, testing in particular the effect of possible variations in the uncertain distributions of galaxy size, magnitude and ellipticity, and conclude that the calibration procedure is accurate at the level of multiplicative bias ≲1 per cent required for the KiDS cosmic shear analysis.
Empirical Validation of a Procedure to Correct Position and Stimulus Biases in Matching-to-Sample
ERIC Educational Resources Information Center
Kangas, Brian D.; Branch, Marc N.
2008-01-01
The development of position and stimulus biases often occurs during initial training on matching-to-sample tasks. Furthermore, without intervention, these biases can be maintained via intermittent reinforcement provided by matching-to-sample contingencies. The present study evaluated the effectiveness of a correction procedure designed to…
To Duc, Khanh
2017-11-18
Receiver operating characteristic (ROC) surface analysis is usually employed to assess the accuracy of a medical diagnostic test when there are three ordered disease statuses (e.g. non-diseased, intermediate, diseased). In practice, verification bias can occur due to missingness of the true disease status and can lead to distorted conclusions about diagnostic accuracy. In such situations, bias-corrected inference tools are required. This paper introduces an R package, named bcROCsurface, which provides utility functions for verification-bias-corrected ROC surface analysis. A Shiny web application implementing the verification-bias correction for ROC surface estimation has also been developed. bcROCsurface may become an important tool for the statistical evaluation of three-class diagnostic markers in the presence of verification bias. The R package, readme and example data are available on CRAN. The web interface enables users less familiar with R to evaluate the accuracy of diagnostic tests, and can be found at http://khanhtoduc.shinyapps.io/bcROCsurface_shiny/ .
On the Limitations of Variational Bias Correction
NASA Technical Reports Server (NTRS)
Moradi, Isaac; Mccarty, Will; Gelaro, Ronald
2018-01-01
Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation-minus-background departures to estimate the observation bias. This technique does not distinguish between background error, forward operator error, and observation error, so all of these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
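The regression at the heart of variational bias correction can be caricatured offline as a least-squares fit of O-B departures to a few bias predictors. The predictors, coefficients, and noise level below are invented for illustration; the comments note the limitation the abstract highlights.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Hypothetical bias predictors of the kind used in variational bias
# correction: a constant offset, an air-mass proxy, and a scan-angle proxy.
predictors = np.column_stack([
    np.ones(n),               # constant component
    rng.normal(1.5, 0.3, n),  # air-mass proxy
    rng.uniform(-1.0, 1.0, n),  # scan-angle proxy
])

# Simulated observation-minus-background (O-B) departures: a bias that is
# linear in the predictors plus random error. Nothing in the departures
# distinguishes observation, forward-operator, and background contributions,
# which is exactly the limitation discussed above.
true_beta = np.array([0.8, -0.4, 0.2])
omb = predictors @ true_beta + rng.normal(0.0, 0.5, n)

# Least-squares fit of the predictors to the departures, then correction.
beta_hat, *_ = np.linalg.lstsq(predictors, omb, rcond=None)
omb_corrected = omb - predictors @ beta_hat
```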
Dye bias correction in dual-labeled cDNA microarray gene expression measurements.
Rosenzweig, Barry A; Pine, P Scott; Domon, Olen E; Morris, Suzanne M; Chen, James J; Sistare, Frank D
2004-01-01
A significant limitation to the analytical accuracy and precision of dual-labeled spotted cDNA microarrays is the signal error due to dye bias. Transcript-dependent dye bias may be due to gene-specific differences of incorporation of two distinctly different chemical dyes and the resultant differential hybridization efficiencies of these two chemically different targets for the same probe. Several approaches were used to assess and minimize the effects of dye bias on fluorescent hybridization signals and maximize the experimental design efficiency of a cell culture experiment. Dye bias was measured at the individual transcript level within each batch of simultaneously processed arrays by replicate dual-labeled split-control sample hybridizations and accounted for a significant component of fluorescent signal differences. This transcript-dependent dye bias alone could introduce unacceptably high numbers of both false-positive and false-negative signals. We found that within a given set of concurrently processed hybridizations, the bias is remarkably consistent and therefore measurable and correctable. The additional microarrays and reagents required for paired technical replicate dye-swap corrections commonly performed to control for dye bias could be costly to end users. Incorporating split-control microarrays within a set of concurrently processed hybridizations to specifically measure dye bias can eliminate the need for technical dye swap replicates and reduce microarray and reagent costs while maintaining experimental accuracy and technical precision. These data support a practical and more efficient experimental design to measure and mathematically correct for dye bias. PMID:15033598
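The split-control idea can be sketched numerically: measure per-gene dye bias on a self-hybridization in which both channels carry the same sample, then subtract it from the experimental log-ratios. All data below are synthetic and the noise magnitudes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_genes = 300

# Transcript-dependent dye bias (log2 scale); magnitude is invented.
dye_bias = rng.normal(0.0, 0.3, n_genes)

# Split-control self-hybridization: the same sample in both channels, so
# the observed log-ratio is dye bias plus measurement noise.
self_hyb = dye_bias + rng.normal(0.0, 0.1, n_genes)

# Experimental array: true differential expression plus the same dye bias.
true_log_ratio = rng.normal(0.0, 1.0, n_genes)
observed = true_log_ratio + dye_bias + rng.normal(0.0, 0.1, n_genes)

# Correct each gene by subtracting the bias measured on the split control,
# replacing the paired dye-swap replicate the abstract mentions.
corrected = observed - self_hyb
```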
The covariance between hourly concentration (C) and deposition velocity (V) for various atmospheric species may act to bias the deposition (D) computed from the product of the weekly average C and V. This is a potential problem for the CASTNet filter pack (FP) species, nitric...
NASA Astrophysics Data System (ADS)
Inoue, Makoto; Morino, Isamu; Uchino, Osamu; Nakatsuru, Takahiro; Yoshida, Yukio; Yokota, Tatsuya; Wunch, Debra; Wennberg, Paul O.; Roehl, Coleen M.; Griffith, David W. T.; Velazco, Voltaire A.; Deutscher, Nicholas M.; Warneke, Thorsten; Notholt, Justus; Robinson, John; Sherlock, Vanessa; Hase, Frank; Blumenstock, Thomas; Rettinger, Markus; Sussmann, Ralf; Kyrö, Esko; Kivi, Rigel; Shiomi, Kei; Kawakami, Shuji; De Mazière, Martine; Arnold, Sabrina G.; Feist, Dietrich G.; Barrow, Erica A.; Barney, James; Dubey, Manvendra; Schneider, Matthias; Iraci, Laura T.; Podolske, James R.; Hillyard, Patrick W.; Machida, Toshinobu; Sawa, Yousuke; Tsuboi, Kazuhiro; Matsueda, Hidekazu; Sweeney, Colm; Tans, Pieter P.; Andrews, Arlyn E.; Biraud, Sebastien C.; Fukuyama, Yukio; Pittman, Jasna V.; Kort, Eric A.; Tanaka, Tomoaki
2016-08-01
We describe a method for removing systematic biases of column-averaged dry air mole fractions of CO2 (XCO2) and CH4 (XCH4) derived from short-wavelength infrared (SWIR) spectra of the Greenhouse gases Observing SATellite (GOSAT). We conduct correlation analyses between the GOSAT biases and simultaneously retrieved auxiliary parameters. We use these correlations to bias correct the GOSAT data, removing these spurious correlations. Data from the Total Carbon Column Observing Network (TCCON) were used as reference values for this regression analysis. To evaluate the effectiveness of this correction method, the uncorrected/corrected GOSAT data were compared to independent XCO2 and XCH4 data derived from aircraft measurements taken for the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project, the National Oceanic and Atmospheric Administration (NOAA), the US Department of Energy (DOE), the National Institute for Environmental Studies (NIES), the Japan Meteorological Agency (JMA), the HIAPER Pole-to-Pole observations (HIPPO) program, and the GOSAT validation aircraft observation campaign over Japan. These comparisons demonstrate that the empirically derived bias correction improves the agreement between GOSAT XCO2/XCH4 and the aircraft data. Finally, we present spatial distributions and temporal variations of the derived GOSAT biases.
Quantile Mapping Bias correction for daily precipitation over Vietnam in a regional climate model
NASA Astrophysics Data System (ADS)
Trinh, L. T.; Matsumoto, J.; Ngo-Duc, T.
2017-12-01
In the past decades, Regional Climate Models (RCMs) have been developed significantly, allowing climate simulation to be conducted at higher resolution. However, RCMs often contain biases when compared with observations. Therefore, statistical correction methods are commonly employed to reduce or minimize the model biases. In this study, outputs of the Regional Climate Model (RegCM) version 4.3 driven by the CNRM-CM5 global products were evaluated with and without the Quantile Mapping (QM) bias correction method. The model domain covered the area from 90°E to 145°E and from 15°S to 40°N with a horizontal resolution of 25 km. The QM bias correction was implemented using the Vietnam Gridded precipitation dataset (VnGP) and the outputs of the RegCM historical run for the period 1986-1995, and then validated for the period 1996-2005. Based on spatial correlation and intensity distribution statistics, the QM method showed a significant improvement in rainfall compared to the uncorrected output. The improvements in both time and space were recognized in all seasons and all climatic sub-regions of Vietnam. Moreover, not only the rainfall amount but also extreme indices such as R10mm, R20mm, R50mm, CDD, CWD, R95pTOT, and R99pTOT were much better reproduced after the correction. The results suggest that the QM correction method should be applied when projecting future precipitation over Vietnam.
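Empirical quantile mapping of the kind applied here can be sketched in a few lines: map each model value to the observed value at the same empirical quantile of the calibration period. The gamma parameters below are invented stand-ins for VnGP observations and RegCM output, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented daily precipitation for a calibration period: the "model"
# drizzles too often and too lightly relative to the "observations".
obs = rng.gamma(0.6, 12.0, 3650)    # stand-in for VnGP observations
model = rng.gamma(0.9, 5.0, 3650)   # stand-in for RegCM output

model_sorted = np.sort(model)
obs_sorted = np.sort(obs)

def qm_correct(x):
    """Map each value to the observed value at the same empirical quantile
    of the calibration period (empirical quantile mapping)."""
    q = np.searchsorted(model_sorted, x) / model_sorted.size
    return np.quantile(obs_sorted, np.clip(q, 0.0, 1.0))

corrected = qm_correct(model)
```

Applied to the calibration sample itself the mapping reproduces the observed distribution almost exactly; the real test, as in the study, is on an independent validation period.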
Fat fraction bias correction using T1 estimates and flip angle mapping.
Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A
2014-01-01
To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.
Correction of stream quality trends for the effects of laboratory measurement bias
Alexander, Richard B.; Smith, Richard A.; Schwarz, Gregory E.
1993-01-01
We present a statistical model relating measurements of water quality to associated errors in laboratory methods. Estimation of the model allows us to correct trends in water quality for long-term and short-term variations in laboratory measurement errors. An illustration of the bias correction method for a large national set of stream water quality and quality assurance data shows that reductions in the bias of estimates of water quality trend slopes are achieved at the expense of increases in the variance of these estimates. Slight improvements occur in the precision of estimates of trend in bias by using correlative information on bias and water quality to estimate random variations in measurement bias. The results of this investigation stress the need for reliable, long-term quality assurance data and efficient statistical methods to assess the effects of measurement errors on the detection of water quality trends.
Bias correction for magnetic resonance images via joint entropy regularization.
Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang
2014-01-01
Due to the imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms are proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruption, and also tested on real MR data. The test results show that the proposed methods can effectively remove the bias field and perform comparably to state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Simpson, I.
2015-12-01
A long-standing bias among global climate models (GCMs) is their incorrect representation of the wintertime circulation of the North Atlantic region. Specifically, models tend to exhibit a North Atlantic jet (and associated storm track) that is too zonal, extending across central Europe, when it should tilt northward toward Scandinavia. GCMs consistently predict substantial changes in the large-scale circulation in this region, consisting of a localized anti-cyclonic circulation centered over the Mediterranean, accompanied by increased aridity there and increased storminess over Northern Europe. Here, we present preliminary results from experiments designed to address the question of what impact the climatological circulation biases might have on this predicted future response. Climate change experiments will be compared in two versions of the Community Earth System Model: the first is a free-running version of the model, as typically used in climate prediction; the second is a bias-corrected version of the model in which a seasonally varying cycle of bias correction tendencies is applied to the wind and temperature fields. These bias correction tendencies are designed to account for deficiencies in the fast parameterized processes, with the aim of pushing the model toward a more realistic climatology. While these experiments come with the caveat that they assume the bias correction tendencies will remain constant with time, they allow for an initial assessment, through controlled experiments, of the impact that biases in the climatological circulation can have on future predictions in this region. They will also motivate future work that can make use of the bias correction tendencies to understand the underlying physical processes responsible for the incorrect tilt of the jet.
An Analysis of the Individual Effects of Sex Bias.
ERIC Educational Resources Information Center
Smith, Richard M.
Most attempts to correct for the presence of biased test items in a measurement instrument have been either to remove the items or to adjust the scores to correct for the bias. Using the Rasch Dichotomous Response Model and the independent ability estimates derived from three sets of items, those which favor females, those which favor males, and…
ERIC Educational Resources Information Center
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim
2014-01-01
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
The Detection and Correction of Bias in Student Ratings of Instruction.
ERIC Educational Resources Information Center
Haladyna, Thomas; Hess, Robert K.
1994-01-01
A Rasch model was used to detect and correct bias in Likert rating scales used to assess student perceptions of college teaching, using a database of ratings. Statistical corrections were significant, supporting the model's potential utility. Recommendations are made for a theoretical rationale and further research on the model. (Author/MSE)
A European-wide ²²²radon and ²²²radon progeny comparison study
NASA Astrophysics Data System (ADS)
Schmithüsen, Dominik; Chambers, Scott; Fischer, Bernd; Gilge, Stefan; Hatakka, Juha; Kazan, Victor; Neubert, Rolf; Paatero, Jussi; Ramonet, Michel; Schlosser, Clemens; Schmid, Sabine; Vermeulen, Alex; Levin, Ingeborg
2017-04-01
Although atmospheric ²²²radon (²²²Rn) activity concentration measurements are currently performed worldwide, they are being made by many different laboratories and with fundamentally different measurement principles, so compatibility issues can limit their utility for regional-to-global applications. Consequently, we conducted a European-wide ²²²Rn / ²²²Rn progeny comparison study in order to evaluate the different measurement systems in use, determine potential systematic biases between them, and estimate correction factors that could be applied to harmonize data for their use as a tracer in atmospheric applications. Two compact portable Heidelberg radon monitors (HRM) were moved around to run for at least 1 month at each of the nine European measurement stations included in this comparison. Linear regressions between parallel data sets were calculated, yielding correction factors relative to the HRM ranging from 0.68 to 1.45. A calibration bias between ANSTO (Australian Nuclear Science and Technology Organisation) two-filter radon monitors and the HRM of ANSTO / HRM = 1.11 ± 0.05 was found. Moreover, for the continental stations using one-filter systems that derive atmospheric ²²²Rn activity concentrations from measured atmospheric progeny activity concentrations, preliminary ²¹⁴Po / ²²²Rn disequilibrium values were also estimated. Mean station-specific disequilibrium values between 0.8 at mountain sites (e.g. Schauinsland) and 0.9 at non-mountain sites for sampling heights around 20 to 30 m above ground level were determined. The respective corrections for calibration biases and disequilibrium derived in this study need to be applied to obtain a compatible European atmospheric ²²²Rn data set for use in quantitative applications, such as regional model intercomparison and validation or trace gas flux estimates with the radon tracer method.
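The correction factors in this study come from linear regressions between parallel data sets. A minimal synthetic sketch of that harmonization step in Python (not the study's processing; the activity values, the 1.25 scale factor and the noise level are all hypothetical, and a regression through the origin is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
hrm = rng.gamma(3.0, 2.0, size=500)             # reference HRM activity (Bq m^-3)
station = 1.25 * hrm + rng.normal(0, 0.3, 500)  # station monitor with calibration bias

# Least-squares slope of a regression through the origin gives the
# station/HRM scale factor; dividing by it harmonizes the station data.
scale = np.sum(station * hrm) / np.sum(hrm * hrm)
harmonized = station / scale
```

The recovered `scale` plays the same role as the 0.68-1.45 correction factors reported relative to the HRM.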
NASA Technical Reports Server (NTRS)
Whiteman, D. N.; Cadirola, M.; Venable, D.; Calhoun, M.; Miloshevich, L; Vermeesch, K.; Twigg, L.; Dirisu, A.; Hurst, D.; Hall, E.;
2012-01-01
The MOHAVE-2009 campaign brought together diverse instrumentation for measuring atmospheric water vapor. We report on the participation of the ALVICE (Atmospheric Laboratory for Validation, Interagency Collaboration and Education) mobile laboratory in the MOHAVE-2009 campaign. In appendices we also report on the performance of the corrected Vaisala RS92 radiosonde measurements during the campaign, on a new radiosonde-based calibration algorithm that reduces the influence of atmospheric variability on the derived calibration constant, and on other results of the ALVICE deployment. The MOHAVE-2009 campaign permitted the participating Raman lidar systems to discover and address measurement biases in the upper troposphere and lower stratosphere. The ALVICE lidar system was found to possess a wet bias, which was attributed to fluorescence of insect material that was deposited on the telescope early in the mission. Other sources of wet biases are discussed and data from other Raman lidar systems are investigated, revealing that wet biases in upper tropospheric (UT) and lower stratospheric (LS) water vapor measurements appear to be quite common in Raman lidar systems. Lower stratospheric climatology of water vapor is investigated both as a means to check for the existence of these wet biases in Raman lidar data and as a source of correction for the bias. A correction technique is derived and applied to the ALVICE lidar water vapor profiles. Good agreement is found between corrected ALVICE lidar measurements and those of RS92, frost point hygrometer and total column water. The correction is offered as a general method to both quality control Raman water vapor lidar data and to correct those data that have signal-dependent bias. The influence of the correction is shown to be small at regions in the upper troposphere where recent work indicates detection of trends in atmospheric water vapor may be most robust.
The correction shown here holds promise for permitting useful upper tropospheric water vapor profiles to be consistently measured by Raman lidar within NDACC (Network for the Detection of Atmospheric Composition Change) and elsewhere, despite the prevalence of instrumental and atmospheric effects that can contaminate the very low signal-to-noise measurements in the UT.
NASA Astrophysics Data System (ADS)
Zhang, Baocheng; Liu, Teng; Yuan, Yunbin
2017-11-01
The integer ambiguity resolution enabled precise point positioning (PPP-RTK) has been proven advantageous in a wide range of applications. The realization of PPP-RTK concerns the isolation of satellite phase biases (SPBs) and other corrections from a network of Global Positioning System (GPS) reference receivers. This is generally based on Kalman filter in order to achieve real-time capability, in which proper modeling of the dynamics of various types of unknowns remains crucial. This paper seeks to gain insight into how to reasonably deal with the dynamic behavior of the estimable receiver phase biases (RPBs). Using dual-frequency GPS data collected at six colocated receivers over days 50-120 of 2015, we analyze the 30-s epoch-by-epoch estimates of L1 and wide-lane (WL) RPBs for each receiver pair. The dynamics observed in these estimates are a combined effect of three factors, namely the random measurement noise, the multipath and the ambient temperature. The first factor can be overcome by turning to a real-time filter and the second by considering the use of a sidereal filtering. The third factor has an effect only on the WL, and this effect appears to be linear. After accounting for these three factors, the low-pass-filtered, sidereal-filtered, epoch-by-epoch estimates of L1 RPBs follow a random walk process, whereas those of WL RPBs are constant over time. Properly modeling the dynamics of RPBs is vital, as it ensures the best convergence of the Kalman-filtered, between-satellite single-differenced SPB estimates to their correct values and, in turn, shortens the time-to-first-fix at user side.
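The finding that L1 receiver phase biases follow a random walk maps directly onto a scalar Kalman filter with random-walk dynamics. A minimal synthetic sketch (not the authors' network processing; the 30-s epoch structure, process noise q and measurement noise r below are all hypothetical):

```python
import numpy as np

def kalman_random_walk(z, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state x_k = x_{k-1} + w_k,
    observed as z_k = x_k + v_k, with Var(w) = q and Var(v) = r."""
    x, p = x0, p0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                    # predict: random-walk process noise
        kgain = p / (p + r)          # Kalman gain
        x = x + kgain * (zk - x)     # update with the new epoch estimate
        p = (1 - kgain) * p
        out[k] = x
    return out

rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0, 0.01, 2000))  # slowly drifting bias (cycles)
z = truth + rng.normal(0, 0.1, 2000)          # noisy epoch-by-epoch estimates
filtered = kalman_random_walk(z, q=1e-4, r=1e-2)
```

Setting q to zero would instead model a time-constant bias, the behavior found here for the wide-lane RPBs.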
Brief communication: Drought likelihood for East Africa
NASA Astrophysics Data System (ADS)
Yang, Hui; Huntingford, Chris
2018-02-01
The East Africa drought in the autumn of 2016 caused malnutrition, illness and death. Close to 16 million people across Somalia, Ethiopia and Kenya needed food, water and medical assistance. Many factors influence drought stress and response. However, inevitably the following question is asked: are elevated greenhouse gas concentrations altering the frequency of extreme rainfall deficits? We investigate this with general circulation models (GCMs). After GCM bias correction to match the climatological mean of the CHIRPS rainfall product, climate models project, by the end of the 21st century relative to the present day, small decreases in the probability of drought with the same (or worse) severity as the 2016 ASO (August to October) East African event. However, when the climatological variability of the GCMs is further adjusted to match the CHIRPS data, by additionally bias-correcting for variance, the probability of drought occurrence increases slightly over the same period.
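The two-stage adjustment described here, first matching the climatological mean and then also the variance, can be sketched as a simple rescaling. This is an illustrative sketch with synthetic numbers, not the study's CHIRPS-based procedure (the target mean and standard deviation below are hypothetical):

```python
import numpy as np

def mean_variance_correct(model, obs_mean, obs_std):
    """Rescale model output so that its mean and standard deviation
    match an observational climatology (e.g. a CHIRPS-derived one)."""
    z = (model - model.mean()) / model.std()   # standardize the model series
    return obs_mean + obs_std * z              # impose observed mean and spread

rng = np.random.default_rng(3)
gcm = rng.gamma(2.0, 3.0, size=1000)           # raw GCM seasonal rainfall (synthetic)
corrected = mean_variance_correct(gcm, obs_mean=5.0, obs_std=2.0)
```

One known caveat of this linear rescaling for rainfall is that it can produce small negative values, which is one reason quantile-based corrections are often preferred.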
Malyarenko, Dariya I; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K; Ross, Brian D; Chenevert, Thomas L
2015-12-01
Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from magnet isocenter. Our previously-described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted both in shift and broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to isocenter. Direction-average DW-bias correctors were computed based on the known gradient design provided by the vendor. The computed bias maps were empirically confirmed by coronal DWI measurements for an isotropic gel-flood phantom. Both phantom and renal tissue ADC bias for off-center measurements was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in the presence of IVIM perfusion. No significant bias impact was observed for IVIM perfusion fraction.
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate two bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
Efficient bias correction for magnetic resonance image denoising.
Mukherjee, Partha Sarathi; Qiu, Peihua
2013-05-30
Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
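For context, the conventional first-moment correction that this paper improves upon exploits the fact that for a Rician magnitude M with true intensity A and noise level σ, E[M²] = A² + 2σ². A synthetic sketch of that conventional correction (not the paper's regression/Monte-Carlo formula; the intensity and noise values are hypothetical):

```python
import numpy as np

def rician_correct(mag, sigma):
    """Conventional Rician bias correction: since E[M^2] = A^2 + 2*sigma^2,
    subtract 2*sigma^2 from the squared magnitude (clipped at zero)."""
    return np.sqrt(np.maximum(mag**2 - 2.0 * sigma**2, 0.0))

rng = np.random.default_rng(4)
a, sigma = 5.0, 1.0                          # true intensity and noise level
re = a + rng.normal(0, sigma, 100000)        # real channel
im = rng.normal(0, sigma, 100000)            # imaginary channel
mag = np.hypot(re, im)                       # Rician-distributed magnitude
corrected = rician_correct(mag, sigma)
```

In the squared domain the correction is unbiased, which is the property the assertion below checks; at low SNR this simple formula degrades, motivating the more refined correction proposed in the paper.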
A Realization of Bias Correction Method in the GMAO Coupled System
NASA Technical Reports Server (NTRS)
Chang, Yehui; Koster, Randal; Wang, Hailan; Schubert, Siegfried; Suarez, Max
2018-01-01
Over the past several decades, a tremendous effort has been made to improve model performance in the simulation of the climate system. The cold or warm sea surface temperature (SST) bias in the tropics is still a problem common to most coupled ocean-atmosphere general circulation models (CGCMs). The precipitation biases in CGCMs are also accompanied by SST and surface wind biases. The deficiencies and biases over the equatorial oceans likely contribute, through their influence on the Walker circulation, to the precipitation biases over land surfaces. In this study, we introduce an approach to CGCM modeling that corrects model biases. This approach utilizes the history of the model's short-term forecasting errors and their seasonal dependence to modify the model's tendency terms and minimize its climate drift. The study shows that such an approach removes most of the model's climate biases. A number of other aspects of the model simulation (e.g. extratropical transient activity) are also improved considerably due to the imposed pre-processed initial 3-hour model drift corrections. Because many regional biases in the GEOS-5 CGCM are common amongst other current models, our approach and findings are applicable to these other models as well.
Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir
2018-06-01
Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
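As a point of reference for the additive case, row- and column-wise bias on a multiwell plate can be removed with a median-polish-style sweep. This is a generic illustration of the simple additive model the paper generalizes, not the AssayCorrector algorithm itself; the plate dimensions, signal level and bias gradients are all synthetic:

```python
import numpy as np

def remove_additive_bias(plate, n_iter=10):
    """Median-polish-style removal of additive row and column effects
    from a multiwell-plate measurement matrix."""
    p = plate.copy()
    for _ in range(n_iter):
        p -= np.median(p, axis=1, keepdims=True)   # subtract row effects
        p -= np.median(p, axis=0, keepdims=True)   # subtract column effects
    return p + np.median(plate)                    # restore the overall level

rng = np.random.default_rng(5)
signal = rng.normal(10.0, 0.5, size=(8, 12))       # true 96-well plate signal
row_bias = np.linspace(0, 2, 8)[:, None]           # additive row bias
col_bias = np.linspace(0, 1, 12)[None, :]          # additive column bias
plate = signal + row_bias + col_bias
cleaned = remove_additive_bias(plate)
```

A purely additive sweep like this is exactly what fails in wells where additive and multiplicative biases interact, which is the gap the proposed interaction models address.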
Detecting and removing multiplicative spatial bias in high-throughput screening technologies.
Caraus, Iurie; Mazoure, Bogdan; Nadon, Robert; Makarenkov, Vladimir
2017-10-15
Considerable attention has been paid recently to improve data quality in high-throughput screening (HTS) and high-content screening (HCS) technologies widely used in drug development and chemical toxicity research. However, several environmentally- and procedurally-induced spatial biases in experimental HTS and HCS screens decrease measurement accuracy, leading to increased numbers of false positives and false negatives in hit selection. Although effective bias correction methods and software have been developed over the past decades, almost all of these tools have been designed to reduce the effect of additive bias only. Here, we address the case of multiplicative spatial bias. We introduce three new statistical methods meant to reduce multiplicative spatial bias in screening technologies. We assess the performance of the methods with synthetic and real data affected by multiplicative spatial bias, including comparisons with current bias correction methods. We also describe a wider data correction protocol that integrates methods for removing both assay and plate-specific spatial biases, which can be either additive or multiplicative. The methods for removing multiplicative spatial bias and the data correction protocol are effective in detecting and cleaning experimental data generated by screening technologies. As our protocol is of a general nature, it can be used by researchers analyzing current or next-generation high-throughput screens. The AssayCorrector program, implemented in R, is available on CRAN. makarenkov.vladimir@uqam.ca. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Estimation of the electromagnetic bias from retracked TOPEX data
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto; Martin, Jan M.
1994-01-01
We examine the electromagnetic (EM) bias by using retracked TOPEX altimeter data. In contrast to previous studies, we use a parameterization of the EM bias which does not make stringent assumptions about the form of the correction or its global behavior. We find that the most effective single-parameter correction uses the altimeter-estimated wind speed, but that other parameterizations, using a wave-age-related parameter or significant wave height, may also significantly reduce the repeat-pass variance. The different corrections are compared, and their improvement of the TOPEX height variance is quantified.
NASA Technical Reports Server (NTRS)
Klein, V.; Schiess, J. R.
1977-01-01
An extended Kalman filter smoother and a fixed-point smoother were used to estimate the state variables in the six-degree-of-freedom kinematic equations relating measured aircraft responses, and to estimate unknown constant bias and scale-factor errors in the measured data. The computing algorithm includes an analysis of residuals which can improve the filter performance and provide estimates of measurement noise characteristics for some aircraft output variables. The technique developed was demonstrated using simulated and real flight test data. Improved accuracy of measured data was obtained when the data were corrected for estimated bias errors.
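The idea of estimating a constant measurement bias alongside the state can be shown with a much simpler augmented-state Kalman filter than the six-degree-of-freedom aircraft case. In this hypothetical one-dimensional analogue, a reference sensor observes the state directly and a second sensor carries an unknown constant bias; all dynamics and noise levels are synthetic:

```python
import numpy as np

def filter_with_bias(z_ref, z_biased, q=1e-3, r=1e-2):
    """Kalman filter with augmented state [x, b]: the reference sensor
    measures x, the second sensor measures x + b, with b constant."""
    x = np.zeros(2)                           # state estimate [x, b]
    P = np.eye(2)                             # state covariance
    Q = np.diag([q, 0.0])                     # x is a random walk, b is constant
    H = np.array([[1.0, 0.0],                 # reference sensor: x
                  [1.0, 1.0]])                # biased sensor:    x + b
    R = np.eye(2) * r
    for z1, z2 in zip(z_ref, z_biased):
        P = P + Q                             # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ (np.array([z1, z2]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x                                   # final [state, estimated bias]

rng = np.random.default_rng(6)
truth = np.cumsum(rng.normal(0, 0.03, 3000))
z_ref = truth + rng.normal(0, 0.1, 3000)
z_biased = truth + 0.7 + rng.normal(0, 0.1, 3000)
state, bias = filter_with_bias(z_ref, z_biased)
```

Once `bias` has converged, the biased channel can be corrected by subtraction, mirroring the paper's correction of measured flight data for estimated bias errors.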
Sumner, David M.; Pathak, Chandra S.; Mecikalski, John R.; Paech, Simon J.; Wu, Qinglong; Sangoyomi, Taiye; Babcock, Roger W.; Walton, Raymond
2008-01-01
Solar radiation data are critically important for the estimation of evapotranspiration. Analysis of visible-channel data derived from Geostationary Operational Environmental Satellites (GOES) using radiative transfer modeling has been used to produce spatially- and temporally-distributed datasets of solar radiation. An extensive network of (pyranometer) surface measurements of solar radiation in the State of Florida has allowed refined calibration of a GOES-derived daily integrated radiation data product. This refinement of radiation data allowed for corrections of satellite sensor drift, satellite generational change, and consideration of the highly-variable cloudy conditions that are typical of Florida. To aid in calibration of a GOES-derived radiation product, solar radiation data for the period 1995–2004 from 58 field stations that are located throughout the State were compiled. The GOES radiation product was calibrated by way of a three-step process: 1) comparison with ground-based pyranometer measurements on clear reference days, 2) correcting for a bias related to cloud cover, and 3) deriving month-by-month bias correction factors. Pre-calibration results indicated good model performance, with a station-averaged model error of 2.2 MJ m⁻² day⁻¹ (13 percent). Calibration reduced errors to 1.7 MJ m⁻² day⁻¹ (10 percent) and also removed time- and cloudiness-related biases. The final dataset has been used to produce Statewide evapotranspiration estimates.
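Step 3 of the calibration above, deriving month-by-month bias correction factors, can be sketched as simple per-month ratios of ground to satellite means. This is an illustrative sketch with synthetic data, not the study's GOES processing (the 1.1 overestimation factor and noise level are hypothetical):

```python
import numpy as np

def monthly_bias_factors(months, satellite, ground):
    """Month-by-month multiplicative bias factors: the ratio of mean
    ground-measured to mean satellite-derived radiation per month."""
    factors = {}
    for m in np.unique(months):
        sel = months == m
        factors[m] = ground[sel].mean() / satellite[sel].mean()
    return factors

rng = np.random.default_rng(7)
months = rng.integers(1, 13, size=2000)             # month index per daily record
ground = rng.gamma(8.0, 2.0, size=2000)             # pyranometer, MJ m^-2 day^-1
satellite = ground * 1.1 + rng.normal(0, 1, 2000)   # satellite overestimates
factors = monthly_bias_factors(months, satellite, ground)
corrected = satellite * np.vectorize(factors.get)(months)
```

By construction, each month's corrected mean matches the ground mean, which removes the seasonal component of the satellite bias.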
Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; ...
2016-03-03
Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.
Tropospheric GOM at the Pic du Midi Observatory-Correcting Bias in Denuder Based Observations.
Marusczak, Nicolas; Sonke, Jeroen E; Fu, Xuewu; Jiskra, Martin
2017-01-17
Gaseous elemental mercury (GEM, Hg⁰) emissions are transformed to divalent reactive Hg (RM) forms throughout the troposphere and stratosphere. RM is often operationally quantified as the sum of particle-bound Hg (PBM) and gaseous oxidized Hg (GOM). The measurement of GOM and PBM is challenging and under mounting criticism. Here we intercompare six months of automated GOM and PBM measurements using a Tekran (TK) KCl-coated denuder and quartz regenerable particulate filter method (GOM_TK, PBM_TK, and RM_TK) with RM_CEM collected on cation exchange membranes (CEMs) at the high-altitude Pic du Midi Observatory. We find that RM_TK is systematically lower, by a factor of 1.3, than RM_CEM. We observe a significant relationship between GOM_TK (but not PBM_TK) and Tekran flush_TK blanks, suggesting significant loss (36%) of labile GOM_TK from the denuder or inlet. Adding the flush_TK blank to RM_TK results in good agreement with RM_CEM (slope = 1.01, r² = 0.90), suggesting we can correct bias in RM_TK and GOM_TK. We provide a bias-corrected (*) Pic du Midi data set for 2012-2014 that shows GOM* and RM* levels in dry free-tropospheric air of 198 ± 57 and 229 ± 58 pg m⁻³, which agree well with in-flight observed RM and with model-based GOM and RM estimates.
NASA Astrophysics Data System (ADS)
Hagemann, Stefan; Chen, Cui; Haerter, Jan O.; Gerten, Dieter; Heinke, Jens; Piani, Claudio
2010-05-01
Future climate model scenarios depend crucially on their adequate representation of the hydrological cycle. Within the European project "Water and Global Change" (WATCH) special care is taken to couple state-of-the-art climate model output to a suite of hydrological models. This coupling is expected to lead to a better assessment of changes in the hydrological cycle. However, due to the systematic model errors of climate models, their output is often not directly applicable as input for hydrological models. Thus, the methodology of a statistical bias correction has been developed, which can be used for correcting climate model output to produce internally consistent fields that have the same statistical intensity distribution as the observations. As observations, global re-analysed daily data of precipitation and temperature obtained in the WATCH project are used. We will apply the bias correction to global climate model data of precipitation and temperature from the GCMs ECHAM5/MPIOM, CNRM-CM3 and LMDZ-4, and intercompare the bias corrected data to the original GCM data and the observations. Then, the original and the bias corrected GCM data will be used to force two global hydrology models: (1) the hydrological model of the Max Planck Institute for Meteorology (MPI-HM) consisting of the Simplified Land surface (SL) scheme and the Hydrological Discharge (HD) model, and (2) the dynamic vegetation model LPJmL operated by the Potsdam Institute for Climate Impact Research. The impact of the bias correction on the projected simulated hydrological changes will be analysed, and the resulting behaviour of the two hydrology models will be compared.
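A minimal empirical version of such a statistical bias correction is quantile mapping: pass each model value through the model CDF and back through the observed inverse CDF, so the corrected output inherits the observed intensity distribution. The WATCH methodology fits parametric transfer functions, so the sketch below, with synthetic gamma-distributed "precipitation", only approximates the idea:

```python
import numpy as np

# Synthetic observed and (biased) modelled daily precipitation.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=3.0, size=5000)
model = rng.gamma(shape=2.0, scale=4.5, size=5000)

# Empirical transfer function built from paired quantiles.
q = np.linspace(0, 1, 101)
model_q = np.quantile(model, q)
obs_q = np.quantile(obs, q)

def bias_correct(x):
    """Quantile-map model values onto the observed distribution."""
    return np.interp(x, model_q, obs_q)

corrected = bias_correct(model)
```

The corrected sample reproduces the observed mean and spread while preserving the rank order of the model values.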
Syfert, Mindy M; Smith, Matthew J; Coomes, David A
2013-01-01
Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and necessity of sampling bias correction within MaxEnt.
Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui
2014-09-01
Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to intensity inhomogeneity, commonly known as the bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them deal with the bias field in a separate pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that segments brain MR images while simultaneously correcting the bias field, even for images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of initialization, we reconstruct the energy function to be convex and minimize it using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate bias fields with quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities with promising results.
NASA Astrophysics Data System (ADS)
Moghim, S.; Hsu, K.; Bras, R. L.
2013-12-01
General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results, which affects their use. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDF-ANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations during a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF, Li et al. 2010). The proposed method is tested with Community Climate System Model (CCSM3) outputs using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled temperature and observations. The results indicate that EDCDF-ANN has the potential to remove biases from model outputs.
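The EDCDF component (Li et al. 2010) that the proposed procedure builds on can be sketched directly. The data below are synthetic and the ANN surrogate is omitted; only the equidistant CDF matching step is shown:

```python
import numpy as np

# Synthetic observed-historical, model-historical and model-future temperature.
rng = np.random.default_rng(2)
obs_hist = rng.normal(15.0, 4.0, 4000)
mod_hist = rng.normal(17.0, 5.0, 4000)   # model is warm-biased
mod_fut = rng.normal(19.0, 5.0, 4000)

def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at points x."""
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

def inv_ecdf(sample, p):
    """Empirical inverse CDF (quantile function) of `sample`."""
    return np.quantile(sample, np.clip(p, 0.0, 1.0))

# EDCDF: x_adj = x + F_obs^-1(F_mod_fut(x)) - F_mod_hist^-1(F_mod_fut(x)),
# i.e. the historical obs-minus-model quantile difference is added to the
# future value at the same probability level.
p = ecdf(mod_fut, mod_fut)
adjusted = mod_fut + inv_ecdf(obs_hist, p) - inv_ecdf(mod_hist, p)
```

With a 2 K warm bias in the model, the adjusted future mean lands near 17 K instead of the raw 19 K, while the model's projected warming signal is retained.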
Inoue, Makoto; Morino, Isamu; Uchino, Osamu; ...
2016-08-01
We describe a method for removing systematic biases of column-averaged dry-air mole fractions of CO2 (XCO2) and CH4 (XCH4) derived from short-wavelength infrared (SWIR) spectra of the Greenhouse gases Observing SATellite (GOSAT). We conduct correlation analyses between the GOSAT biases and simultaneously retrieved auxiliary parameters. We use these correlations to bias correct the GOSAT data, removing these spurious correlations. Data from the Total Carbon Column Observing Network (TCCON) were used as reference values for this regression analysis. To evaluate the effectiveness of this correction method, the uncorrected/corrected GOSAT data were compared to independent XCO2 and XCH4 data derived from aircraft measurements taken for the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project, the National Oceanic and Atmospheric Administration (NOAA), the US Department of Energy (DOE), the National Institute for Environmental Studies (NIES), the Japan Meteorological Agency (JMA), the HIAPER Pole-to-Pole Observations (HIPPO) program, and the GOSAT validation aircraft observation campaign over Japan. These comparisons demonstrate that the empirically derived bias correction improves the agreement between GOSAT XCO2/XCH4 and the aircraft data. Finally, we present spatial distributions and temporal variations of the derived GOSAT biases.
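The regression step can be illustrated with a least-squares fit of the retrieval-minus-reference bias against auxiliary parameters. The parameters, coefficients, and noise levels below are invented, not GOSAT values:

```python
import numpy as np

# Synthetic retrievals whose bias depends linearly on auxiliary parameters
# (stand-ins for quantities such as airmass, albedo, or aerosol load).
rng = np.random.default_rng(3)
n = 500
aux = rng.normal(size=(n, 3))
true_xco2 = rng.normal(400.0, 2.0, n)        # "reference" values, ppm
bias = 0.8 * aux[:, 0] - 0.5 * aux[:, 2] + 0.3
retrieved = true_xco2 + bias + rng.normal(0, 0.2, n)

# Least-squares fit of (retrieved - reference) on the auxiliary parameters,
# then subtraction of the fitted bias from every retrieval.
X = np.column_stack([np.ones(n), aux])
coef, *_ = np.linalg.lstsq(X, retrieved - true_xco2, rcond=None)
corrected = retrieved - X @ coef

rms_before = np.sqrt(np.mean((retrieved - true_xco2) ** 2))
rms_after = np.sqrt(np.mean((corrected - true_xco2) ** 2))
```

Once the spurious correlations are removed, the residual scatter is set by the retrieval noise alone.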
Correcting Estimates of the Occurrence Rate of Earth-like Exoplanets for Stellar Multiplicity
NASA Astrophysics Data System (ADS)
Cantor, Elliot; Dressing, Courtney D.; Ciardi, David R.; Christiansen, Jessie
2018-06-01
One of the most prominent questions in the exoplanet field has been determining the true occurrence rate of potentially habitable Earth-like planets. NASA’s Kepler mission has been instrumental in answering this question by searching for transiting exoplanets, but follow-up observations of Kepler target stars are needed to determine whether or not the surveyed Kepler targets are in multi-star systems. While many researchers have searched for companions to Kepler planet host stars, few studies have investigated the larger target sample. Regardless of physical association, the presence of nearby stellar companions biases our measurements of a system’s planetary parameters and reduces our sensitivity to small planets. Assuming that all Kepler target stars are single (as is done in many occurrence rate calculations) would overestimate our search completeness and result in an underestimate of the frequency of potentially habitable Earth-like planets. We aim to correct for this bias by characterizing the set of targets for which Kepler could have detected Earth-like planets. We are using adaptive optics (AO) imaging to reveal potential stellar companions and near-infrared spectroscopy to refine stellar parameters for a subset of the Kepler targets that are most amenable to the detection of Earth-like planets. We will then derive correction factors to correct for the biases in the larger set of target stars and determine the true frequency of systems with Earth-like planets. Due to the prevalence of stellar multiples, we expect to calculate an occurrence rate for Earth-like exoplanets that is higher than current figures.
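The flux-dilution effect motivating this correction can be shown with the standard transit-depth relation: an unresolved companion adds constant flux, making the transit look shallower and the planet look smaller than it is. The depth and flux ratio below are hypothetical:

```python
import math

# If a companion contributes flux F2 alongside the primary's F1, the observed
# depth is diluted by 1/(1 + F2/F1); recovering the true radius ratio means
# scaling the depth back up before taking the square root (assuming the
# planet orbits the primary).
def true_radius_ratio(observed_depth, flux_ratio):
    true_depth = observed_depth * (1.0 + flux_ratio)
    return math.sqrt(true_depth)

observed_depth = 84e-6   # made-up depth of an Earth-analog transit
flux_ratio = 0.25        # companion contributes 25% of the primary's flux

rp_over_rstar = true_radius_ratio(observed_depth, flux_ratio)
undiluted = math.sqrt(observed_depth)
```

Ignoring the companion here would understate the planet radius by about 11%, which is exactly the kind of bias that also inflates the apparent search completeness for small planets.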
NASA Astrophysics Data System (ADS)
Hu, Taiyang; Lv, Rongchuan; Jin, Xu; Li, Hao; Chen, Wenxin
2018-01-01
The nonlinear bias analysis and correction of receiving channels in the Chinese FY-3C meteorological satellite Microwave Temperature Sounder (MWTS) is a key technology of data assimilation for satellite radiance data. The thermal-vacuum chamber calibration data acquired from the MWTS can be analyzed to evaluate the instrument performance, including radiometric temperature sensitivity, channel nonlinearity and calibration accuracy. In particular, the nonlinearity parameters due to imperfect square-law detectors can be calculated from calibration data and further used to correct the nonlinear bias contributions of the microwave receiving channels. Based upon the operational principles and thermal-vacuum chamber calibration procedures of the MWTS, this paper focuses on nonlinear bias analysis and correction methods for improving the calibration accuracy of this important instrument onboard the FY-3C meteorological satellite, from the perspective of theoretical and experimental studies. Furthermore, a series of original results are presented to demonstrate the feasibility and significance of the methods.
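The correction for an imperfect square-law detector can be sketched as a quadratic calibration fitted through several thermal-vacuum targets, compared against the usual two-point linear calibration. All counts and coefficients below are invented for illustration:

```python
import numpy as np

# Targets at known brightness temperatures; counts carry a small quadratic
# term representing the detector nonlinearity.
t_targets = np.array([95.0, 150.0, 210.0, 260.0, 310.0])   # K
counts = 1000.0 + 40.0 * t_targets - 0.012 * t_targets ** 2

# Two-point linear calibration from the cold and warm ends only
g = (t_targets[-1] - t_targets[0]) / (counts[-1] - counts[0])
t_linear = t_targets[0] + g * (counts - counts[0])

# Quadratic fit (counts -> temperature) using all targets; counts are
# centered first for numerical conditioning.
cc = counts - counts.mean()
a2, a1, a0 = np.polyfit(cc, t_targets, 2)
t_quadratic = a0 + a1 * cc + a2 * cc ** 2

err_linear = np.max(np.abs(t_linear - t_targets))
err_quadratic = np.max(np.abs(t_quadratic - t_targets))
```

The linear calibration leaves a bias of several kelvin at mid-range scene temperatures, while the quadratic term absorbs it; the fitted a2 plays the role of the nonlinearity parameter.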
Liu, Zhao; Zheng, Chaorong; Wu, Yue
2018-02-01
Recently, a boundary layer profiler (BLP) operated in the Doppler beam swinging mode was installed in a coastal area of China to acquire wind field information in the atmospheric boundary layer. The performance of the BLP under strong wind conditions is evaluated here. It is found that, even though the quality-controlled BLP data show good agreement with balloon observations, a systematic bias is consistently present in the BLP data. At low wind velocities, the BLP tends to overestimate the atmospheric wind; as the wind velocity increases, it shows a tendency to underestimate. In order to remove the effect of poor-quality data on the bias correction, the probability distribution of the differences between the two instruments is examined, and the t location-scale distribution is found to be the most suitable among the probability models compared. After outliers with a large discrepancy, lying outside the 95% confidence interval of the t location-scale distribution, are discarded, the systematic bias can be successfully corrected using a first-order polynomial correction function. The bias correction methodology used in the study can serve as a reference for correcting other wind profiling radars and lays a solid basis for further analysis of the wind profiles.
Complacency and Automation Bias in the Use of Imperfect Automation.
Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L
2015-08-01
We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias.
NASA Astrophysics Data System (ADS)
Iizumi, Toshichika; Takikawa, Hiroki; Hirabayashi, Yukiko; Hanasaki, Naota; Nishimori, Motoki
2017-08-01
The use of different bias-correction methods and global retrospective meteorological forcing data sets as the reference climatology in the bias correction of general circulation model (GCM) daily data is a known source of uncertainty in projected climate extremes and their impacts. Despite their importance, limited attention has been given to these uncertainty sources. We compare 27 projected temperature and precipitation indices over 22 regions of the world (including the global land area) in the near (2021-2060) and distant future (2061-2100), calculated using four Representative Concentration Pathways (RCPs), five GCMs, two bias-correction methods, and three reference forcing data sets. To widen the variety of forcing data sets, we developed a new forcing data set, S14FD, and incorporated it into this study. The results show that S14FD is more accurate than other forcing data sets in representing the observed temperature and precipitation extremes in recent decades (1961-2000 and 1979-2008). The use of different bias-correction methods and forcing data sets contributes more to the total uncertainty in the projected precipitation index values in both the near and distant future than the use of different GCMs and RCPs. However, GCM appears to be the dominant uncertainty source for projected temperature index values in the near future, and RCP the dominant source in the distant future. Our findings encourage climate risk assessments, especially those related to precipitation extremes, to employ multiple bias-correction methods and forcing data sets in addition to using different GCMs and RCPs.
NASA Astrophysics Data System (ADS)
Li, Jingwan; Sharma, Ashish; Evans, Jason; Johnson, Fiona
2018-01-01
Addressing systematic biases in regional climate model simulations of extreme rainfall is a necessary first step before assessing changes in future rainfall extremes. Commonly used bias correction methods are designed to match statistics of the overall simulated rainfall with observations. This assumes that change in the mix of different types of extreme rainfall events (i.e. convective and non-convective) in a warmer climate is of little relevance in the estimation of overall change, an assumption that is not supported by empirical or physical evidence. This study proposes an alternative approach to account for the potential change of alternate rainfall types, characterized here by synoptic weather patterns (SPs) classified using self-organizing maps. The objective of this study is to evaluate the added influence of SPs on the bias correction, which is achieved by comparing the corrected distribution of future extreme rainfall with that using conventional quantile mapping. A comprehensive synthetic experiment is first defined to investigate the conditions under which the additional information of SPs makes a significant difference to the bias correction. Using over 600,000 synthetic cases, statistically significant differences are found to be present in 46% of cases. This is followed by a case study over the Sydney region using a high-resolution run of the Weather Research and Forecasting (WRF) regional climate model, which indicates a small change in the proportions of the SPs and a statistically significant change in the extreme rainfall over the region, although the differences between the changes obtained from the two bias correction methods are not statistically significant.
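The difference from conventional quantile mapping is that the transfer function is built separately within each synoptic-pattern class, so a future shift in the mix of patterns changes the corrected distribution. A toy two-pattern sketch with synthetic gamma rainfall and an invented multiplicative model bias:

```python
import numpy as np

rng = np.random.default_rng(5)

def make_rain(n_conv, n_strat):
    """Two synthetic rainfall populations: heavier 'convective-like' days
    (label 0) and lighter 'stratiform-like' days (label 1)."""
    conv = rng.gamma(2.0, 8.0, n_conv)
    strat = rng.gamma(2.0, 2.0, n_strat)
    labels = np.r_[np.zeros(n_conv, int), np.ones(n_strat, int)]
    return np.r_[conv, strat], labels

obs, obs_sp = make_rain(1000, 3000)
mod, mod_sp = make_rain(1000, 3000)
mod = 1.4 * mod                          # impose a multiplicative model bias

# Quantile mapping applied per synoptic-pattern class
q = np.linspace(0, 1, 101)
corrected = np.empty_like(mod)
for sp in (0, 1):
    m_q = np.quantile(mod[mod_sp == sp], q)
    o_q = np.quantile(obs[obs_sp == sp], q)
    corrected[mod_sp == sp] = np.interp(mod[mod_sp == sp], m_q, o_q)
```

If the future proportion of the heavy class grows, the per-class mapping passes that change through to the corrected extremes, which pooled quantile mapping cannot do.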
NASA Astrophysics Data System (ADS)
Tesfagiorgis, Kibrewossen B.
Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to spatial limitations of radar and gauge products in mountainous regions. The present work develops an approach to seamlessly blend satellite, available radar, climatological and gauge precipitation products to fill gaps in ground-based radar precipitation field. To mix different precipitation products, the error of any of the products relative to each other should be removed. For bias correction, the study uses a new ensemble-based method which aims to estimate spatially varying multiplicative biases in SPEs using a radar-gauge precipitation product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area. Spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. In addition to biases, sometimes there is also spatial error between the radar and satellite precipitation estimates; one of them has to be geometrically corrected with reference to the other. A set of corresponding raining points between SPE and radar products are selected to apply linear registration using a regularized least square technique to minimize the dislocation error in SPEs with respect to available radar products. A weighted Successive Correction Method (SCM) is used to make the merging between error corrected satellite and radar precipitation estimates. In addition to SCM, we use a combination of SCM and Bayesian spatial method for merging the rain gauges and climatological precipitation sources with radar and SPEs. We demonstrated the method using two satellite-based, CPC Morphing (CMORPH) and Hydro-Estimator (HE), two radar-gauge based, Stage-II and ST-IV, a climatological product PRISM and rain gauge dataset for several rain events from 2006 to 2008 over different geographical locations of the United States. 
Results show that: (a) the ensemble method helped reduce biases in SPEs significantly; and (b) the SCM method in combination with the Bayesian spatial model produced a precipitation product in good agreement with independent measurements. The study implies that, using the available radar pixels surrounding the gap area together with rain gauge, PRISM and satellite products, a radar-like product is achievable over radar gap areas, benefiting the operational meteorology and hydrology community.
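One pass of a weighted successive correction can be sketched with Cressman-style weights: the satellite field is the background, and scattered radar pixels nudge it toward the observations. The grid, observation locations, and the constant bias below are synthetic, and the real scheme also folds in gauge and climatological sources:

```python
import numpy as np

rng = np.random.default_rng(6)
ny, nx = 20, 20
yy, xx = np.mgrid[0:ny, 0:nx]

truth = 5.0 + 3.0 * np.sin(xx / 4.0)     # "true" rain field
background = truth + 2.0                  # biased satellite estimate

# Radar observations available only at scattered pixels
obs_idx = rng.choice(ny * nx, size=60, replace=False)
oy, ox = np.unravel_index(obs_idx, (ny, nx))
obs = truth[oy, ox]

R = 5.0                                   # influence radius (pixels)
analysis = background.copy()
num = np.zeros_like(background)
den = np.zeros_like(background)
for y, x, o in zip(oy, ox, obs):
    d2 = (yy - y) ** 2 + (xx - x) ** 2
    # Cressman weight: positive inside the influence radius, zero outside
    w = np.where(d2 < R * R, (R * R - d2) / (R * R + d2), 0.0)
    num += w * (o - background[y, x])     # weighted innovations
    den += w
analysis += np.where(den > 0, num / den, 0.0)

err_before = np.abs(background - truth).mean()
err_after = np.abs(analysis - truth).mean()
```

Everywhere the radar observations reach, the weighted innovations pull the satellite background onto the observed field; pixels outside every influence radius keep the background value.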
NASA Astrophysics Data System (ADS)
Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.
2015-06-01
Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. 
The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, and its use for streamflow simulations results in large biases from observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have more significant influence than temperature correction methods and the performances of streamflow simulations are consistent with those of corrected precipitation; i.e., the PT and QM methods performed equally best in correcting flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
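The simplest of the compared methods, linear scaling (LS), reduces to one multiplicative factor per month: the ratio of observed to simulated monthly mean precipitation. A one-month synthetic sketch:

```python
import numpy as np

# Synthetic observed and biased RCM daily precipitation for one month (mm)
rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 2.5, 31)
rcm = rng.gamma(2.0, 4.0, 31)

factor = obs.mean() / rcm.mean()   # monthly correction factor
rcm_ls = factor * rcm              # linear-scaled RCM precipitation
```

LS matches the monthly mean exactly by construction, but leaves the variance, wet-day frequency, and extremes uncorrected, which is consistent with the finding that frequency-adjusting methods such as LOCI, PT, and QM perform better on those indices.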
NASA Astrophysics Data System (ADS)
Nguyen, Huong Giang T.; Horn, Jarod C.; Thommes, Matthias; van Zee, Roger D.; Espinal, Laura
2017-12-01
Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
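The buoyancy arithmetic at the heart of the comparison fits in a few lines: the apparent mass change underestimates the surface-excess uptake by the gas density times the displaced volume, and a blank run isolates that term. The density, volume, and mass values below are invented, not instrument data:

```python
# Hypothetical single-point illustration of buoyancy correction.
rho_gas = 1.80e-3          # g/cm^3, gas density at the analysis condition (made up)
v_components = 2.50        # cm^3, displaced volume of sample + holder (made up)

delta_m_apparent = 0.0450              # g, apparent mass change with sample
buoyancy = rho_gas * v_components      # g, buoyancy expressed as mass

# Helium-method style correction: add the buoyancy term back directly,
# using a separately measured collective volume.
m_excess_helium = delta_m_apparent + buoyancy

# Blank subtraction: the blank run's apparent change is, to first order,
# just the buoyancy on the shared components (here slightly off, mimicking
# a temperature-gradient error in the volume-based approach).
delta_m_blank = -buoyancy * 0.96
m_excess_blank = delta_m_apparent - delta_m_blank
```

The blank-subtraction route needs no accurate volume for every balance component, which is why it mitigates the temperature-partition bias described above.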
Model-Based Control of Observer Bias for the Analysis of Presence-Only Data in Ecology
Warton, David I.; Renner, Ian W.; Ramp, Daniel
2013-01-01
Presence-only data, where information is available concerning species presence but not species absence, are subject to bias due to observers being more likely to visit and record sightings at some locations than others (hereafter “observer bias”). In this paper, we describe and evaluate a model-based approach to accounting for observer bias directly – by modelling presence locations as a function of known observer bias variables (such as accessibility variables) in addition to environmental variables, then conditioning on a common level of bias to make predictions of species occurrence free of such observer bias. We implement this idea using point process models with a LASSO penalty, a new presence-only method related to maximum entropy modelling, that implicitly addresses the “pseudo-absence problem” of where to locate pseudo-absences (and how many). The proposed method of bias-correction is evaluated using systematically collected presence/absence data for 62 plant species endemic to the Blue Mountains near Sydney, Australia. It is shown that modelling and controlling for observer bias significantly improves the accuracy of predictions made using presence-only data, and usually improves predictions as compared to pseudo-absence or “inventory” methods of bias correction based on absences from non-target species. Future research will consider the potential for improving the proposed bias-correction approach by estimating the observer bias simultaneously across multiple species. PMID:24260167
Baumann, Soo Mee; Webb, Patrick; Zeller, Manfred
2013-03-01
Cross-cultural validity of food security indicators is commonly presumed without questioning the suitability of generic indicators in different geographic settings. However, ethnic differences in the perception of, and reporting on, food insecurity, as well as variations in consumption patterns, may limit the comparability of results. Although research on correction factors for the standardization of food security indicators is in progress, so far no universal indicator has been identified. The current paper considers the ability of the Food Consumption Score (FCS), developed by the World Food Programme in southern Africa in 1996, to meet the requirement of local cultural validity in a Laotian context. The analysis is based on research that seeks to identify options for correcting possible biases linked to cultural disparities. Based on the results of a household survey conducted in different agroecological zones of Laos in 2009, the FCS was validated against a benchmark of calorie consumption. Changing the thresholds and excluding small amounts of food items consumed were tested as options to correct for biases caused by cultural disparities. The FCS in its original form underestimates the food insecurity level in the surveyed villages. However, the closeness of fit of the FCS to the benchmark classification improves when small amounts of food items are excluded from the assessment. Further research in different cultural settings is required to generate more insight into the extent to which universal thresholds can be applied to dietary diversity indicators, with or without locally determined correction factors such as the exclusion of small amounts of food items.
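As a concrete illustration of the indicator under discussion: the FCS is a weighted sum of 7-day food-group consumption frequencies. The sketch below uses the standard WFP group weights and the common 21/35 thresholds; the min_days exclusion is one way to implement the paper's small-amount correction, not necessarily the exact rule used in the study:

```python
# Standard WFP food-group weights for the Food Consumption Score.
WEIGHTS = {"staples": 2, "pulses": 3, "vegetables": 1, "fruit": 1,
           "meat_fish": 4, "milk": 4, "sugar": 0.5, "oil": 0.5}

def fcs(days_consumed, min_days=0):
    """days_consumed: food group -> days eaten in the 7-day recall.
    min_days > 0 drops groups consumed on few days (a stand-in for the
    paper's exclusion of items consumed only in small amounts)."""
    total = 0.0
    for group, days in days_consumed.items():
        d = min(days, 7)
        if d <= min_days:
            continue  # exclude marginally consumed groups
        total += WEIGHTS[group] * d
    return total

def classify(score, poor=21, borderline=35):
    # Common FCS thresholds; the paper tests shifting these.
    if score <= poor:
        return "poor"
    if score <= borderline:
        return "borderline"
    return "acceptable"
```

Raising the thresholds or excluding marginal items both move households toward the "poor"/"borderline" classes, which is how the corrections counter the FCS's underestimation of food insecurity reported above.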
NASA Astrophysics Data System (ADS)
Zhang, Guoguang; Yu, Zitian; Wang, Junmin
2017-03-01
Yaw rate is a crucial signal for the motion control systems of ground vehicles, yet it may be contaminated by sensor bias. In order to correct the contaminated yaw rate signal and estimate the sensor bias, a robust gain-scheduling observer is proposed in this paper. First, a two-degree-of-freedom (2DOF) vehicle lateral and yaw dynamic model is presented, and a Luenberger-like observer is proposed. To make the observer more applicable to real vehicle driving operations, a 2DOF vehicle model with uncertainties on the coefficients of tire cornering stiffness is employed. Further, a gain-scheduling approach and a robustness enhancement are introduced, leading to a robust gain-scheduling observer. A sensor bias detection mechanism is also designed. Case studies are conducted using an electric ground vehicle to assess the performance of signal correction and sensor bias estimation under different scenarios.
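A minimal sketch of the bias-estimation idea: augment the state with a constant bias and run a Luenberger observer on the biased measurement. The scalar yaw dynamics and the observer gain below are made-up illustrative values, not the paper's 2DOF model or its gain-scheduled design:

```python
import numpy as np

a, bu = 0.95, 0.05                       # assumed first-order yaw dynamics
A = np.array([[a, 0.0], [0.0, 1.0]])     # states: [yaw rate, sensor bias]
B = np.array([bu, 0.0])
C = np.array([1.0, 1.0])                 # measured yaw rate = true rate + bias
L = np.array([0.5, 0.3])                 # observer gain (hand-tuned here)

def observer_step(x_hat, u, y):
    # Luenberger correction: predict, then nudge by the output error.
    return A @ x_hat + B * u + L * (y - C @ x_hat)

x_true = np.array([0.0, 0.1])            # true yaw rate, true bias (0.1 rad/s)
x_hat = np.zeros(2)                      # observer starts with no bias estimate
for k in range(300):
    u = np.sin(0.05 * k)                 # steering-like excitation
    x_true = A @ x_true + B * u          # bias state stays constant
    y = C @ x_true                       # biased yaw-rate measurement
    x_hat = observer_step(x_hat, u, y)
```

Convergence requires the eigenvalues of A - LC to lie inside the unit circle; with the values above they are roughly 0.98 and 0.17, so the bias estimate converges, if slowly. Gain scheduling, as in the paper, adapts L to the operating condition.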
NASA Astrophysics Data System (ADS)
Yang, T.; Lee, C.
2017-12-01
Accounting for the biases in general circulation models (GCMs) is crucial for understanding future climate changes. Currently, most bias correction methodologies suffer from the assumption that model bias is stationary. This paper provides a non-stationary bias correction model, termed the Residual-based Bagging Tree (RBT) model, to reduce simulation biases and to quantify the contributions of single models. Specifically, the proposed model estimates the residuals between individual models and observations, and takes the differences between observations and the ensemble mean into consideration during the model training process. A case study is conducted for 10 major river basins in Mainland China during different seasons. Results show that the proposed model is capable of providing accurate and stable predictions while including the non-stationarities in the modeling framework. Significant reductions in both bias and root mean squared error are achieved with the proposed RBT model, especially for the central and western parts of China. The proposed RBT model consistently performs better in reducing biases than the raw ensemble mean, the ensemble mean with simple additive bias correction, and the single best model for different seasons. Furthermore, the contribution of each single GCM in reducing the overall bias is quantified; the single-model importance varies between 3.1% and 7.2%. For different future scenarios (RCP 2.6, RCP 4.5, and RCP 8.5), the results from the RBT model suggest temperature increases of 1.44 °C, 2.59 °C, and 4.71 °C by the end of the century, respectively, when compared to the average temperature during 1970-1999.
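For reference, the "simple additive bias correction" baseline that the RBT model is compared against amounts to shifting the ensemble mean by its mean historical bias; a minimal sketch (variable names are ours):

```python
import numpy as np

def additive_bias_correction(obs_hist, mod_hist, mod_future):
    # Stationary correction: assume the historical mean bias persists
    # unchanged into the future, which is the assumption RBT relaxes.
    bias = np.mean(mod_hist) - np.mean(obs_hist)
    return np.asarray(mod_future, dtype=float) - bias
```

Because the shift is a single constant estimated from the historical period, this baseline cannot track seasonally or temporally varying (non-stationary) biases, which is where the residual-learning approach gains its advantage.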
Dry Bias and Variability in Vaisala RS80-H Radiosondes: The ARM Experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, David D.; Lesht, B. M.; Clough, Shepard A.
2003-01-02
Thousands of comparisons between total precipitable water vapor (PWV) obtained from radiosonde (Vaisala RS80-H) profiles and PWV retrieved from a collocated microwave radiometer (MWR) were made at the Atmospheric Radiation Measurement (ARM) Program's Southern Great Plains Cloud and Radiation Testbed (SGP/CART) site in northern Oklahoma from 1994 to 2000. These comparisons show that the RS80-H radiosonde has an approximate 5% dry bias compared to the MWR. This observation is consistent with interpretations of Vaisala RS80 radiosonde data obtained during the Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA/COARE). In addition to the dry bias, analysis of the PWV comparisons, as well as of data obtained from dual-sonde soundings done at the SGP, shows that the calibration of the radiosonde humidity measurements varies considerably both when the radiosondes come from different calibration batches and when the radiosondes come from the same calibration batch. This variability can result in peak-to-peak differences between radiosondes of greater than 25% in PWV. Because accurate representation of the vertical profile of water vapor is critical for ARM's science objectives, we have developed an empirical method for correcting the radiosonde humidity profiles that is based on a constant scaling factor. By using an independent set of observations and radiative transfer models to test the correction, we show that the constant humidity scaling method appears both to improve the accuracy and reduce the uncertainty of the radiosonde data. We also used the ARM data to examine a different, physically based correction scheme that was developed recently by scientists from Vaisala and the National Center for Atmospheric Research (NCAR).
This scheme, which addresses the dry bias problem as well as other calibration-related problems with the RS80-H sensor, results in excellent agreement between the PWV retrieved from the MWR and integrated from the corrected radiosonde. However, because the physically based correction scheme does not address the apparently random calibration variations we observe, it does not reduce the variability either between radiosonde calibration batches or within individual calibration batches.
Extracting muon momentum scale corrections for hadron collider experiments
NASA Astrophysics Data System (ADS)
Bodek, A.; van Dyne, A.; Han, J. Y.; Sakumoto, W.; Strelnikov, A.
2012-10-01
We present a simple method for the extraction of corrections for bias in the measurement of the momentum of muons in hadron collider experiments. Such bias can originate from a variety of sources such as detector misalignment, software reconstruction bias, and uncertainties in the magnetic field. The two-step method uses the mean ⟨1/p_T^μ⟩ for muons from Z → μμ decays to determine the momentum scale corrections in bins of charge, η, and ϕ. In the second step, the corrections are tuned by using the average invariant mass ⟨M_μμ⟩ of Z → μμ events in the same bins of charge, η, and ϕ. The forward-backward asymmetry of Z/γ* → μμ pairs as a function of μ+μ− mass, and the ϕ distribution of Z bosons in the Collins-Soper frame, are used to ascertain that the corrections remove the bias in the momentum measurements for positively versus negatively charged muons. By taking the sum and difference of the momentum scale corrections for positive and negative muons, we isolate additive corrections to 1/p_T^μ that may originate from misalignments, and multiplicative corrections that may originate from mis-modeling of the magnetic field (∫B·dL). This method has recently been used in the CDF experiment at Fermilab and in the CMS experiment at the Large Hadron Collider at CERN.
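The sum/difference step can be written out directly: per (charge, η, ϕ) bin, the charge-antisymmetric part of the 1/pT corrections behaves like a misalignment term and the charge-symmetric part like a magnetic-field scale term. How the two parts are re-applied to a measurement (the corrected_inv_pt form below) is our illustrative assumption, not the paper's exact prescription:

```python
def decompose(c_plus, c_minus):
    # Corrections to 1/pT for positive and negative muons in the same
    # (eta, phi) bin, split into charge-odd and charge-even parts.
    additive = 0.5 * (c_plus - c_minus)        # flips sign with muon charge
    multiplicative = 0.5 * (c_plus + c_minus)  # common to both charges
    return additive, multiplicative

def corrected_inv_pt(inv_pt, charge, additive, multiplicative):
    # Hypothetical application: scale for the B-field part, shift for the
    # alignment part (charge = +1 or -1).
    return inv_pt * (1.0 + multiplicative) + charge * additive
```

Because misalignment shifts 1/pT with opposite sign for the two charges while a field mis-scale multiplies both the same way, the half-sum and half-difference cleanly separate the two effects.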
High-resolution near real-time drought monitoring in South Asia
NASA Astrophysics Data System (ADS)
Aadhar, Saran; Mishra, Vimal
2017-10-01
Droughts in South Asia affect food and water security and pose challenges for millions of people. For policy-making, planning, and management of water resources at sub-basin or administrative levels, high-resolution datasets of precipitation and air temperature are required in near-real time. We develop a high-resolution (0.05°) bias-corrected precipitation and temperature dataset that can be used to monitor near real-time drought conditions over South Asia. Moreover, the dataset can be used to monitor climatic extremes (heat and cold waves, dry and wet anomalies) in South Asia. A distribution mapping method was applied to correct bias in precipitation and air temperature, and it performed well compared to the other bias correction method based on linear scaling. Bias-corrected precipitation and temperature data were used to estimate the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI) to assess the historical and current drought conditions in South Asia. We evaluated drought severity and extent against the satellite-based Normalized Difference Vegetation Index (NDVI) anomalies and the satellite-driven Drought Severity Index (DSI) at 0.05°. The bias-corrected high-resolution data can effectively capture observed drought conditions, as shown by the satellite-based drought estimates. This high-resolution near real-time dataset can provide valuable information for decision-making at district and sub-basin levels.
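Distribution mapping of the kind used here is often implemented as empirical quantile mapping; a bare-bones numpy sketch (real applications handle wet-day frequency, seasonality, and tail extrapolation separately):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_values):
    # Replace each model value by the observed value at the same empirical
    # quantile of the model's historical distribution.
    model_hist = np.sort(np.asarray(model_hist, dtype=float))
    obs_hist = np.asarray(obs_hist, dtype=float)
    q = np.searchsorted(model_hist, model_values, side="right") / model_hist.size
    return np.quantile(obs_hist, np.clip(q, 0.0, 1.0))
```

The linear-scaling alternative mentioned in the abstract would instead multiply (precipitation) or shift (temperature) all values by a single monthly factor, which cannot reshape the distribution's tails the way quantile-based mapping does.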
Byrd, Darrin; Christopfel, Rebecca; Arabasz, Grae; Catana, Ciprian; Karp, Joel; Lodge, Martin A; Laymon, Charles; Moros, Eduardo G; Budzevich, Mikalai; Nehmeh, Sadek; Scheuermann, Joshua; Sunderland, John; Zhang, Jun; Kinahan, Paul
2018-01-01
Positron emission tomography (PET) is a quantitative imaging modality, but the computation of standardized uptake values (SUVs) requires several instruments to be correctly calibrated. Variability in the calibration process may lead to unreliable quantitation. Sealed source kits containing traceable amounts of [Formula: see text] were used to measure signal stability for 19 PET scanners at nine hospitals in the National Cancer Institute's Quantitative Imaging Network. Repeated measurements of the sources were performed on PET scanners and in dose calibrators. The measured scanner and dose calibrator signal biases were used to compute the bias in SUVs at multiple time points for each site over a 14-month period. Estimation of absolute SUV accuracy was confounded by bias from the solid phantoms' physical properties. On average, the intrascanner coefficient of variation for SUV measurements was 3.5%. Over the entire length of the study, single-scanner SUV values varied over a range of 11%. Dose calibrator bias was not correlated with scanner bias. Calibration factors from the image metadata were nearly as variable as scanner signal, and were correlated with signal for many scanners. SUVs often showed low intrascanner variability between successive measurements but were also prone to shifts in apparent bias, possibly in part due to scanner recalibrations that are part of regular scanner quality control. Biases of key factors in the computation of SUVs were not correlated and their temporal variations did not cancel out of the computation. Long-lived sources and image metadata may provide a check on the recalibration process.
Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction
Jian, Y; Planeta, B; Carson, R E
2016-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of the point spread function and/or other implementation methods in MOLAR. PMID:25479254
Quantitative image fusion in infrared radiometry
NASA Astrophysics Data System (ADS)
Romm, Iliya; Cukurel, Beni
2018-05-01
Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed at achieving quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
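The fuse-then-subtract idea can be illustrated with a linear per-pixel weighted least squares fit, assuming the simple camera model y_i = b + t_i·q (b: bias frame, t_i: integration time, q: photoquantity). The paper's actual estimator is nonlinear and more elaborate; names and layout here are ours:

```python
import numpy as np

def fuse(frames, times, bias, weights=None):
    # frames: (n_exposures, H, W); times: (n_exposures,); bias: (H, W) or scalar.
    frames = np.asarray(frames, dtype=float)
    t = np.asarray(times, dtype=float)[:, None, None]
    w = np.ones_like(frames) if weights is None else np.asarray(weights, float)
    resid = frames - bias                 # subtract in the photoquantity domain
    # Per-pixel WLS solution of y_i - b = t_i * q.
    return np.sum(w * t * resid, axis=0) / np.sum(w * t ** 2, axis=0)
```

In practice the weights would down-rank saturated or noise-dominated exposures per pixel, which is what gives multi-exposure fusion its extended tonal range.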
NASA Astrophysics Data System (ADS)
Saitoh, N.; Hatta, H.; Imasu, R.; Shiomi, K.; Kuze, A.; Niwa, Y.; Machida, T.; Sawa, Y.; Matsueda, H.
2016-12-01
Thermal and Near Infrared Sensor for Carbon Observation (TANSO)-Fourier Transform Spectrometer (FTS) on board the Greenhouse Gases Observing Satellite (GOSAT) has been observing carbon dioxide (CO2) concentrations in several atmospheric layers in the thermal infrared (TIR) band since its launch on 23 January 2009. We have compared TANSO-FTS TIR Version 1 (V1) CO2 data from 2010 to 2012 and CO2 data obtained by the Continuous CO2 Measuring Equipment (CME) installed on several JAL aircraft in the framework of the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project to evaluate bias in the TIR CO2 data in the lower and middle troposphere. Here, we have regarded the CME data obtained during the ascent and descent flights over several airports as part of CO2 vertical profiles there. The comparisons showed that the TIR V1 CO2 data had a negative bias against the CME CO2 data; the magnitude of the bias varied depending on season and latitude. We have estimated bias correction values for the TIR V1 lower and middle tropospheric CO2 data in each latitude band from 40°S to 60°N in each season on the basis of the comparisons with the CME CO2 profiles in limited areas over airports, applied the bias correction values to the TIR V1 CO2 data, and evaluated the quality of the bias-corrected TIR CO2 data globally through comparisons with CO2 data taken from the Nonhydrostatic Icosahedral Atmospheric Model (NICAM)-based Transport Model (TM). The bias-corrected TIR CO2 data showed a better agreement with the NICAM-TM CO2 than the original TIR data, which suggests that the bias correction values estimated in the limited areas are basically applicable to global TIR CO2 data. We have compared XCO2 data calculated from both the original and bias-corrected TIR CO2 data with TANSO-FTS SWIR and NICAM-TM XCO2 data; both the TIR XCO2 data agreed with SWIR and NICAM-TM XCO2 data within 1% except over the Sahara desert and strong source and sink regions.
Classification bias in commercial business lists for retail food stores in the U.S.
Han, Euna; Powell, Lisa M; Zenk, Shannon N; Rimkus, Leah; Ohri-Vachaspati, Punam; Chaloupka, Frank J
2012-04-18
Aspects of the food environment such as the availability of different types of food stores have recently emerged as key modifiable factors that may contribute to the increased prevalence of obesity. Given that many of these studies have derived their results from secondary datasets, and that the relationship of food stores with individual weight outcomes has been reported to vary by store type, it is important to understand the extent to which often-used secondary data correctly classify food stores. We evaluated the classification bias of food stores in the Dun & Bradstreet (D&B) and InfoUSA commercial business lists. We performed a full census in 274 randomly selected census tracts in the Chicago metropolitan area and collected detailed store attributes inside stores for classification. Store attributes were compared by classification match status and store type. Systematic classification bias by census tract characteristics was assessed in multivariate regression. D&B had a higher classification match rate than InfoUSA for supermarkets and grocery stores, while InfoUSA was higher for convenience stores. Both lists were more likely to correctly classify large supermarkets, grocery stores, and convenience stores with more cash registers and different types of service counters (supermarkets and grocery stores only). The likelihood of a correct classification match for supermarkets and grocery stores did not vary systematically by tract characteristics, whereas convenience stores were more likely to be misclassified in predominantly Black tracts. Researchers can rely on the classification of food stores in commercial datasets for supermarkets and grocery stores, whereas classifications for convenience and specialty food stores are subject to some systematic bias by neighborhood racial/ethnic composition.
Attribution of Extreme Rainfall Events in the South of France Using EURO-CORDEX Simulations
NASA Astrophysics Data System (ADS)
Luu, L. N.; Vautard, R.; Yiou, P.
2017-12-01
The Mediterranean region regularly undergoes episodes of intense precipitation in the fall season that exceed 300 mm a day. This study focuses on the role of climate change in the dynamics of the events that occur in the South of France. We used an ensemble of 10 EURO-CORDEX model simulations at two horizontal resolutions (EUR-11: 0.11° and EUR-44: 0.44°) for the attribution of extreme fall rainfall in the Cévennes mountain range (South of France). The biases of the simulations were corrected with a simple scaling adjustment and a quantile correction (CDFt). This produces five datasets, including EUR-44 and EUR-11 with and without scaling adjustment and CDFt-EUR-11, on which we test the impact of resolution and bias correction on the extremes. After pooling all models together, each dataset is fitted with a stationary Generalized Extreme Value (GEV) distribution over several periods to estimate a climate change signal in the tail of the distribution of extreme rainfall in the Cévennes region. Those changes are then interpreted with a scaling model that links extreme rainfall with mean and maximum daily temperature. The results show that the higher-resolution simulations with bias adjustment indicate a robust increase in the intensity and likelihood of occurrence of autumn extreme rainfall in the area under the current climate in comparison with the historical climate. The exceedance probability of a 1-in-1000-year event in the historical climate may increase by a factor of 1.8 under the current climate, with a confidence interval of 0.4 to 5.3, according to the CDFt bias-adjusted EUR-11 simulations. The change in magnitude appears to follow the Clausius-Clapeyron relation, which indicates a 7% increase in rainfall per 1 °C increase in temperature.
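The attribution statistic above (the change in likelihood of a fixed-magnitude event) can be sketched with a stationary GEV fit per period using scipy; this is a schematic of the calculation, not the study's pooled multi-model procedure:

```python
import numpy as np
from scipy.stats import genextreme

def probability_ratio(hist_maxima, curr_maxima, return_period=1000.0):
    # Fit a stationary GEV to each period's annual maxima.
    hist_fit = genextreme.fit(hist_maxima)
    curr_fit = genextreme.fit(curr_maxima)
    # Historical 1-in-`return_period`-year rainfall level ...
    level = genextreme.isf(1.0 / return_period, *hist_fit)
    # ... and its exceedance probability under each climate.
    p_hist = genextreme.sf(level, *hist_fit)   # equals 1/return_period
    p_curr = genextreme.sf(level, *curr_fit)
    return p_curr / p_hist
```

A ratio above 1 means the historical 1000-year event has become more likely; the confidence interval quoted in the abstract would come from resampling (e.g. bootstrapping the maxima) around this point estimate.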
Unabated global surface temperature warming: evaluating the evidence
NASA Astrophysics Data System (ADS)
Karl, T. R.; Arguez, A.
2015-12-01
New insights related to time-dependent bias corrections in global surface temperatures have led to higher rates of warming over the past few decades than previously reported in the IPCC Fifth Assessment Report (2014). Record high global temperatures in the past few years have also contributed to larger trends. The combination of these factors and new analyses of the rate of temperature change show unabated global warming since at least the mid-twentieth century. New time-dependent bias corrections account for: (1) differences in temperatures measured from ships and drifting buoys; (2) improved corrections to ship-measured temperatures; and (3) the larger rates of warming in polar regions (particularly the Arctic). Since 1951, the period over which IPCC (2014) attributes over half of the observed global warming to human causes, it is shown that there has been a remarkably robust and sustained warming, punctuated with inter-annual and decadal variability. This finding is confirmed through simple trend analysis and Empirical Mode Decomposition (EMD). Trend analysis, however, especially for decadal trends, is sensitive to selection bias in the beginning and ending dates. EMD has no selection bias. Additionally, it can highlight both short- and long-term processes affecting the global temperature time series, since it addresses both non-linear and non-stationary processes. For the new NOAA global temperature dataset, our analyses do not support the notion of a hiatus or slowing of long-term global warming. However, sub-decadal periods of little (or no) warming and of rapid warming can also be found, clearly showing the impact of inter-annual and decadal variability that previously has been attributed to both natural and human-induced non-greenhouse forcings.
UNBIASED CORRECTION RELATIONS FOR GALAXY CLUSTER PROPERTIES DERIVED FROM CHANDRA AND XMM-NEWTON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Hai-Hui; Li, Cheng-Kui; Chen, Yong
2015-01-20
We use a sample of 62 clusters of galaxies to investigate the discrepancies between the gas temperature and total mass within r_500 from XMM-Newton and Chandra data. Comparisons of the properties show that (1) both the de-projected and projected temperatures determined by Chandra are higher than those of XMM-Newton, and there is a good linear relationship for the de-projected temperatures: T_Chandra = 1.25 × T_XMM - 0.13. (2) The Chandra mass is much higher than the XMM-Newton mass, with a bias of 0.15, and our mass relation is log10 M_Chandra = 1.02 × log10 M_XMM + 0.15. To explore the reasons for the discrepancy in mass, we recalculate the Chandra mass (expressed as M_Ch^mod) by modifying its temperature with the de-projected temperature relation. The results show that M_Ch^mod is closer to the XMM-Newton mass, with the bias reducing to 0.02. Moreover, the M_Ch^mod values are corrected with the r_500 measured by XMM-Newton, and the intrinsic scatter is significantly improved, with the value reducing from 0.20 to 0.12. These results mean that the temperature bias may be the main factor causing the mass bias. Finally, we find that M_Ch^mod is consistent with the corresponding XMM-Newton mass derived directly from our mass relation at a given Chandra mass. Thus, the de-projected temperature and mass relations can provide unbiased corrections for galaxy cluster properties derived from Chandra and XMM-Newton.
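Written as plain functions, the two correction relations quoted above are (temperatures in keV; the mass relation acts on log10 of the mass, so the units cancel as long as both sides use the same ones):

```python
import math

def t_chandra_from_xmm(t_xmm_kev):
    # De-projected temperature relation: T_Chandra = 1.25 * T_XMM - 0.13.
    return 1.25 * t_xmm_kev - 0.13

def m_chandra_from_xmm(m_xmm):
    # Mass relation: log10 M_Chandra = 1.02 * log10 M_XMM + 0.15.
    return 10.0 ** (1.02 * math.log10(m_xmm) + 0.15)
```

Inverting the mass relation gives the XMM-equivalent mass at a given Chandra mass, which is how the consistency check described in the abstract can be performed.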
NASA Astrophysics Data System (ADS)
Hakala, Kirsti; Addor, Nans; Seibert, Jan
2017-04-01
Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method, and (4) parameterization of the hydrological model. We utilize climate projections at the 0.11° (approximately 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX is comprised of regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as an objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets, which works by selecting and recombining high-performing parameter sets with each other. Once HBV Light is calibrated, we perform a quantitative comparison of the influence of biases inherited from climate model simulations to the biases stemming from the hydrological model.
The evaluation is conducted over two time periods: (i) 1980-2009, to characterize the simulation realism under the current climate, and (ii) 2070-2099, to identify the magnitude of the projected change of streamflow under the climate scenarios RCP4.5 and RCP8.5. We utilize two techniques for correcting biases in the climate model output: quantile mapping and a new method, frequency bias correction (FBC). The FBC method matches the frequencies between observed and GCM-RCM data. In this way, it can be used to correct for all time scales, a known limitation of quantile mapping. A novel approach for the evaluation of the climate simulations and bias correction methods was then applied. Streamflow can be thought of as the "great integrator" of uncertainties: the ability, or the lack thereof, to correctly simulate streamflow is a way to assess the realism of the bias-corrected climate simulations. Long-term monthly means as well as high- and low-flow metrics are used to evaluate the realism of the simulations under the current climate and to gauge the impacts of climate change on streamflow. Preliminary results show that under the present climate, calibration of the hydrological model contributes a much smaller band of uncertainty in the modeling chain than the bias correction of the GCM-RCMs. Therefore, for future time periods, we expect the bias correction of climate model data to have a greater influence on projected changes in streamflow than the calibration of the hydrological model.
Sequence-specific bias correction for RNA-seq data using recurrent neural networks.
Zhang, Yao-Zhong; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru
2017-01-25
The recent success of deep learning techniques in machine learning and artificial intelligence has stimulated a great deal of interest among bioinformaticians, who now wish to bring the power of deep learning to bear on a host of bioinformatical problems. Deep learning is ideally suited for biological problems that require automatic or hierarchical feature representation of biological data when prior knowledge is limited. In this work, we address the sequence-specific bias correction problem for RNA-seq data using recurrent neural networks (RNNs) to model nucleotide sequences without pre-determining sequence structures. The sequence-specific bias of a read is then calculated based on the sequence probabilities estimated by the RNN, and used in the estimation of gene abundance. We explore the application of two popular RNN recurrent units for this task and demonstrate that RNN-based approaches provide a flexible way to model nucleotide sequences without knowledge of predetermined sequence structures. Our experiments show that training an RNN-based nucleotide sequence model is efficient, and RNN-based bias correction methods compare well with the state-of-the-art sequence-specific bias correction method on the commonly used MAQC-III data set. RNNs provide an alternative and flexible way to calculate sequence-specific bias without explicitly pre-determining sequence structures.
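To make the modelling idea concrete, a toy vanilla-RNN nucleotide model is sketched below: the sequence probability is the product of per-position conditional probabilities from a softmax output. The weights are random and untrained (the paper trains gated recurrent units on real reads), so the numbers are illustrative only:

```python
import numpy as np

ALPHABET = "ACGT"
H = 8                                 # hidden size (arbitrary for the demo)
rng = np.random.default_rng(0)
Wxh = rng.normal(0.0, 0.1, (H, 4))    # input-to-hidden weights
Whh = rng.normal(0.0, 0.1, (H, H))    # hidden-to-hidden weights
Why = rng.normal(0.0, 0.1, (4, H))    # hidden-to-output weights
bh = np.zeros(H)
by = np.zeros(4)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sequence_prob(seq):
    # p(x_1..x_T) = prod_t p(x_t | x_1..x_{t-1}) under the RNN.
    h = np.zeros(H)
    x = np.zeros(4)                   # zero vector as start-of-sequence input
    prob = 1.0
    for base in seq:
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        p = softmax(Why @ h + by)     # distribution over the next base
        i = ALPHABET.index(base)
        prob *= p[i]
        x = np.eye(4)[i]              # one-hot of the base just read
    return prob
```

In the paper's setting, read-level bias weights are then derived from such sequence probabilities and folded into the gene abundance estimator.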
Problems and Limitations of Satellite Image Orientation for Determination of Height Models
NASA Astrophysics Data System (ADS)
Jacobsen, K.
2017-05-01
The usual satellite image orientation is based on bias-corrected rational polynomial coefficients (RPC). The RPC describe the direct sensor orientation of the satellite images. Today the locations of the projection centres are determined without problems, but an accuracy limit is caused by the attitudes. Very high resolution satellites today are very agile, able to shift the imaged area by over 200 km within 10 to 11 seconds. The corresponding fast attitude acceleration of the satellite may cause a jitter which cannot be expressed by the third-order RPC, even if it is recorded by the gyros. Only a correction of the image geometry may help, but usually this is not done. The first indication of jitter problems is given by systematic errors of the y-parallaxes (py) in the intersection of corresponding points during the computation of ground coordinates. These y-parallaxes have a limited influence on the ground coordinates, but similar problems can be expected for the x-parallaxes, which directly determine the object height. Systematic y-parallaxes are shown for Ziyuan-3 (ZY3), WorldView-2 (WV2), Pleiades, Cartosat-1, IKONOS and GeoEye. Some of them show clear jitter effects. In addition, linear trends of py can be seen. Linear trends in py and tilts of computed height models may be caused by the limited accuracy of the attitude registration, but also by bias correction with affinity transformation. The bias correction is based on ground control points (GCPs). The accuracy of the GCPs is usually not a limiting factor, but the identification of the GCPs in the images may be difficult. A 2-dimensional bias-corrected RPC orientation by affinity transformation may cause tilts of the generated height models, but due to large affine image deformations some satellites, such as Cartosat-1, have to be handled with bias correction by affinity transformation.
Instead of a 2-dimensional RPC orientation, a 3-dimensional orientation is also possible, which takes the object height into account more than the 2-dimensional orientation does. The 3-dimensional orientation showed advantages for orientation based on a limited number of GCPs, but in the case of a poor GCP distribution it may also cause negative effects. For some of the used satellites the bias correction by affinity transformation showed advantages, but for others the bias correction by shift led to a better levelling of the generated height models, even if the root mean square (RMS) differences at the GCPs were larger than for bias correction by affinity transformation. The generated height models can be analyzed and corrected with reference height models. For the used data sets accurate reference height models are available, but an analysis and correction with the freely available SRTM digital surface model (DSM) or ALOS World 3D (AW3D30) is also possible and leads to similar results. The comparison of the generated height models with the reference DSM shows some height undulations, but the major influence on accuracy is caused by tilts of the height models. Some height model undulations reach up to 50 % of the ground sampling distance (GSD); this is not negligible, but it is barely visible in the standard deviations of the height. In any case an improvement of the generated height models is possible with reference height models. If such corrections are applied, they compensate possible negative effects of the type of bias correction or of the 2-dimensional orientation compared to the 3-dimensional handling.
An experimental verification of laser-velocimeter sampling bias and its correction
NASA Technical Reports Server (NTRS)
Johnson, D. A.; Modarress, D.; Owen, F. K.
1982-01-01
The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Defrance, Dimitri; Sultan, Benjamin; Janicot, Serge; Vrac, Mathieu
2017-04-01
Different CMIP exercises show that simulations of current and future temperature and precipitation are complex, with a high degree of uncertainty. For example, the African monsoon system is not correctly simulated, and most of the CMIP5 models underestimate the precipitation. Global Climate Models (GCMs) therefore show significant systematic biases that require correction before they can be used in impact studies. Bias correction methods have been developed over several years and increasingly use more complex statistical techniques. The aim of this work is to show the interest of the CDFt (Cumulative Distribution Function transform; Michelangeli et al., 2009) method for reducing the bias of 29 CMIP5 GCMs over Africa and to assess the impact of bias-corrected data on crop yield prediction by the end of the 21st century. We apply the CDFt to daily data covering the period from 1950 to 2099 (Historical and RCP8.5) and correct the climate variables (temperature, precipitation, solar radiation, wind) using the new daily database from the EU project WATer and global CHange (WATCH), available from 1979 to 2013, as reference data. The performance of the method is assessed in several cases. First, data are corrected based on different calibration periods and are compared, on the one hand, with observations to estimate the sensitivity of the method to the calibration period and, on the other hand, with another bias-correction method used in the ISIMIP project. We find that, whatever the calibration period used, CDFt corrects the mean state of the variables well and preserves their trends, as well as daily rainfall occurrence and intensity distributions. However, some differences appear in comparison with the outputs obtained with the method used in ISIMIP, showing that the quality of the correction is strongly related to the reference data.
Secondly, we validate the bias correction method with agronomic simulations (the SARRA-H model; Kouressy et al., 2008) by comparison with FAO crop yield estimates over West Africa. The impact simulations show that the crop model is sensitive to the input data. They also show decreasing crop yields by the end of this century. Michelangeli, P. A., Vrac, M., & Loukos, H. (2009). Probabilistic downscaling approaches: Application to wind cumulative distribution functions. Geophysical Research Letters, 36(11). Kouressy, M., Dingkuhn, M., Vaksmann, M., & Heinemann, A. B. (2008). Adaptation to diverse semi-arid environments of sorghum genotypes having different plant type and sensitivity to photoperiod. Agric. Forest Meteorol., http://dx.doi.org/10.1016/j.agrformet.2007.09.009
Characterizing bias correction uncertainty in wheat yield predictions
NASA Astrophysics Data System (ADS)
Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam
2017-04-01
Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex Global and Regional Climate Models, which are used as input to crop impact models. Yield predictions from these top-down approaches can have high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too uncertain for their intended use in practical adaptation options. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps that are used in impact studies to adjust climate model simulations to become more realistic when compared to observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input into impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from Regional Climate Models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions, together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping), as input to a statistical crop model for wheat, a staple European food crop.
The objective of our work is to compare the resulting simulation-driven wheat yield hindcasts with observation-driven wheat yield hindcasts for the UK and Germany, in order to determine the ranges of yield uncertainty that result from different climate model simulation inputs and bias correction methods. We simulate wheat yields using a General Linear Model that includes the effects of seasonal maximum temperatures and precipitation, since wheat is sensitive to heat stress during important developmental stages. We use the same statistical model to predict future wheat yields using the recently available bias-corrected simulations of EURO-CORDEX-Adjust. While statistical models are often criticized for their lack of complexity, an advantage is that we are thereby able to consider only the effect of the choice of climate model, resolution or bias correction method on yield. Initial results using both past and future bias-corrected climate simulations with a process-based model will also be presented. Through these methods, we make recommendations for preparing climate model output for crop models.
Krishna P. Poudel; Temesgen Hailemariam
2016-01-01
Using data from destructively sampled Douglas-fir and lodgepole pine trees, we evaluated the performance of regional volume and component biomass equations in terms of bias and RMSE. The volume and component biomass equations were calibrated using three different adjustment methods that used: (a) a correction factor based on ordinary least squares regression through...
Comparing multilayer brain networks between groups: Introducing graph metrics and recommendations.
Mandke, Kanad; Meier, Jil; Brookes, Matthew J; O'Dea, Reuben D; Van Mieghem, Piet; Stam, Cornelis J; Hillebrand, Arjan; Tewarie, Prejaas
2018-02-01
There is an increasing awareness of the advantages of multi-modal neuroimaging. Networks obtained from different modalities are usually treated in isolation, which, however, contradicts accumulating evidence that these networks show non-trivial interdependencies. Even networks obtained from a single modality, such as frequency-band specific functional networks measured with magnetoencephalography (MEG), are often treated independently. Here, we discuss how a multilayer network framework allows for integration of multiple networks into a single network description and how graph metrics can be applied to quantify multilayer network organisation for group comparison. We analyse how well-known biases for single-layer networks, such as effects of group differences in link density and/or average connectivity, influence multilayer networks, and we compare four schemes that aim to correct for such biases: the minimum spanning tree (MST), effective graph resistance cost minimisation, efficiency cost optimisation (ECO) and a normalisation scheme based on singular value decomposition (SVD). These schemes can be applied to the layers independently or to the multilayer network as a whole. For correction applied to whole multilayer networks, only the SVD showed sufficient bias correction. For correction applied to individual layers, three schemes (ECO, MST, SVD) could correct for biases. By using generative models as well as empirical MEG and functional magnetic resonance imaging (fMRI) data, we further demonstrated that all schemes were sensitive to changes in network topology when the original networks were perturbed. In conclusion, uncorrected multilayer network analysis leads to biases. These biases may differ between centres and studies and could consequently lead to unreproducible results, in a similar manner as for single-layer networks. We therefore recommend using correction schemes prior to multilayer network analysis for group comparisons.
Copyright © 2017 Elsevier Inc. All rights reserved.
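Of the four correction schemes compared above, the MST is the most compact to illustrate: keeping only the strongest links that connect all nodes fixes the link count per subject, removing group differences in link density. A pure-Python Kruskal sketch under that reading (a toy illustration, not the authors' pipeline):

```python
def maximum_spanning_tree(n_nodes, edges):
    """Keep the n_nodes - 1 strongest links that span the network.

    edges: list of (weight, node_a, node_b) tuples; returns the kept edges.
    Uses Kruskal's algorithm with a union-find structure, taking the
    strongest connectivity weights first (maximum spanning tree).
    """
    parent = list(range(n_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    kept = []
    for w, a, b in sorted(edges, reverse=True):  # strongest first
        ra, rb = find(a), find(b)
        if ra != rb:                             # keep only tree edges
            parent[ra] = rb
            kept.append((w, a, b))
    return kept
```

Because every subject's MST has exactly the same number of edges, group comparisons of MST metrics are not confounded by density differences, which is the bias the abstract describes.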
NASA Astrophysics Data System (ADS)
Zhu, Q.; Xu, Y. P.; Hsu, K. L.
2017-12-01
A new satellite-based precipitation dataset, Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR), with a long-term time series dating back to 1983, can be a valuable dataset for climate studies. This study investigates the feasibility of using PERSIANN-CDR as a reference dataset for climate studies. Sixteen CMIP5 models are evaluated over the Xiang River basin, southern China, by comparing their performance on precipitation projection and streamflow simulation, particularly for extreme precipitation and streamflow events. The results show that PERSIANN-CDR is a valuable dataset for climate studies, even for extreme precipitation events. The precipitation estimates and their extreme events from the CMIP5 models are improved significantly relative to rain gauge observations after bias correction against the PERSIANN-CDR precipitation estimates. Of the streamflows simulated with raw and bias-corrected precipitation estimates from the 16 CMIP5 models, 10 out of 16 are improved after bias correction. The impact of bias correction on extreme streamflow events is less stable: only eight of the 16 models can clearly be claimed to improve after bias correction. Concerning the performance of the raw CMIP5 models on precipitation, IPSL-CM5A-MR outperforms the other CMIP5 models, while MRI-CGCM3 excels for extreme events, with better performance on six extreme precipitation metrics. Case studies also show that raw CCSM4, CESM1-CAM5, and MRI-CGCM3 outperform the other models on streamflow simulation, while MIROC5-ESM-CHEM, MIROC5-ESM and IPSL-CM5A-MR behave better than the other models after bias correction.
Impact of correction factors in human brain lesion-behavior inference.
Sperber, Christoph; Karnath, Hans-Otto
2017-03-01
Statistical voxel-based lesion-behavior mapping (VLBM) in neurological patients with brain lesions is frequently used to examine the relationship between structure and function of the healthy human brain. Only recently, two simulation studies noted reduced anatomical validity of this method, observing the results of VLBM to be systematically misplaced by about 16 mm. However, both simulation studies differed from VLBM analyses of real data in that they lacked the proper use of two correction factors: lesion size and "sufficient lesion affection." In simulation experiments on a sample of 274 real stroke patients, we found that the use of these two correction factors reduced misplacement markedly compared to uncorrected VLBM. Apparently, the misplacement is due to physiological effects of brain lesion anatomy. Voxel-wise topographies of collateral damage in the real data were generated and used to compute a metric for the inter-voxel relation of brain damage. "Anatomical bias" vectors, calculated solely from these inter-voxel relations in the patients' real anatomical data, successfully predicted the VLBM misplacement. The latter has the potential to help in the development of new VLBM methods that provide even higher anatomical validity than currently available through the proper use of correction factors. Hum Brain Mapp 38:1692-1701, 2017. © 2017 Wiley Periodicals, Inc.
Transport through correlated systems with density functional theory
NASA Astrophysics Data System (ADS)
Kurth, S.; Stefanucci, G.
2017-10-01
We present recent advances in density functional theory (DFT) for applications in the field of quantum transport, with particular emphasis on transport through strongly correlated systems. We review the foundations of the popular Landauer-Büttiker (LB) + DFT approach. This formalism, when using approximations to the exchange-correlation (xc) potential with steps at integer occupation, correctly captures the Kondo plateau in the zero-bias conductance at zero temperature but completely fails to capture the transition to the Coulomb blockade (CB) regime as the temperature increases. To overcome the limitations of LB + DFT, the quantum transport problem is treated from a time-dependent (TD) perspective using TDDFT, an exact framework for dealing with nonequilibrium situations. The steady-state limit of TDDFT shows that in addition to an xc potential in the junction, there also exists an xc correction to the applied bias. Open-shell molecules in the CB regime provide the most striking examples of the importance of the xc bias correction. Using the Anderson model as guidance, we estimate these corrections in the limit of zero bias. For the general case we put forward a steady-state DFT based on a one-to-one correspondence between two pairs of basic variables: the steady density on, and the steady current across, the junction; and the local potential on, and the bias across, the junction. Like TDDFT, this framework also leads to both an xc potential in the junction and an xc correction to the bias. Unlike TDDFT, these potentials are independent of history. We highlight the universal features of both the xc potential and xc bias corrections for junctions in the CB regime and provide an accurate parametrization for the Anderson model at arbitrary temperatures and interaction strengths, thus providing a unified DFT description of both the Kondo and CB regimes and the transition between them.
Magnetometer bias determination and attitude determination for near-earth spacecraft
NASA Technical Reports Server (NTRS)
Lerner, G. M.; Shuster, M. D.
1979-01-01
A simple linear-regression algorithm is used to determine simultaneously magnetometer biases, misalignments, and scale factor corrections, as well as the dependence of the measured magnetic field on magnetic control systems. This algorithm has been applied to data from the Seasat-1 and the Atmosphere Explorer Mission-1/Heat Capacity Mapping Mission (AEM-1/HCMM) spacecraft. Results show that complete in-flight calibration as described here can significantly improve the accuracy of attitude solutions obtained from magnetometer measurements. This report discusses the difficulties involved in obtaining attitude information from three-axis magnetometers, briefly derives the calibration algorithm, and presents numerical results for the Seasat-1 and AEM-1/HCMM spacecraft.
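The linear-regression idea can be illustrated per axis: regressing the measured field component on the reference field component lets the slope absorb the scale-factor error and the intercept the bias. A minimal sketch under our simplification (the flight algorithm also estimates misalignments and magnetic-control coupling, omitted here):

```python
def fit_scale_and_bias(reference, measured):
    """Ordinary least squares fit of measured = scale * reference + bias
    for one magnetometer axis, given a reference field model."""
    n = len(reference)
    mx = sum(reference) / n
    my = sum(measured) / n
    sxx = sum((x - mx) ** 2 for x in reference)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, measured))
    scale = sxy / sxx          # scale-factor correction
    bias = my - scale * mx     # constant magnetometer bias
    return scale, bias
```

With the full in-flight problem, all axes and the misalignment terms are stacked into one multivariate linear regression, which is what makes the calibration "simple" in the abstract's sense.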
NASA Astrophysics Data System (ADS)
Monico, J. F. G.; De Oliveira, P. S., Jr.; Morel, L.; Fund, F.; Durand, S.; Durand, F.
2017-12-01
Mitigation of ionospheric effects on GNSS (Global Navigation Satellite System) signals is very challenging, especially for GNSS positioning applications based on the SSR (State Space Representation) concept, which requires knowledge of spatially correlated errors at a considerable accuracy level (centimeters). The presence of satellite and receiver hardware biases in GNSS measurements makes the proper estimation of ionospheric corrections difficult, reducing their physical meaning. This problem can lead to ionospheric corrections biased by several meters and often presenting negative values, which is physically not possible. In this contribution, we discuss a strategy to obtain SSR ionospheric corrections based on GNSS measurements from CORS (Continuous Operation Reference Station) networks with minimal presence of hardware biases and, consequently, physical meaning. Preliminary results are presented on the generation and application of such corrections for simulated users located in the Brazilian region under a high level of ionospheric activity.
Quezada, Amado D; García-Guerra, Armando; Escobar, Leticia
2016-06-01
To assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and validated in a large population-based survey sample of Mexican preschool children. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.
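Under the paper's symmetry assumption, ages within a given month of age are distributed symmetrically, so the month midpoint is an unbiased stand-in for the unknown exact age. A schematic of that correction (the function name and the mean month length are our assumptions, not the paper's code):

```python
def midpoint_age_in_days(completed_months, days_per_month=30.4375):
    """Assign the midpoint of the month of age as the attained age,
    rather than the month start implied by completed months."""
    return (completed_months + 0.5) * days_per_month
```

Using `completed_months * days_per_month` instead systematically understates attained age, which is consistent with the uncorrected underestimation of stunting and overestimation of overweight among the youngest reported in the abstract.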
NASA Astrophysics Data System (ADS)
Liersch, Stefan; Tecklenburg, Julia; Rust, Henning; Dobler, Andreas; Fischer, Madlen; Kruschke, Tim; Koch, Hagen; Fokko Hattermann, Fred
2018-04-01
Climate simulations are the fuel to drive hydrological models that are used to assess the impacts of climate change and variability on hydrological parameters, such as river discharges, soil moisture, and evapotranspiration. Unlike with cars, where we know which fuel the engine requires, we never know in advance what unexpected side effects might be caused by the fuel we feed our models with. Sometimes we increase the fuel's octane number (bias correction) to achieve better performance and find out that the model behaves differently but not always as was expected or desired. This study investigates the impacts of projected climate change on the hydrology of the Upper Blue Nile catchment using two model ensembles consisting of five global CMIP5 Earth system models and 10 regional climate models (CORDEX Africa). WATCH forcing data were used to calibrate an eco-hydrological model and to bias-correct both model ensembles using slightly differing approaches. On the one hand it was found that the bias correction methods considerably improved the performance of average rainfall characteristics in the reference period (1970-1999) in most of the cases. This also holds true for non-extreme discharge conditions between Q20 and Q80. On the other hand, bias-corrected simulations tend to overemphasize magnitudes of projected change signals and extremes. A general weakness of both uncorrected and bias-corrected simulations is the rather poor representation of high and low flows and their extremes, which were often deteriorated by bias correction. This inaccuracy is a crucial deficiency for regional impact studies dealing with water management issues and it is therefore important to analyse model performance and characteristics and the effect of bias correction, and eventually to exclude some climate models from the ensemble. 
However, the multi-model means of all ensembles project increasing average annual discharges in the Upper Blue Nile catchment and a shift in seasonal patterns, with decreasing discharges in June and July and increasing discharges from August to November.
Impact of chlorophyll bias on the tropical Pacific mean climate in an earth system model
NASA Astrophysics Data System (ADS)
Lim, Hyung-Gyu; Park, Jong-Yeon; Kug, Jong-Seong
2017-12-01
Climate modeling groups nowadays develop earth system models (ESMs) by incorporating biogeochemical processes in their climate models. The ESMs, however, often show substantial bias in simulated marine biogeochemistry which can potentially introduce an undesirable bias in physical ocean fields through biogeophysical interactions. This study examines how and how much the chlorophyll bias in a state-of-the-art ESM affects the mean and seasonal cycle of tropical Pacific sea-surface temperature (SST). The ESM used in the present study shows a sizeable positive bias in the simulated tropical chlorophyll. We found that the correction of the chlorophyll bias can reduce the ESM's intrinsic cold SST mean bias in the equatorial Pacific. The biologically-induced cold SST bias is strongly affected by seasonally-dependent air-sea coupling strength. In addition, the correction of chlorophyll bias can improve the annual cycle of SST by up to 25%. This result suggests a possible modeling approach in understanding the two-way interactions between physical and chlorophyll biases by biogeophysical effects.
The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.
Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin
2015-11-01
We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity term using local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of the true intensity and bias field, and a simple thresholding is used in the second stage for segmentation. Different from most existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with state-of-the-art approaches and well-known brain software tools, our model is fast, accurate, and robust to initializations.
Zhu, Qiaohao; Carriere, K C
2016-01-01
Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detection and correction of publication bias in meta-analysis focuses mainly on funnel-plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations in which publication bias may be induced: (1) by small effect size or (2) by large p-value. We consider both fixed- and random-effects models and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and in correcting publication bias under various situations.
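For the small-effect-size situation under a fixed-effects model, the truncated-distribution idea can be sketched as a maximum-likelihood fit of a normal distribution truncated below a cutoff; here a crude grid search stands in for a proper optimizer (an illustration of the approach, not the paper's estimators):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def trunc_loglik(mu, sigma, cutoff, data):
    """Log-likelihood of data drawn from N(mu, sigma^2) truncated below at cutoff."""
    tail = 1.0 - norm_cdf((cutoff - mu) / sigma)
    ll = 0.0
    for x in data:
        z = (x - mu) / sigma
        ll += -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))
        ll -= math.log(tail)  # renormalize for the unobserved (unpublished) mass
    return ll

def estimate_mu(data, sigma, cutoff, grid):
    """Grid-search MLE of the underlying mean effect size."""
    return max(grid, key=lambda mu: trunc_loglik(mu, sigma, cutoff, data))
```

Because studies below the cutoff are never observed, the MLE lands below the naive mean of the published effects, which is exactly the direction of the publication-bias correction.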
On the distortion of elevation dependent warming signals by quantile mapping
NASA Astrophysics Data System (ADS)
Jury, Martin W.; Mendlik, Thomas; Maraun, Douglas
2017-04-01
Elevation dependent warming (EDW), the amplification of warming under climate change with elevation, is likely to accelerate changes in, e.g., cryospheric and hydrological systems. Responsible for EDW is a mixture of processes including the snow albedo feedback, cloud formation and the location of aerosols. The degree to which these processes are incorporated varies across state-of-the-art climate models. In a recent study we prepared bias-corrected model output of CMIP5 GCMs and CORDEX RCMs over the Himalayan region for the glacier modelling community. In a first attempt we used quantile mapping (QM) to generate these data. A prior model evaluation showed that more than two thirds of the 49 included climate models were able to reproduce the positive trend differences between areas of higher and lower elevation in winter that are clearly visible in all five of the observational datasets used. Regrettably, we noticed that the height-dependent trend signals provided by the models were distorted, most of the time in the direction of less EDW, sometimes even reversing EDW signals present in the models before the bias correction. As a consequence, we refrained from using quantile mapping for our task, since EDW is one important factor influencing the climate at high altitudes in the nearer and more distant future, and used a climate-change-signal preserving bias correction approach instead. Here we present our findings on the distortion of the EDW temperature change signal by QM and discuss the influence of QM on different statistical properties as well as their modifications.
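The distortion is easy to reproduce in a toy setting: empirical quantile mapping calibrated on a historical period maps future values beyond the calibration range onto the observed maximum, damping the very change signal one wants to preserve (our minimal illustration of the effect, not the study's analysis):

```python
import bisect

def quantile_map(x, model_hist, obs_hist):
    """Empirical quantile mapping calibrated on the historical period."""
    m, o = sorted(model_hist), sorted(obs_hist)
    rank = bisect.bisect_right(m, x)
    return o[min(rank * len(o) // len(m), len(o) - 1)]

model_hist = list(range(10))            # historical model values
obs_hist = list(range(10))              # observations identical: QM should be neutral
future = [x + 3 for x in model_hist]    # future run with a uniform +3 warming signal

corrected = [quantile_map(x, model_hist, obs_hist) for x in future]
raw_signal = max(future) - max(model_hist)        # +3 warming in the raw output
mapped_signal = max(corrected) - max(model_hist)  # 0: clamped to the calibration range
```

When grid cells at different elevations are corrected independently, this clamping can reduce, or even reverse, the trend difference between high and low elevations, which is the EDW distortion described above; trend-preserving methods avoid it by construction.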
When do we care about political neutrality? The hypocritical nature of reaction to political bias
Yair, Omer; Sulitzeanu-Kenan, Raanan
2018-01-01
Claims and accusations of political bias are common in many countries. The essence of such claims is a denunciation of alleged violations of political neutrality in the context of media coverage, legal and bureaucratic decisions, academic teaching etc. Yet the acts and messages that give rise to such claims are also embedded within a context of intergroup competition. Thus, in evaluating the seriousness of, and the need for taking a corrective action in reaction to a purported politically biased act people may consider both the alleged normative violation and the political implications of the act/message for the evaluator’s ingroup. The question thus arises whether partisans react similarly to ingroup-aiding and ingroup-harming actions or messages which they perceive as politically biased. In three separate studies, conducted in two countries, we show that political considerations strongly affect partisans’ reactions to actions and messages that they perceive as politically biased. Namely, ingroup-harming biased messages/acts are considered more serious and are more likely to warrant corrective action in comparison to ingroup-aiding biased messages/acts. We conclude by discussing the implications of these findings for the implementations of measures intended for correcting and preventing biases, and for the nature of conflict and competition between rival political groups. PMID:29723271
Skin Temperature Analysis and Bias Correction in a Coupled Land-Atmosphere Data Assimilation System
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Radakovich, Jon D.; daSilva, Arlindo; Todling, Ricardo; Verter, Frances
2006-01-01
In an initial investigation, remotely sensed surface temperature is assimilated into a coupled atmosphere/land global data assimilation system, with explicit accounting for biases in the model state. In this scheme, an incremental bias correction term is introduced in the model's surface energy budget. In its simplest form, the algorithm estimates and corrects a constant time-mean bias for each gridpoint; additional benefits are attained with a refined version of the algorithm which allows for a correction of the mean diurnal cycle. The method is validated against the assimilated observations, as well as independent near-surface air temperature observations. In many regions, not accounting for the diurnal cycle of bias caused degradation of the diurnal amplitude of the background model air temperature. Energy fluxes collected through the Coordinated Enhanced Observing Period (CEOP) are used to more closely inspect the surface energy budget. In general, sensible heat flux is improved with the surface temperature assimilation, and two stations show a reduction of bias by as much as 30 W m^-2. At the Rondonia station in Amazonia, the Bowen ratio changes direction in an improvement related to the temperature assimilation. However, at many stations the monthly latent heat flux bias is slightly increased. These results show the impact of univariate assimilation of surface temperature observations on the surface energy budget, and suggest the need for multivariate land data assimilation. The results also show the need for independent validation data, especially flux stations in varied climate regimes.
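The core of the incremental scheme can be sketched with a toy scalar assimilation cycle. Everything here (the persistence model, the gains, the constant +1.0 model bias) is invented for illustration; only the structure, in which a fraction of each innovation feeds a per-gridpoint bias term, mirrors the scheme described in the abstract.

```python
# Toy sketch of incremental bias correction in a scalar assimilation
# cycle. Model, gains and bias magnitude are assumptions, not the GEOS
# coupled land-atmosphere system described above.

def assimilate_with_bias_correction(truth, forecast0, n_cycles,
                                    gain=0.5, bias_gain=0.1):
    bias_est = 0.0
    forecast = forecast0
    history = []
    for _ in range(n_cycles):
        corrected = forecast - bias_est          # bias-corrected background
        innovation = truth - corrected           # observation minus background
        analysis = corrected + gain * innovation
        bias_est -= bias_gain * innovation       # incremental bias update
        history.append((analysis, bias_est))
        forecast = analysis + 1.0                # model re-adds a +1.0 bias
    return history

history = assimilate_with_bias_correction(truth=10.0, forecast0=12.0,
                                          n_cycles=200)
final_analysis, final_bias = history[-1]
# the bias estimate converges to the model's +1.0 offset and the
# analysis to the truth
```

With the bias term switched off (`bias_gain=0`), the analysis settles at a persistent offset from the truth instead.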
High-Resolution Near Real-Time Drought Monitoring in South Asia
NASA Astrophysics Data System (ADS)
Aadhar, S.; Mishra, V.
2017-12-01
Droughts in South Asia affect food and water security and pose challenges for millions of people. For policy-making, planning and management of water resources at the sub-basin or administrative levels, high-resolution datasets of precipitation and air temperature are required in near-real time. Here we develop high-resolution (0.05°) bias-corrected precipitation and temperature data that can be used to monitor near-real-time drought conditions over South Asia. Moreover, the dataset can be used to monitor climatic extremes (heat waves, cold waves, dry and wet anomalies) in South Asia. A distribution mapping method was applied to correct bias in precipitation and air temperature (maximum and minimum), and it performed well compared to the other bias correction method considered, based on linear scaling. Bias-corrected precipitation and temperature data were used to estimate the Standardized Precipitation Index (SPI) and Standardized Precipitation Evapotranspiration Index (SPEI) to assess historical and current drought conditions in South Asia. We evaluated drought severity and extent against satellite-based Normalized Difference Vegetation Index (NDVI) anomalies and the satellite-driven Drought Severity Index (DSI) at 0.05°. We find that the bias-corrected high-resolution data can effectively capture observed drought conditions, as shown by the satellite-based drought estimates. A high-resolution near-real-time dataset can provide valuable information for decision-making at district and sub-basin levels.
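A minimal form of the distribution mapping family of bias correction can be sketched with empirical quantile mapping. The gamma-distribution fits and the real IMD/CCAM data are not reproduced here; the dry-biased "model" below is synthetic.

```python
import numpy as np

# Sketch of empirical distribution (quantile) mapping: each raw model
# value is replaced by the observed value at the same percentile.

def distribution_map(model_hist, obs_hist, model_raw):
    model_sorted = np.sort(model_hist)
    # percentile of each raw value within the historical model distribution
    ranks = np.searchsorted(model_sorted, model_raw, side="right") / len(model_sorted)
    ranks = np.clip(ranks, 0.0, 1.0)
    # read the same percentile off the observed distribution
    return np.quantile(np.sort(obs_hist), ranks)

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=5.0, size=10_000)          # "observed" rainfall
model = 0.5 * rng.gamma(shape=2.0, scale=5.0, size=10_000)  # dry-biased model
corrected = distribution_map(model, obs, model)
# the corrected sample inherits the observed distribution, so its mean
# sits close to the observed mean despite the model's dry bias
```

The sliding-window daily variant discussed in the head abstract would fit such a mapping separately for each calendar day from a window of surrounding days, rather than from one pooled sample as here.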
Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo
2017-01-01
We present quad-constellation (namely, GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly figured out, and the equivalence of TGD and DCB correction models is demonstrated both in theory and in practice. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation SPP performs better after DCB/TGD correction: for GPS-only b1-based SPP, the positioning accuracies are improved by 25.0%, 30.6% and 26.7% in the N, E and U components, respectively, after differential code bias correction, while GPS/GLONASS/BDS b1-based SPP is improved by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies are improved by 2.0%, 2.0% and 0.4% in the N, E and U components, respectively, after correction of the Galileo satellite DCBs. The accuracy of Galileo-only b1-based SPP is improved by about 48.6%, 34.7% and 40.6% with DCB correction in the N, E and U components, respectively. The estimates of multi-constellation PPP are subject to differing degrees of influence.
For multi-constellation SPP, the accuracy of single-frequency positioning is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the combination of the second and third frequencies: for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after DCB correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction. PMID:28300787
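For single-frequency users, the broadcast TGD enters as a simple range correction. A hedged sketch of the standard GPS convention, in which the correction is scaled by γ = (f1/f2)² on the second frequency; the TGD value is invented for illustration.

```python
# Sketch of the GPS broadcast TGD correction for single-frequency code
# observations (IS-GPS-200 convention). The TGD value is an assumption.

C = 299_792_458.0                     # speed of light, m/s
GAMMA_12 = (1575.42 / 1227.60) ** 2   # (f_L1 / f_L2)^2

def tgd_correction_m(tgd_s, frequency="L1"):
    """Range correction in metres implied by a broadcast TGD (seconds)."""
    if frequency == "L1":
        return C * tgd_s
    if frequency == "L2":
        return C * GAMMA_12 * tgd_s
    raise ValueError(frequency)

tgd = 4.656e-9                          # a typical TGD of a few nanoseconds
corr_l1 = tgd_correction_m(tgd, "L1")   # ~1.4 m
corr_l2 = tgd_correction_m(tgd, "L2")   # larger by gamma on L2
```

A few nanoseconds of uncorrected group delay thus maps to metre-level pseudorange error, which is why the percentage improvements quoted above are substantial for single-frequency SPP.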
Tisdall, M Dylan; Reuter, Martin; Qureshi, Abid; Buckner, Randy L; Fischl, Bruce; van der Kouwe, André J W
2016-02-15
Recent work has demonstrated that subject motion produces systematic biases in the metrics computed by widely used morphometry software packages, even when the motion is too small to produce noticeable image artifacts. In the common situation where the control population exhibits different behaviors in the scanner when compared to the experimental population, these systematic measurement biases may produce significant confounds for between-group analyses, leading to erroneous conclusions about group differences. While previous work has shown that prospective motion correction can improve perceived image quality, here we demonstrate that, in healthy subjects performing a variety of directed motions, the use of the volumetric navigator (vNav) prospective motion correction system significantly reduces the motion-induced bias and variance in morphometry.
Image-guided regularization level set evolution for MR image segmentation and bias field correction.
Wang, Lingfeng; Pan, Chunhong
2014-01-01
Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images affected by intensity inhomogeneity. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well handled. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracy as compared with other state-of-the-art approaches.
Number-counts slope estimation in the presence of Poisson noise
NASA Technical Reports Server (NTRS)
Schmitt, Juergen H. M. M.; Maccacaro, Tommaso
1986-01-01
We consider the slope determination of a power-law number-flux relationship in the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on the integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with a signal-to-noise ratio of five or greater are considered. However, if sources with a signal-to-noise ratio below five are included, the derived bias corrections depend sensitively on the shape of the error distribution.
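The flavour of bias-aware slope estimation can be illustrated with the classic maximum-likelihood estimator for a complete flux-limited sample. This is a simplification: it ignores the Poisson measurement errors, background noise and varying limiting sensitivities that the paper actually treats.

```python
import numpy as np

# MLE of the cumulative slope a in N(>S) proportional to S^-a for a
# sample complete above s_min, with the first-order small-sample bias
# correction factor (n-1)/n. Simulated fluxes, not survey data.

def ml_slope(fluxes, s_min):
    fluxes = np.asarray(fluxes)
    n = len(fluxes)
    a_hat = n / np.sum(np.log(fluxes / s_min))
    return a_hat * (n - 1) / n

rng = np.random.default_rng(1)
true_a, s_min = 1.5, 1.0
# classical Pareto sample: P(>S) = (S/s_min)^-a
fluxes = s_min * (1.0 + rng.pareto(true_a, size=50_000))
est = ml_slope(fluxes, s_min)
# est recovers true_a = 1.5 up to sampling error
```

In the photon-limited regime the paper addresses, each flux would itself carry Poisson error, and the likelihood must integrate over that error distribution rather than treat the fluxes as exact.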
Braaf, Boy; Donner, Sabine; Nam, Ahhyun S.; Bouma, Brett E.; Vakoc, Benjamin J.
2018-01-01
Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye tracking scanner laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented. PMID:29552388
Potential of bias correction for downscaling passive microwave and soil moisture data
USDA-ARS?s Scientific Manuscript database
Passive microwave satellites such as SMOS (Soil Moisture and Ocean Salinity) or SMAP (Soil Moisture Active Passive) observe brightness temperature (TB) and retrieve soil moisture at a spatial resolution greater than most hydrological processes. Bias correction is proposed as a simple method to disag...
Keckhut, P; Funatsu, B M; Claud, C; Hauchecorne, A
2015-01-01
Stratospheric temperature series derived from the Advanced Microwave Sounding Unit (AMSU) on board successive NOAA satellites reveal, during periods of overlap, some biases and drifts. Part of the reason for these discrepancies could be atmospheric tides: as the orbits of these satellites drifted, large changes were induced in the actual times of measurement. NOAA 15 and 16, which exhibit a long period of overlap, allow diurnal tides to be derived that can correct such temperature drifts. The characteristics of the derived diurnal tides during summer periods are in good agreement with those calculated with the Global Scale Wave Model, indicating that most of the observed drifts are likely due to the atmospheric tides. Cooling can be biased by a factor of 2 if times of measurement are not considered. When diurnal tides are considered, trends derived from temperature lidar series are in good agreement with AMSU series. Future adjustments of temperature time series based on successive AMSU instruments will require considering corrections associated with the local times of measurement. PMID:26300563
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program DEKFIS (discrete extended Kalman filter/smoother), formulated for aircraft and helicopter state estimation and data consistency, is described. DEKFIS is set up to pre-process raw test data by removing biases, correcting scale factor errors and providing consistency with the aircraft inertial kinematic equations. The program implements an extended Kalman filter/smoother using the Friedland-Duffy formulation.
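The generic mechanism behind such bias removal can be sketched by augmenting a Kalman filter's state with a constant sensor bias. This is a textbook toy (two sensors, one biased, static truth), not the DEKFIS aircraft kinematic formulation or the Friedland-Duffy two-stage filter.

```python
import numpy as np

# Bias-state augmentation: estimate [state, bias of sensor 2] jointly.
# All numbers are invented for the sketch.

def run_filter(z1, z2, r1=0.25, r2=0.25):
    x = np.zeros(2)                # [state, sensor-2 bias]
    P = np.eye(2) * 10.0           # broad initial uncertainty
    H = np.array([[1.0, 0.0],      # sensor 1 sees the state
                  [1.0, 1.0]])     # sensor 2 sees state + bias
    R = np.diag([r1, r2])
    for za, zb in zip(z1, z2):
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([za, zb]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x

rng = np.random.default_rng(2)
true_state, true_bias = 1.0, 0.7
z1 = true_state + 0.5 * rng.standard_normal(2000)
z2 = true_state + true_bias + 0.5 * rng.standard_normal(2000)
estimate = run_filter(z1, z2)
# estimate converges toward [1.0, 0.7]: the bias is identified and removable
```

The unbiased reference (here, the redundant sensor; in DEKFIS, the inertial kinematic constraints) is what makes the bias observable.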
Length bias correction in gene ontology enrichment analysis using logistic regression.
Mi, Gu; Di, Yanming; Emerson, Sarah; Cumbie, Jason S; Chang, Jeff H
2012-01-01
When assessing differential gene expression from RNA sequencing data, commonly used statistical tests tend to have greater power to detect differential expression of genes encoding longer transcripts. This phenomenon, called "length bias", will influence subsequent analyses such as Gene Ontology enrichment analysis. In the presence of length bias, Gene Ontology categories that include longer genes are more likely to be identified as enriched. These categories, however, are not necessarily biologically more relevant. We show that one can effectively adjust for length bias in Gene Ontology analysis by including transcript length as a covariate in a logistic regression model. The logistic regression model makes the statistical issue underlying length bias more transparent: transcript length becomes a confounding factor when it correlates with both the Gene Ontology membership and the significance of the differential expression test. The inclusion of the transcript length as a covariate allows one to investigate the direct correlation between the Gene Ontology membership and the significance of testing differential expression, conditional on the transcript length. We present both real and simulated data examples to show that the logistic regression approach is simple, effective, and flexible.
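The adjustment can be sketched with simulated data and a small IRLS logistic regression fit (numpy only; a real analysis would use a statistics package). Gene length drives differential-expression significance, and a hypothetical GO category correlated with length shows spurious enrichment that shrinks once length enters as a covariate.

```python
import numpy as np

# Logistic regression via Newton/IRLS; X must include an intercept column.
def logit_fit(X, y, iters=30):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1.0 - p))[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(3)
n = 20_000
length = rng.uniform(0.0, 1.0, n)                    # scaled transcript length
p_de = 1.0 / (1.0 + np.exp(-(-2.0 + 2.0 * length)))  # DE prob grows with length
de = (rng.random(n) < p_de).astype(float)            # "significant DE" indicator
member = (rng.random(n) < 0.2 + 0.6 * length).astype(float)  # GO category tracks length

ones = np.ones(n)
naive = logit_fit(np.column_stack([ones, member]), de)[1]
adjusted = logit_fit(np.column_stack([ones, member, length]), de)[1]
# naive > 0: spurious "enrichment" driven entirely by length;
# adjusted sits near 0 once length is conditioned on
```

Here membership has no direct effect on differential expression by construction, so the length-adjusted coefficient is the honest answer.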
NASA Astrophysics Data System (ADS)
Weber, Torsten; Haensler, Andreas; Jacob, Daniela
2017-12-01
Regional climate models (RCMs) have been used to dynamically downscale global climate projections at high spatial and temporal resolution in order to analyse the atmospheric water cycle. In southern Africa, precipitation pattern were strongly affected by the moisture transport from the southeast Atlantic and southwest Indian Ocean and, consequently, by their sea surface temperatures (SSTs). However, global ocean models often have deficiencies in resolving regional to local scale ocean currents, e.g. in ocean areas offshore the South African continent. By downscaling global climate projections using RCMs, the biased SSTs from the global forcing data were introduced to the RCMs and affected the results of regional climate projections. In this work, the impact of the SST bias correction on precipitation, evaporation and moisture transport were analysed over southern Africa. For this analysis, several experiments were conducted with the regional climate model REMO using corrected and uncorrected SSTs. In these experiments, a global MPI-ESM-LR historical simulation was downscaled with the regional climate model REMO to a high spatial resolution of 50 × 50 km2 and of 25 × 25 km2 for southern Africa using a double-nesting method. The results showed a distinct impact of the corrected SST on the moisture transport, the meridional vertical circulation and on the precipitation pattern in southern Africa. Furthermore, it was found that the experiment with the corrected SST led to a reduction of the wet bias over southern Africa and to a better agreement with observations as without SST bias corrections.
Hayes, Alison J; Clarke, Philip M; Lung, Tom Wc
2011-09-25
Many studies have documented the bias in body mass index (BMI) determined from self-reported data on height and weight, but few have examined the change in bias over time. Using data from large, nationally representative population health surveys, we examined the change in bias in height and weight reporting among Australian adults between 1995 and 2008. Our study dataset included 9,635 men and women in 1995 and 9,141 in 2007-2008. We investigated the determinants of the bias and derived correction equations using 2007-2008 data, which can be applied when only self-reported anthropometric data are available. In 1995, self-reported BMI (derived from height and weight) was 1.2 units (men) and 1.4 units (women) lower than measured BMI. In 2007-2008, there was still underreporting, but the amount had declined to 0.6 units (men) and 0.7 units (women) below measured BMI. The major determinants of reporting error in 2007-2008 were age, sex, measured BMI, and education of the respondent. Correction equations for height and weight derived from 2007-2008 data and applied to self-reported data were able to adjust for the bias and were accurate across all age and sex strata. The diminishing reporting bias in BMI in Australia has two implications. First, correction equations derived from 2007-2008 data may not be transferable to earlier self-reported data. Second, predictions of future overweight and obesity in Australia based on trends in self-reported information are likely to be inaccurate, as the change in reporting bias will affect the apparent increase in self-reported obesity prevalence.
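Applying such correction equations is mechanically simple: correct height and weight separately, then recompute BMI. The linear coefficients below are invented placeholders; the paper's actual equations are estimated from the 2007-08 measured data and also involve age, self-reported BMI and education.

```python
# Illustrative correction of self-reported anthropometrics. All
# coefficients are hypothetical, chosen only so that corrected BMI
# exceeds self-reported BMI, the direction of bias the study reports.

def corrected_bmi(height_self_cm, weight_self_kg, sex):
    if sex == "male":
        height = 2.5 + 0.98 * height_self_cm   # reported height shrinks slightly
        weight = 1.2 + 1.00 * weight_self_kg   # reported weight is bumped up
    else:
        height = 2.0 + 0.98 * height_self_cm
        weight = 1.5 + 1.00 * weight_self_kg
    return weight / (height / 100.0) ** 2

bmi_self = 80.0 / 1.80 ** 2                    # BMI straight from self-report
bmi_corr = corrected_bmi(180.0, 80.0, "male")  # corrected upward
```

The study's key caveat applies directly here: because reporting bias shrank between 1995 and 2008, one set of coefficients is only valid for the era whose data produced it.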
NASA Astrophysics Data System (ADS)
Cannon, Alex J.
2018-01-01
Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. 
MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
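A stripped-down sketch of the N-dimensional pdf transfer at MBCn's core: alternate random orthogonal rotations with per-coordinate empirical quantile mapping. The quantile-delta preservation used for projections and all real-data handling are omitted; the bivariate Gaussian "obs" and "model" are synthetic.

```python
import numpy as np

# Each model value is replaced by the obs order statistic of equal rank.
def quantile_map(model_col, obs_col):
    ranks = np.argsort(np.argsort(model_col))
    return np.sort(obs_col)[ranks]

def mbcn_sketch(obs, model, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    x = model.copy()
    d = obs.shape[1]
    for _ in range(n_iter):
        q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random rotation
        xo, xr = obs @ q, x @ q
        xr = np.column_stack([quantile_map(xr[:, j], xo[:, j])
                              for j in range(d)])
        x = xr @ q.T                                       # rotate back
    return x

rng = np.random.default_rng(4)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
obs = rng.multivariate_normal([0.0, 0.0], cov, size=4000)       # dependent "obs"
model = rng.standard_normal((4000, 2)) + np.array([2.0, -1.0])  # biased, independent
corrected = mbcn_sketch(obs, model)
corr_corrected = float(np.corrcoef(corrected.T)[0, 1])
# corrected inherits both the obs means and the 0.8 inter-variable correlation
```

A univariate quantile mapping applied column by column would fix the marginals but leave the two variables independent; the rotations are what transfer the joint dependence.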
High-precision Ru isotopic measurements by multi-collector ICP-MS.
Becker, Harry; Dalpe, Claude; Walker, Richard J
2002-06-01
Ruthenium isotopic data for a pure Aldrich ruthenium nitrate solution obtained using a Nu Plasma multi-collector inductively coupled plasma mass spectrometer (MC-ICP-MS) show excellent agreement (better than 1 epsilon unit = 1 part in 10^4) with data obtained by other techniques for the mass range between 96 and 101 amu. External precisions are at the 0.5-1.7 epsilon level (2sigma). The higher sensitivity of MC-ICP-MS compared to negative thermal ionization mass spectrometry (N-TIMS) is offset by the uncertainties introduced by relatively large mass discrimination and by instabilities in the plasma source-ion extraction region that affect the long-term reproducibility. The large mass bias correction in ICP mass spectrometry demands that particular attention be paid to the choice of normalizing isotopes. Because of its position in the mass spectrum and the large mass bias correction, obtaining precise and accurate abundance data for 104Ru by MC-ICP-MS remains difficult. Internal and external mass bias correction schemes in this mass range may show similar shortcomings if the isotope of interest does not lie within the mass range covered by the masses used for normalization. Analyses of meteorite samples show that if isobaric interferences from Mo are sufficiently large (Ru/Mo < 10^4), uncertainties on the Mo interference correction propagate through the mass bias correction and yield inaccurate results for Ru isotopic compositions. Second-order linear corrections may be used to correct for these inaccuracies, but such results are generally less precise than N-TIMS data.
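The exponential-law mass bias correction routinely used in MC-ICP-MS can be sketched in a few lines: the fractionation exponent is fixed from a normalizing isotope pair of known ratio, then applied to other ratios. The ratios and masses below are illustrative placeholders, not the paper's Ru data.

```python
import math

# Exponential mass bias law: R_true = R_measured * (m_num / m_den)^beta.
# The normalizing pair recovers its own true ratio by construction;
# beta is then reused for nearby isotope pairs.

def beta_from_reference(r_measured, r_true, m_num, m_den):
    """Fractionation exponent from a normalizing isotope pair."""
    return math.log(r_true / r_measured) / math.log(m_num / m_den)

def correct_ratio(r_measured, m_num, m_den, beta):
    return r_measured * (m_num / m_den) ** beta

# invented ~1%-per-amu bias on a 2-amu reference pair
beta = beta_from_reference(r_measured=2.94, r_true=3.00,
                           m_num=101.0, m_den=99.0)
r_corr = correct_ratio(2.94, 101.0, 99.0, beta)
```

The abstract's caveat maps directly onto this sketch: an isotope well outside the mass span of the normalizing pair (such as 104Ru against lighter normalizers) relies on extrapolating beta, which is where the residual inaccuracies arise.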
Time bias corrections to predictions
NASA Technical Reports Server (NTRS)
Wood, Roger; Gibbs, Philip
1993-01-01
The importance of an accurate knowledge of the time bias corrections to predicted orbits to a satellite laser ranging (SLR) observer, especially for low satellites, is highlighted. Sources of time bias values and the optimum strategy for extrapolation are discussed from the viewpoint of the observer wishing to maximize the chances of getting returns from the next pass. What is said may be seen as a commercial encouraging wider and speedier use of existing data centers for mutually beneficial exchange of time bias data.
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
NASA Astrophysics Data System (ADS)
Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien
2018-04-01
The CNES/DLR
NASA Astrophysics Data System (ADS)
Wang, Wenhui; Cao, Changyong; Ignatov, Alex; Li, Zhenglong; Wang, Likun; Zhang, Bin; Blonski, Slawomir; Li, Jun
2017-09-01
The Suomi NPP VIIRS thermal emissive bands (TEB) have been performing very well since data became available on January 20, 2012. The longwave infrared bands at 11 and 12 μm (M15 and M16) are primarily used for sea surface temperature (SST) retrievals. A long-standing anomaly has been observed during the quarterly warm-up-cool-down (WUCD) events: during such events the daytime SST product becomes anomalous, with a warm bias that appears as a spike on the order of 0.2 K in the SST time series. A previous study (Cao et al. 2017) suggested that the VIIRS TEB calibration anomaly during WUCD is due to a flawed theoretical assumption in the calibration equation and proposed an Ltrace method to address the issue. This paper complements that study and presents the operational implementation and validation of the Ltrace method for M15 and M16. The Ltrace method applies a bias correction during WUCD only. It requires a simple code change and a one-time calibration parameter look-up table update. The method was evaluated using collocated CrIS observations and the SST algorithm. Our results indicate that the method can effectively reduce the WUCD calibration anomaly in M15, with a residual bias of 0.02 K after the correction. It works less effectively for M16, with a residual bias of 0.04 K. The Ltrace method may over-correct WUCD calibration biases, especially for M16; however, the residual WUCD biases are small in both bands. Evaluation results using the SST algorithm show that the method can effectively remove the SST anomaly during WUCD events.
North Atlantic climate model bias influence on multiyear predictability
NASA Astrophysics Data System (ADS)
Wu, Y.; Park, T.; Park, W.; Latif, M.
2018-01-01
The influences of North Atlantic biases on multiyear predictability of unforced surface air temperature (SAT) variability are examined in the Kiel Climate Model (KCM). By employing a freshwater flux correction over the North Atlantic to the model, which strongly alleviates both North Atlantic sea surface salinity (SSS) and sea surface temperature (SST) biases, the freshwater flux-corrected integration depicts significantly enhanced multiyear SAT predictability in the North Atlantic sector in comparison to the uncorrected one. The enhanced SAT predictability in the corrected integration is due to a stronger and more variable Atlantic Meridional Overturning Circulation (AMOC) and its enhanced influence on North Atlantic SST. Results obtained from preindustrial control integrations of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) support the findings obtained from the KCM: models with large North Atlantic biases tend to have a weak AMOC influence on SAT and exhibit a smaller SAT predictability over the North Atlantic sector.
Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.
Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping
2016-01-01
Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin-layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin-layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more significant at smaller thicknesses. We proposed a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin-layer samples, and offered a simple and practical correction strategy which is convenient for clinicians to use.
Zhang, Ying; Alonzo, Todd A
2016-11-01
In diagnostic medicine, the volume under the receiver operating characteristic (ROC) surface (VUS) is a commonly used index to quantify the ability of a continuous diagnostic test to discriminate between three disease states. In practice, verification of the true disease status may be performed only for a subset of subjects under study since the verification procedure is invasive, risky, or expensive. The selection for disease examination might depend on the results of the diagnostic test and other clinical characteristics of the patients, which in turn can cause bias in estimates of the VUS. This bias is referred to as verification bias. Existing verification bias correction in three-way ROC analysis focuses on ordinal tests. We propose verification bias-correction methods to construct ROC surface and estimate the VUS for a continuous diagnostic test, based on inverse probability weighting. By applying U-statistics theory, we develop asymptotic properties for the estimator. A Jackknife estimator of variance is also derived. Extensive simulation studies are performed to evaluate the performance of the new estimators in terms of bias correction and variance. The proposed methods are used to assess the ability of a biomarker to accurately identify stages of Alzheimer's disease.
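The inverse-probability-weighting idea can be sketched as a brute-force weighted U-statistic over verified triples. This is a toy under strong assumptions: verification probabilities are known here (in practice they are modelled), and the paper's asymptotics and Jackknife variance are not attempted.

```python
import numpy as np

# IPW estimate of VUS = P(X_1 < X_2 < X_3) for three ordered classes,
# using only verified subjects reweighted by 1/p(verification).

def vus_ipw(x, disease, verified, p_verify):
    w = verified / p_verify
    idx = [np.where((disease == k) & (verified == 1))[0] for k in (1, 2, 3)]
    num = den = 0.0
    for i in idx[0]:
        for j in idx[1]:
            for k in idx[2]:
                wt = w[i] * w[j] * w[k]
                den += wt
                if x[i] < x[j] < x[k]:
                    num += wt
    return num / den

rng = np.random.default_rng(5)
n = 120                                   # subjects per disease class
x = np.concatenate([rng.normal(m, 1.0, n) for m in (0.0, 1.0, 2.0)])
disease = np.repeat([1, 2, 3], n)
p_verify = np.clip(1.0 / (1.0 + np.exp(-x)), 0.1, 0.9)   # test-dependent selection
verified = (rng.random(3 * n) < p_verify).astype(float)
vus_est = vus_ipw(x, disease, verified, p_verify)
# chance level for three classes is 1/6; these normals separate well above it
```

Dropping the weights (complete-case analysis) would over-represent high-scoring subjects, which is exactly the verification bias the estimator removes.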
Differential sea-state bias: A case study using TOPEX/POSEIDON data
NASA Technical Reports Server (NTRS)
Stewart, Robert H.; Devalla, B.
1994-01-01
We used selected data from the NASA altimeter TOPEX/POSEIDON to calculate differences in range measured by the C and Ku-band altimeters when the satellite overflew 5 to 15 m waves late at night. The range difference is due to free electrons in the ionosphere and to errors in sea-state bias. For the selected data the ionospheric influence on Ku range is less than 2 cm. Any difference in range over short horizontal distances is due only to a small along-track variability of the ionosphere and to errors in calculating the differential sea-state bias. We find that there is a barely detectable error in the bias in the geophysical data records. The wave-induced error in the ionospheric correction is less than 0.2% of significant wave height. The equivalent error in differential range is less than 1% of wave height. Errors in the differential sea-state bias calculations appear to be small even for extreme wave heights that greatly exceed the conditions on which the bias is based. The results also improved our confidence in the sea-state bias correction used for calculating the geophysical data records. Any error in the correction must influence Ku and C-band ranges almost equally.
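The differential-range analysis above rests on the ionospheric group delay scaling as 1/f², which is what lets a dual-frequency altimeter separate the ionospheric term from sea-state effects. A minimal sketch of the standard ionosphere-free combination, using the nominal TOPEX/POSEIDON Ku- and C-band frequencies (this is the textbook combination, not the mission's exact processing chain):

```python
# Nominal TOPEX/POSEIDON altimeter frequencies in GHz.
F_KU, F_C = 13.6, 5.3

def iono_free_range(r_ku, r_c):
    """Ionosphere-free range from dual-frequency measurements.
    The ionospheric delay enters each band as d/f**2, so the linear
    combination below cancels it exactly for a common delay constant d."""
    fk2, fc2 = F_KU ** 2, F_C ** 2
    return (fk2 * r_ku - fc2 * r_c) / (fk2 - fc2)
```

Algebraically, (f²_Ku(R + d/f²_Ku) − f²_C(R + d/f²_C)) / (f²_Ku − f²_C) = R, so the delay term drops out regardless of its magnitude.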
Estimation of satellite position, clock and phase bias corrections
NASA Astrophysics Data System (ADS)
Henkel, Patrick; Psychas, Dimitrios; Günther, Christoph; Hugentobler, Urs
2018-05-01
Precise point positioning with integer ambiguity resolution requires precise knowledge of satellite position, clock and phase bias corrections. In this paper, a method for the estimation of these parameters with a global network of reference stations is presented. The method processes uncombined and undifferenced measurements of an arbitrary number of frequencies such that the obtained satellite position, clock and bias corrections can be used for any type of differenced and/or combined measurements. We perform a clustering of reference stations. The clustering enables a common satellite visibility within each cluster and an efficient fixing of the double difference ambiguities within each cluster. Additionally, the double difference ambiguities between the reference stations of different clusters are fixed. We use an integer decorrelation for ambiguity fixing in dense global networks. The performance of the proposed method is analysed with both simulated Galileo measurements on E1 and E5a and real GPS measurements of the IGS network. We defined 16 clusters and obtained satellite position, clock and phase bias corrections with a precision of better than 2 cm.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-01
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455
Adaptable gene-specific dye bias correction for two-channel DNA microarrays.
Margaritis, Thanasis; Lijnzaad, Philip; van Leenen, Dik; Bouwmeester, Diane; Kemmeren, Patrick; van Hooff, Sander R; Holstege, Frank C P
2009-01-01
DNA microarray technology is a powerful tool for monitoring gene expression or for finding the location of DNA-bound proteins. DNA microarrays can suffer from gene-specific dye bias (GSDB), causing some probes to be affected more by the dye than by the sample. This results in large measurement errors, which vary considerably for different probes and also across different hybridizations. GSDB is not corrected by conventional normalization and has been difficult to address systematically because of its variance. We show that GSDB is influenced by label incorporation efficiency, explaining the variation of GSDB across different hybridizations. A correction method (Gene- And Slide-Specific Correction, GASSCO) is presented, whereby sequence-specific corrections are modulated by the overall bias of individual hybridizations. GASSCO outperforms earlier methods and works well on a variety of publicly available datasets covering a range of platforms, organisms and applications, including ChIP on chip. A sequence-based model is also presented, which predicts which probes will suffer most from GSDB, useful for microarray probe design and correction of individual hybridizations. Software implementing the method is publicly available.
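The core idea as described, a fixed per-probe (sequence-specific) dye-bias coefficient modulated by a per-slide overall bias, can be sketched in a toy form. Everything here is my reading of the abstract: the function name, the use of a simple regression to estimate the slide-level scale, and the subtraction step are assumptions, and the published GASSCO model is considerably richer.

```python
import numpy as np

def slide_corrected_ratios(M, probe_bias):
    """Toy gene-and-slide-specific correction: estimate one slide's
    overall dye-bias scale by regressing its log-ratios M on known
    per-probe bias coefficients, then subtract the fitted dye-bias
    component. np.polyfit returns (slope, intercept) for degree 1."""
    scale, _offset = np.polyfit(probe_bias, M, 1)
    return M - scale * probe_bias
```

Because the slide-level scale is re-estimated per hybridization, the same probe coefficients can correct slides whose overall bias differs, which matches the abstract's link between GSDB variation and label incorporation efficiency.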
Process-conditioned bias correction for seasonal forecasting: a case-study with ENSO in Peru
NASA Astrophysics Data System (ADS)
Manzanas, R.; Gutiérrez, J. M.
2018-05-01
This work assesses the suitability of a first simple attempt for process-conditioned bias correction in the context of seasonal forecasting. To do this, we focus on the northwestern part of Peru and bias correct 1- and 4-month lead seasonal predictions of boreal winter (DJF) precipitation from the ECMWF System4 forecasting system for the period 1981-2010. In order to include information about the underlying large-scale circulation which may help to discriminate between precipitation affected by different processes, we introduce here an empirical quantile-quantile mapping method which runs conditioned on the state of the Southern Oscillation Index (SOI), which is accurately predicted by System4 and is known to affect the local climate. Beyond the reduction of model biases, our results show that the SOI-conditioned method yields better ROC skill scores and reliability than the raw model output over the entire region of study, whereas the standard unconditioned implementation provides no added value for any of these metrics. This suggests that conditioning the bias correction on simple but well-simulated large-scale processes relevant to the local climate may be a suitable approach for seasonal forecasting. Yet, further research on the suitability of the application of similar approaches to the one considered here for other regions, seasons and/or variables is needed.
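The SOI-conditioned method can be sketched compactly: fit a separate empirical quantile mapping for each state of the conditioning index, then correct a new model value with the mapping for its state. The array names and discrete phase labels below are illustrative assumptions; the paper conditions the empirical quantile-quantile mapping on the SOI state, but its exact implementation details are not given in the abstract.

```python
import numpy as np

def qq_map(model_sample, obs_sample, x):
    """Empirical quantile mapping: locate x's quantile in the model
    sample and return the matching quantile of the observations."""
    q = np.searchsorted(np.sort(model_sample), x) / len(model_sample)
    return np.quantile(obs_sample, np.clip(q, 0.0, 1.0))

def soi_conditioned_qq(model, obs, soi_phase, x_new, phase_new):
    """Process-conditioned correction: restrict the training sample to
    days whose SOI phase matches the forecast day's phase, so rainfall
    driven by different large-scale states gets its own mapping."""
    sel = soi_phase == phase_new
    return qq_map(model[sel], obs[sel], x_new)
```

The unconditioned variant is recovered by passing a constant `soi_phase`, which is the baseline the paper finds adds no value.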
Measurement of the $B^-$ lifetime using a simulation free approach for trigger bias correction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaltonen, T.; /Helsinki Inst. of Phys.; Adelman, J.
2010-04-01
The collection of a large number of B hadron decays to hadronic final states at the CDF II detector is possible due to the presence of a trigger that selects events based on track impact parameters. However, the nature of the selection requirements of the trigger introduces a large bias in the observed proper decay time distribution. A lifetime measurement must correct for this bias and the conventional approach has been to use a Monte Carlo simulation. The leading sources of systematic uncertainty in the conventional approach are due to differences between the data and the Monte Carlo simulation. In this paper they present an analytic method for bias correction without using simulation, thereby removing any uncertainty between data and simulation. This method is presented in the form of a measurement of the lifetime of the B⁻ using the mode B⁻ → D⁰π⁻. The B⁻ lifetime is measured as τ(B⁻) = 1.663 ± 0.023 ± 0.015 ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty in comparison to methods that use simulation to correct for the trigger bias.
Baruzzini, Matthew L.; Hall, Howard L.; Spencer, Khalil J.; ...
2018-04-22
Investigations of the isotope fractionation behaviors of plutonium and uranium reference standards were conducted employing platinum and rhenium (Pt/Re) porous ion emitter (PIE) sources, a relatively new thermal ionization mass spectrometry (TIMS) ion source strategy. The suitability of commonly employed, empirically developed mass bias correction laws (i.e., the Linear, Power, and Russell's laws) for correcting such isotope ratio data was also determined. Corrected plutonium isotope ratio data, regardless of mass bias correction strategy, were statistically identical to the certificate values; however, the isotope fractionation behavior of plutonium under the adopted experimental conditions was determined to be best described by the Power law. Finally, the fractionation behavior of uranium, under the analytical conditions described herein, is also most suitably modeled using the Power law, though Russell's and the Linear law for mass bias correction rendered results that were identical, within uncertainty, to the certificate value.
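The three empirical laws named here have simple closed forms. Sign and parameter conventions vary between laboratories, so take the functions below as one common parameterization (per-amu fractionation `eps`, mass difference `dm`, isotope masses `m_num`/`m_den`) rather than the authors' exact implementation:

```python
def linear_law(r_meas, dm, eps):
    """Linear law: R_true = R_meas * (1 + eps * dm), with dm the mass
    difference in amu and eps the fractionation per amu."""
    return r_meas * (1.0 + eps * dm)

def power_law(r_meas, dm, eps):
    """Power law: R_true = R_meas * (1 + eps)**dm."""
    return r_meas * (1.0 + eps) ** dm

def russell_law(r_meas, m_num, m_den, beta):
    """Russell's (exponential) law: R_true = R_meas * (m_num/m_den)**beta,
    using the exact isotope masses rather than integer mass numbers."""
    return r_meas * (m_num / m_den) ** beta
```

For the small per-amu fractionations typical of TIMS (eps on the order of 0.1% per amu), the three laws agree closely, which is consistent with the abstract's finding that all of them return certificate-compatible plutonium ratios.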
NASA Astrophysics Data System (ADS)
Villiger, Arturo; Schaer, Stefan; Dach, Rolf; Prange, Lars; Jäggi, Adrian
2017-04-01
It is common to handle code biases in the Global Navigation Satellite System (GNSS) data analysis as conventional differential code biases (DCBs): P1-C1, P1-P2, and P2-C2. Due to the increasing number of signals and systems in conjunction with various tracking modes for the different signals (as defined in RINEX3 format), the number of DCBs would increase drastically and the bookkeeping becomes almost unbearable. The Center for Orbit Determination in Europe (CODE) has thus changed its processing scheme to observable-specific signal biases (OSB). This means that for each observation involved all related satellite and receiver biases are considered. The OSB contributions from various ionosphere analyses (geometry-free linear combination) using different observables and frequencies and from clock analyses (ionosphere-free linear combination) are then combined on normal equation level. By this, one consistent set of OSB values per satellite and receiver can be obtained that contains all information needed for GNSS-related processing. This advanced procedure of code bias handling is now also applied to the IGS (International GNSS Service) MGEX (Multi-GNSS Experiment) procedure at CODE. Results for the biases from the legacy IGS solution as well as the CODE MGEX processing (considering GPS, GLONASS, Galileo, BeiDou, and QZSS) are presented. The consistency with the traditional method is confirmed and the new results are discussed regarding the long-term stability. When processing code data, it is essential to know the true observable types in order to correct for the associated biases. CODE has been verifying the receiver tracking technologies for GPS based on estimated DCB multipliers (for the RINEX 2 case). With the change to OSB, the original verification approach was extended to search for the best fitting observable types based on known OSB values. In essence, a multiplier parameter is estimated for each involved GNSS observable type. 
This implies that, for receivers tracking a combination of signals, even the factors of these combinations can be recovered. Verification of the observable types is crucial for identifying the correct observable types in RINEX 2 data (which, unlike RINEX 3, does not record the signal modulation). Correct information on the observable types used is essential for precise point positioning (PPP) applications and GNSS ambiguity resolution. Multi-GNSS OSBs and verified receiver tracking modes are essential for obtaining the best possible multi-GNSS solutions for geodynamic purposes and other applications.
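The relationship between the legacy differential code biases and the observable-specific biases described above is a plain difference, which is why one consistent OSB set per satellite and receiver suffices to reproduce any DCB. A one-line sketch (function name mine):

```python
def dcb(osb_a: float, osb_b: float) -> float:
    """A differential code bias between two signals is the difference
    of their observable-specific biases, e.g.
    DCB(P1-P2) = OSB(P1) - OSB(P2). Units: same as the OSBs (e.g. ns)."""
    return osb_a - osb_b
```

This is also why the bookkeeping shrinks: with N tracked observable types there are N OSBs per satellite, while the number of possible pairwise DCBs grows quadratically.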
Bias atlases for segmentation-based PET attenuation correction using PET-CT and MR.
Ouyang, Jinsong; Chun, Se Young; Petibon, Yoann; Bonab, Ali A; Alpert, Nathaniel; Fakhri, Georges El
2013-10-01
The aim of this study was to obtain voxel-wise PET accuracy and precision when using tissue segmentation for attenuation correction. We applied multiple thresholds to the CTs of 23 patients to classify tissues. For six of the 23 patients, MR images were also acquired. The MR fat/in-phase ratio images were used for fat segmentation. Segmented tissue classes were used to create attenuation maps, which were used for attenuation correction in PET reconstruction. PET bias images were then computed using the PET reconstructed with the original CT as the reference. We registered the CTs for all the patients and transformed the corresponding bias images accordingly. We then obtained the mean and standard deviation bias atlases using all the registered bias images. Our CT-based study shows that four-class segmentation (air, lungs, fat, other tissues), which is available on most PET-MR scanners, yields 15.1%, 4.1%, 6.6%, and 12.9% RMSE bias in lungs, fat, non-fat soft tissues, and bones, respectively. Accurate fat identification is achievable using fat/in-phase MR images. Furthermore, we have found that three-class segmentation (air, lungs, other tissues) yields less than 5% standard deviation of bias within the heart, liver, and kidneys. This implies that three-class segmentation can be sufficient to achieve small variation of bias for imaging these three organs. Finally, we have found that inter- and intra-patient lung density variations contribute almost equally to the overall standard deviation of bias within the lungs.
Bias in Examination Test Banks that Accompany Cost Accounting Texts.
ERIC Educational Resources Information Center
Clute, Ronald C.; McGrail, George R.
1989-01-01
Eight test banks that accompany cost accounting textbooks were evaluated for the presence of bias in the distribution of correct responses. All but one were found to have considerable bias, and three of eight were found to have significant choice bias. (SK)
Variance analysis of forecasted streamflow maxima in a wet temperate climate
NASA Astrophysics Data System (ADS)
Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.
2018-05-01
Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers strictly adhere to rules from extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
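The relative-change (delta-change) correction the study builds on can be stated in one line: the observed statistic is scaled by the forecast-to-control ratio so that any constant multiplicative model bias cancels. A generic sketch, not the study's full workflow:

```python
def delta_change(obs_stat: float, forecast_stat: float, control_stat: float) -> float:
    """Project an observed streamflow statistic (e.g. a 20-year maximum)
    using the model's relative change. A constant multiplicative bias b
    cancels because (b * forecast) / (b * control) = forecast / control."""
    return obs_stat * (forecast_stat / control_stat)
```

The abstract's point is that while this cancellation removes systematic variance for mean flows, the variance of the corrected *maxima* still depends on the climate model factors and the extreme value fit.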
He, Hua; McDermott, Michael P.
2012-01-01
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
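The propensity-score stratification proposed here refines the simpler inverse-probability-weighting fix sketched below, in which each verified subject carries weight 1/propensity; stratification instead pools subjects with homogeneous propensity, which is what buys the robustness to model misspecification. The function name and data layout are mine (disease status for unverified subjects should be passed as 0; those entries get zero weight).

```python
import numpy as np

def ipw_sens_spec(test_pos, disease, verified, p_verify):
    """Verification-bias-corrected sensitivity and specificity.
    Each verified subject is weighted by 1 / P(verified | test, covariates);
    unverified subjects get weight 0, their mass being carried by
    similar verified subjects under the MAR assumption."""
    w = np.zeros(len(p_verify))
    w[verified] = 1.0 / p_verify[verified]
    sens = np.sum(w * test_pos * disease) / np.sum(w * disease)
    spec = np.sum(w * (1 - test_pos) * (1 - disease)) / np.sum(w * (1 - disease))
    return sens, spec
```

With complete verification the weights are all 1 and the formulas reduce to the usual empirical sensitivity and specificity.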
Paech, S.J.; Mecikalski, J.R.; Sumner, D.M.; Pathak, C.S.; Wu, Q.; Islam, S.; Sangoyomi, T.
2009-01-01
Estimates of incoming solar radiation (insolation) from Geostationary Operational Environmental Satellite observations have been produced for the state of Florida over a 10-year period (1995-2004). These insolation estimates were developed into well-calibrated half-hourly and daily integrated solar insolation fields over the state at 2 km resolution, in addition to a 2-week running minimum surface albedo product. Model results of the daily integrated insolation were compared with ground-based pyranometers, and as a result, the entire dataset was calibrated. This calibration was accomplished through a three-step process: (1) comparison with ground-based pyranometer measurements on clear (noncloudy) reference days, (2) correcting for a bias related to cloudiness, and (3) deriving a monthly bias correction factor. Precalibration results indicated good model performance, with a station-averaged model error of 2.2 MJ m⁻² day⁻¹ (13%). Calibration reduced errors to 1.7 MJ m⁻² day⁻¹ (10%), and also removed temporal-related, seasonal-related, and satellite sensor-related biases. The calibrated insolation dataset will subsequently be used by state of Florida Water Management Districts to produce statewide, 2-km resolution maps of estimated daily reference and potential evapotranspiration for water management-related activities. © 2009 American Water Resources Association.
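Step (3) of the calibration, a monthly bias correction factor, is a common pattern that can be sketched as a per-month ratio of observed to modeled means applied multiplicatively. The exact formulation is not given in the abstract, so the functions below are a hedged reading, not the paper's implementation:

```python
import numpy as np

def monthly_bias_factors(model, obs, months):
    """One multiplicative factor per calendar month: the ratio of mean
    observed to mean modeled insolation over that month's days
    (assumed form; the paper's exact factor definition is not stated)."""
    return {m: obs[months == m].mean() / model[months == m].mean()
            for m in np.unique(months)}

def apply_monthly_factors(model, months, factors):
    """Scale each daily modeled value by its month's correction factor."""
    return np.array([v * factors[m] for v, m in zip(model, months)])
```

Applied after the clear-day and cloudiness corrections, such factors remove the residual seasonal bias the abstract reports.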
LD Score Regression Distinguishes Confounding from Polygenicity in Genome-Wide Association Studies
Bulik-Sullivan, Brendan K.; Loh, Po-Ru; Finucane, Hilary; Ripke, Stephan; Yang, Jian; Patterson, Nick; Daly, Mark J.; Price, Alkes L.; Neale, Benjamin M.
2015-01-01
Both polygenicity (i.e., many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield an inflated distribution of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from true polygenic signal and bias. We have developed an approach, LD Score regression, that quantifies the contribution of each by examining the relationship between test statistics and linkage disequilibrium (LD). The LD Score regression intercept can be used to estimate a more powerful and accurate correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size. PMID:25642630
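The regression at the heart of the method can be sketched directly: per-SNP association chi-squared statistics are regressed on LD scores, and the intercept (rather than the mean statistic, as in genomic control) estimates the confounding component. This is a deliberately minimal unweighted sketch; the published method uses heteroskedasticity-aware weights and a block jackknife for standard errors.

```python
import numpy as np

def ld_score_regression(chi2, ld_scores):
    """Fit chi2 ≈ intercept + slope * LD score by least squares.
    Under the LD Score regression model, intercept - 1 reflects
    confounding (cryptic relatedness, stratification), while the
    slope carries the polygenic signal. np.polyfit returns
    coefficients highest degree first: (slope, intercept)."""
    slope, intercept = np.polyfit(ld_scores, chi2, 1)
    return intercept, slope
```

Dividing all test statistics by the fitted intercept then gives a correction factor analogous to, but less conservative than, the genomic control lambda.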
Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A
2017-02-01
The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata, in correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After data collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples was assessed using variables from administrative databases available for the whole sample, not subject to differential measurement errors. Corrected prevalences were estimated by a reweighting technique, first using the initial survey alone and then the initial and complementary surveys combined, under several assumptions regarding the missing data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents, with a protocol that aims to maximize the response rate, is useful.
The contribution of sociodemographic variables in correcting for nonresponse bias is important whereas the additional contribution of paradata in correcting for nonresponse bias is questionable. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Lubow, Bruce C; Ransom, Jason I
2016-01-01
Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models.
Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs.
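The double-observer half of the hybrid method can be illustrated with the classic two-observer mark-recapture estimator: each observer's detection probability is estimated from the other's sightings, and the groups missed by both are inferred. This sketch omits the sightability covariates that the validated method models, and the function name is mine:

```python
def double_observer_estimate(n1, n2, both):
    """Lincoln-Petersen-style double-observer abundance estimate.
    n1, n2: groups detected by observers 1 and 2; both: detected by both.
    Assumes independent detection and no unmodeled heterogeneity,
    exactly the assumptions a perturbing helicopter can break."""
    p1 = both / n2                      # P(obs1 detects | obs2 detected)
    p2 = both / n1
    seen = n1 + n2 - both               # groups seen by at least one observer
    p_any = 1.0 - (1.0 - p1) * (1.0 - p2)
    return seen / p_any
```

For example, if observer 1 sees 80 groups, observer 2 sees 60, and 48 are shared, the implied detection probabilities are 0.8 and 0.6 and the estimate inflates the 92 observed groups accordingly.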
Altitude exposure in sports: the Athlete Biological Passport standpoint.
Sanchis-Gomar, Fabian; Pareja-Galeano, Helios; Brioche, Thomas; Martinez-Bello, Vladimir; Lippi, Giuseppe
2014-03-01
The Athlete Biological Passport (ABP) is principally founded on monitoring an athlete's biological variables over time, to identify abnormal biases on a longitudinal basis. Several factors are known to influence the results of these markers. However, the manner in which the altitude factor is taken into account still needs to be standardized. Causal relationships between haematological variables should be correctly integrated into ABP software. In particular, modifications of haematological parameters during and after exposure to different altitudes/hypoxic protocols need to be properly included within detection models. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Yang, P.; Fekete, B. M.; Rosenzweig, B.; Lengyel, F.; Vorosmarty, C. J.
2012-12-01
Atmospheric dynamics are essential inputs to Regional-scale Earth System Models (RESMs). Variables including surface air temperature, total precipitation, solar radiation, wind speed and humidity must be downscaled from coarse-resolution, global General Circulation Models (GCMs) to the high temporal and spatial resolution required for regional modeling. However, this downscaling procedure can be challenging due to the need to correct for bias from the GCM and to capture the spatiotemporal heterogeneity of the regional dynamics. In this study, the results obtained using several downscaling techniques and observational datasets were compared for a RESM of the Northeast Corridor of the United States. Previous efforts have enhanced GCM outputs through bias correction using novel techniques. For example, the Potsdam Institute for Climate Impact Research developed a series of bias-corrected GCMs for the next generation of climate change scenarios (Schiermeier, 2012; Moss et al., 2010). Techniques to better represent the heterogeneity of climate variables have also been improved using statistical approaches (Maurer, 2008; Abatzoglou, 2011). For this study, four downscaling approaches to transform bias-corrected HADGEM2-ES model output (daily at 0.5° × 0.5°) to the 3′ × 3′ (longitude × latitude) daily and monthly resolution required for the Northeast RESM were compared: 1) Bilinear Interpolation, 2) Daily bias-corrected spatial downscaling (D-BCSD) with gridded meteorological datasets (developed by Abatzoglou, 2011), 3) Monthly bias-corrected spatial disaggregation (M-BCSD) with CRU (Climatic Research Unit) data, and 4) Dynamic Downscaling based on the Weather Research and Forecasting (WRF) model. Spatio-temporal analysis of the variability in precipitation was conducted over the study domain. The downscaled variables from the different methods were validated against observational datasets to assess the downscaled climate model outputs. 
We also assessed the effects of using the different approaches to downscale atmospheric variables (specifically air temperature and precipitation) as inputs to the Water Balance Model (WBMPlus; Vorosmarty et al., 1998; Wisser et al., 2008) for simulation of daily discharge and monthly stream flow in the Northeast US over a 100-year period in the 21st century. Statistical techniques, especially monthly bias-corrected spatial disaggregation (M-BCSD), showed a potential advantage over the other methods for the daily discharge and monthly stream flow simulation. However, Dynamic Downscaling will provide important complements to the statistical approaches tested.
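Of the four approaches compared, bilinear interpolation is the simplest to state concretely. The sketch below is a generic implementation in index-space coordinates; the function name and the toy grid are illustrative, not from the study:

```python
def bilinear(grid, x, y):
    """Bilinear interpolation of a 2-D field at fractional index (x, y).

    Expects 0 <= x < ncols - 1 and 0 <= y < nrows - 1 so that all four
    surrounding grid points exist.
    """
    x0, y0 = int(x), int(y)
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0  # fractional offsets within the cell
    return (grid[y0][x0] * (1 - fx) * (1 - fy)
            + grid[y0][x1] * fx * (1 - fy)
            + grid[y1][x0] * (1 - fx) * fy
            + grid[y1][x1] * fx * fy)
```

Because it only blends neighbouring coarse cells, bilinear interpolation can add no sub-grid variability and remove no bias, which is why it serves as the baseline against the BCSD and dynamic downscaling approaches.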
Factors controlling the Indian summer monsoon onset in a coupled model
NASA Astrophysics Data System (ADS)
Prodhomme, Chloé; Terray, Pascal; Masson, Sébastien; Izumo, Takeshi
2013-04-01
The observed Indian Summer Monsoon (ISM) onset occurs between 30 May and 2 June, with a standard deviation of 8 to 9 days, depending on the estimate. The relationship between interannual variability of the ISM onset and SSTs (Sea Surface Temperature) remains controversial. The role of Indian Ocean SSTs remains unclear: some studies have shown a driving role, while others suggest a passive relation between Indian Ocean SSTs and the ISM. The intrinsic impact of ENSO (El Nino-Southern Oscillation) is also difficult to estimate from observations alone. Finally, the predictability of the ISM onset remains drastically limited by the inability of both forced and coupled models to reproduce a realistic onset date. To measure the ISM onset objectively, different methods have been developed based on rainfall or dynamical indices (Ananthakrishnan and Soman, 1988; Wang and Ho, 2002; Joseph et al., 2006). In this study we use the Tropospheric Temperature Gradient (TTG), the difference between the tropospheric temperature in a northern and a southern box over the Indian region (Xavier et al., 2007). This index measures the dynamical strength of the monsoon and provides a stable and precise onset date consistent with rainfall estimates. In the SINTEX-F2 coupled model, the ISM onset measured with the TTG is delayed by approximately 10 days, whereas it occurs about 6 days early in the atmosphere-only (ECHAM) model. The 16-day lag between the atmosphere-only and coupled runs suggests a crucial role of the coupling, and especially of SST biases, in the delayed onset. With the help of several sensitivity experiments, this study tries to identify the key regions influencing the ISM onset. Many studies have shown a strong impact of Arabian Sea and Indian Ocean SSTs on the ISM onset. Nevertheless, correcting the SSTs, based on AVHRR, in the tropical Indian Ocean only slightly corrects the delayed onset in the coupled model, which suggests an impact of SSTs in other regions on the ISM onset. 
During May and June, the main tropical SST biases in the coupled model are a strong warm bias in the Atlantic Ocean and a warm bias in the tropical Pacific Ocean, except along the equator around 140°W-100°W, where there is a cold tongue bias. The correction of the warm bias in the Atlantic Ocean slightly improves the onset date. Conversely, the correction of SST biases in the tropical and equatorial Pacific Ocean advances the onset date by 12 and 10 days, respectively, compared to the control coupled run. This result suggests that, at least in this model, the ISM onset is mainly controlled by Pacific Ocean SSTs. Even if ENSO has an impact on the onset date, it does not explain the delay, which is related to the biased SST mean state in the Pacific Ocean.
MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard
2016-01-01
Purpose: Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time-averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods: Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results: In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions: While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
Medrano-Gracia, Pau; Cowan, Brett R; Bluemke, David A; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Suinesiaputra, Avan; Young, Alistair A
2013-09-13
Cardiovascular imaging studies generate a wealth of data which is typically used only for individual study endpoints. By pooling data from multiple sources, quantitative comparisons can be made of regional wall motion abnormalities between different cohorts, enabling reuse of valuable data. Atlas-based analysis provides precise quantification of shape and motion differences between disease groups and normal subjects. However, subtle shape differences may arise due to differences in imaging protocol between studies. A mathematical model describing regional wall motion and shape was used to establish a coordinate system registered to the cardiac anatomy. The atlas was applied to data contributed to the Cardiac Atlas Project from two independent studies which used different imaging protocols: steady state free precession (SSFP) and gradient recalled echo (GRE) cardiovascular magnetic resonance (CMR). Shape bias due to imaging protocol was corrected using an atlas-based transformation which was generated from a set of 46 volunteers who were imaged with both protocols. Shape bias between GRE and SSFP was regionally variable, and was effectively removed using the atlas-based transformation. Global mass and volume bias was also corrected by this method. Regional shape differences between cohorts were more statistically significant after removing regional artifacts due to imaging protocol bias. Bias arising from imaging protocol can be both global and regional in nature, and is effectively corrected using an atlas-based transformation, enabling direct comparison of regional wall motion abnormalities between cohorts acquired in separate studies.
Effects of diurnal adjustment on biases and trends derived from inter-sensor calibrated AMSU-A data
NASA Astrophysics Data System (ADS)
Chen, H.; Zou, X.; Qin, Z.
2018-03-01
Measurements of brightness temperatures from Advanced Microwave Sounding Unit-A (AMSU-A) temperature sounding instruments onboard NOAA Polar-orbiting Operational Environmental Satellites (POES) have been extensively used for studying atmospheric temperature trends over the past several decades. Inter-sensor biases, orbital drifts and diurnal variations of atmospheric and surface temperatures must be considered before using a merged long-term time series of AMSU-A measurements from NOAA-15, -18, -19 and MetOp-A. We study the impacts of the orbital drift and orbital differences of local equator crossing times (LECTs) on temperature trends derivable from AMSU-A using near-nadir observations from NOAA-15, NOAA-18, NOAA-19, and MetOp-A during 1998-2014 over the Amazon rainforest. The double-difference method is first applied to estimate inter-sensor biases between any two satellites during their overlapping time period. The inter-calibrated observations are then used to generate a monthly mean diurnal cycle of brightness temperature for each AMSU-A channel. A diurnal correction is finally applied to each channel to obtain AMSU-A data valid at the same local time. The impacts of the inter-sensor bias correction and the diurnal correction on the AMSU-A-derived long-term atmospheric temperature trends are separately quantified and compared with trends derived from the original data. It is shown that the orbital drift and differences of LECT among the different POES satellites induce a large uncertainty in AMSU-A-derived long-term warming/cooling trends. After applying an inter-sensor bias correction and a diurnal correction, the warming trends at different local times, which are approximately the same, are roughly half the trends derived without these corrections.
Kiang, Tony K L; Ensom, Mary H H
2016-04-01
In settings where free phenytoin concentrations are not available, the Sheiner-Tozer equation-Corrected total phenytoin concentration = Observed total phenytoin concentration/[(0.2 × Albumin) + 0.1]; phenytoin in µg/mL, albumin in g/dL-and its derivative equations are commonly used to correct for altered phenytoin binding to albumin. The objective of this article was to provide a comprehensive and updated review on the predictive performance of these equations in various patient populations. A literature search of PubMed, EMBASE, and Google Scholar was conducted using combinations of the following terms: Sheiner-Tozer, Winter-Tozer, phenytoin, predictive equation, precision, bias, free fraction. All English-language articles up to November 2015 (excluding abstracts) were evaluated. This review shows the Sheiner-Tozer equation to be biased and imprecise in various critical care, head trauma, and general neurology patient populations. Factors contributing to bias and imprecision include the following: albumin concentration, free phenytoin assay temperature, experimental conditions (eg, timing of concentration sampling, steady-state dosing conditions), renal function, age, concomitant medications, and patient type. Although derivative equations using varying albumin coefficients have improved accuracy (without much improvement in precision) in intensive care and elderly patients, these equations still require further validation. Further experiments are also needed to yield derivative equations with good predictive performance in all populations as well as to validate the equations' impact on actual patient efficacy and toxicity outcomes. More complex, multivariate predictive equations may be required to capture all variables that can potentially affect phenytoin pharmacokinetics and clinical therapeutic outcomes. © The Author(s) 2016.
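The Sheiner-Tozer equation quoted above is a one-line computation; a direct transcription (the function name is ours):

```python
def sheiner_tozer_corrected(total_phenytoin_ug_ml, albumin_g_dl):
    """Corrected total phenytoin = observed total / ((0.2 x albumin) + 0.1),
    with phenytoin in ug/mL and albumin in g/dL, as given in the abstract."""
    return total_phenytoin_ug_ml / ((0.2 * albumin_g_dl) + 0.1)
```

At a normal albumin of 4.5 g/dL the denominator is 1.0 and the observed value is unchanged; at 2.0 g/dL an observed 10 µg/mL corrects to 20 µg/mL. The review's point is that this simple form is biased and imprecise in several populations, and derivative equations adjust the 0.2 albumin coefficient.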
Improving RNA-Seq expression estimates by correcting for fragment bias
2011-01-01
The biochemistry of RNA-Seq library preparation results in cDNA fragments that are not uniformly distributed within the transcripts they represent. This non-uniformity must be accounted for when estimating expression levels, and we show how to perform the needed corrections using a likelihood based approach. We find improvements in expression estimates as measured by correlation with independently performed qRT-PCR and show that correction of bias leads to improved replicability of results across libraries and sequencing technologies. PMID:21410973
Hagen, Nils T.
2008-01-01
Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement. PMID:19107201
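Harmonic counting as described (credit allocated by authorship rank and number of coauthors) is commonly written as share_i = (1/i) / Σ_{k=1..N} 1/k; a small sketch under that assumption:

```python
def harmonic_credit(n_authors):
    """Harmonic credit shares: the i-th ranked author receives 1/i,
    normalized so the shares across the byline sum to one."""
    denom = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / i) / denom for i in range(1, n_authors + 1)]
```

For three coauthors the shares are 6/11, 3/11 and 2/11, so exactly one publication credit is distributed: this avoids both the inflationary bias of issuing full credit to everyone and the equalizing bias of dividing one credit evenly.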
Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.
2016-01-01
Summary Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals. PMID:27789915
ERIC Educational Resources Information Center
Pfaffel, Andreas; Schober, Barbara; Spiel, Christiane
2016-01-01
A common methodological problem in the evaluation of the predictive validity of selection methods, e.g. in educational and employment selection, is that the correlation between predictor and criterion is biased. Thorndike's (1949) formulas are commonly used to correct for this biased correlation. An alternative approach is to view the selection…
Peak-locking centroid bias in Shack-Hartmann wavefront sensing
NASA Astrophysics Data System (ADS)
Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.
2018-05-01
Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended-scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions proposed. But these solutions only allow partial bias corrections. To date, no systematic study of the bias error has been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best-performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ~7, to values of ≲0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
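Of the five peak-finding algorithms considered, the centre of gravity is the simplest to sketch. The version below is a bare intensity-weighted mean, without the thresholding or windowing a practical sensor would add; under coarse sampling its estimate is pulled toward pixel centres, which is the peak-locking bias the article quantifies:

```python
import numpy as np

def cog_centroid(img):
    """Centre-of-gravity spot centroid; returns (x, y) in pixel units."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]  # row/col index grids
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total
```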
Correction of bias in belt transect studies of immotile objects
Anderson, D.R.; Pospahala, R.S.
1970-01-01
Unless a correction is made, population estimates derived from a sample of belt transects will be biased if a fraction of the individuals on the sample transects is not counted. An approach useful for correcting this bias when sampling immotile populations using transects of a fixed width is presented. The method assumes that a searcher's ability to find objects near the center of the transect is nearly perfect. The method uses a mathematical equation, estimated from the data, to represent the searcher's inability to find all objects at increasing distances from the center of the transect. An example of the analysis of data, formation of the equation, and application is presented using waterfowl nesting data collected in Colorado.
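The abstract does not give the functional form of the fitted visibility equation; a half-normal falloff g(x) = exp(-x²/2σ²) with perfect detection on the centerline (g(0) = 1) is a common choice in later distance-sampling work, and the sketch below assumes it (all names ours):

```python
import math

def effective_strip_halfwidth(sigma):
    """Integral of the half-normal detection function g(x) = exp(-x^2 / (2 sigma^2))
    from the centerline outward: the half-width of a strip that would contain
    the same expected count under perfect detection."""
    return sigma * math.sqrt(math.pi / 2.0)

def density_estimate(n_detected, transect_length, sigma):
    """Bias-corrected density: detections divided by the effective area
    searched (both sides of the transect)."""
    return n_detected / (2.0 * transect_length * effective_strip_halfwidth(sigma))
```

With σ estimated from the observed falloff of detections with distance, dividing by the effective area rather than the nominal strip area inflates the raw count to account for objects missed far from the centerline.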
A re-examination of the effects of biased lineup instructions in eyewitness identification.
Clark, Steven E
2005-10-01
A meta-analytic review of research comparing biased and unbiased instructions in eyewitness identification experiments showed an asymmetry; specifically, that biased instructions led to a large and consistent decrease in accuracy in target-absent lineups, but produced inconsistent results for target-present lineups, with an average effect size near zero (Steblay, 1997). The results for target-present lineups are surprising, and are inconsistent with statistical decision theories (i.e., Green & Swets, 1966). A re-examination of the relevant studies and the meta-analysis of those studies shows clear evidence that correct identification rates do increase with biased lineup instructions, and that biased witnesses make correct identifications at a rate considerably above chance. Implications for theory, as well as police procedure and policy, are discussed.
Biased lineup instructions and face identification from video images.
Thompson, W Burt; Johnson, Jaime
2008-01-01
Previous eyewitness memory research has shown that biased lineup instructions reduce identification accuracy, primarily by increasing false-positive identifications in target-absent lineups. Because some attempts at identification do not rely on a witness's memory of the perpetrator but instead involve matching photos to images on surveillance video, the authors investigated the effects of biased instructions on identification accuracy in a matching task. In Experiment 1, biased instructions did not affect the overall accuracy of participants who used video images as an identification aid, but nearly all correct decisions occurred with target-present photo spreads. Both biased and unbiased instructions resulted in high false-positive rates. In Experiment 2, which focused on video-photo matching accuracy with target-absent photo spreads, unbiased instructions led to more correct responses (i.e., fewer false positives). These findings suggest that investigators should not relax precautions against biased instructions when people attempt to match photos to an unfamiliar person recorded on video.
NASA Astrophysics Data System (ADS)
Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.
2018-03-01
Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
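TreeNet's stochastic gradient boosting fits an ensemble of small trees to successive residuals. A minimal deterministic core of that idea is sketched below, with stump (single-split) base learners, a single predictor, and squared-error loss; the row subsampling that makes the method "stochastic", the multiple lidar-derived predictors used in the study, and all names here are simplifications and assumptions of ours:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split regression stump on feature x (least squares)."""
    best = None
    for t in np.unique(x)[:-1]:  # splitting at the max would leave an empty side
        left, right = residual[x <= t], residual[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return t, lv, rv

def boost_fit(x, y, n_rounds=50, lr=0.1):
    """Gradient boosting for squared-error loss: each round fits a stump
    to the current residuals and adds a damped copy to the prediction."""
    base = y.mean()
    pred = np.full_like(y, base, dtype=float)
    stumps = []
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return base, lr, stumps

def boost_predict(model, x):
    base, lr, stumps = model
    pred = np.full(len(x), base, dtype=float)
    for t, lv, rv in stumps:
        pred = pred + lr * np.where(x <= t, lv, rv)
    return pred
```

Because each round corrects whatever the ensemble still gets wrong, the model can track regionally varying vegetation bias without assuming a parametric form, which is the property the point-by-point correction above relies on.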
Desmet, Gert
2013-11-01
The finite length parallel zone (FPZ) model is proposed as an alternative model for the axial or eddy dispersion caused by the occurrence of local velocity biases or flow heterogeneities in porous media such as those used in liquid chromatography columns. The mathematical plate height expression evolving from the model shows that the A- and C-term band broadening effects that can originate from a given velocity bias should be coupled in an exponentially decaying way instead of harmonically, as proposed in Giddings' coupling theory. In the low- and high-velocity limits both models converge, while a 12% difference can be observed in the (practically most relevant) intermediate range of reduced velocities. Explicit expressions for the A- and C-constants appearing in the exponential-decay-based plate height expression have been derived for each of the different possible velocity bias levels (single through-pore and particle level, multi-particle level and trans-column level). These expressions make it possible to directly relate the band broadening originating from these different levels to the local fundamental transport parameters, hence offering the possibility to include a velocity-dependent and, if needed, retention-factor-dependent transversal dispersion coefficient. Having developed the mathematics for the general case wherein a difference in retention equilibrium establishes between the two parallel zones, the effect of any possible local variations in packing density and/or retention capacity on the eddy dispersion can be explicitly accounted for as well. It is furthermore shown that, whereas the lumped transport parameter model used in the basic variant of the FPZ model only provides a first approximation of the true decay constant, the model can be extended by introducing a constant correction factor to correctly account for the continuous transversal dispersion transport in the velocity bias zones. Copyright © 2013 Elsevier B.V. All rights reserved.
Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn
2008-09-30
The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and the choice of estimator. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope when the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs improve with maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects for estimation of the predictor's variance and by the use of the second measurement for estimation of the covariance between response and predictor. The gain from MLE increases with a stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, and simulations in which the behavior of profile-likelihood-based confidence intervals is examined. MLE was shown to be a robust estimator for non-normal distributions and efficient in small-sample situations. Copyright (c) 2008 John Wiley & Sons, Ltd.
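The design described above, where the correction factor is the slope of the second measurement regressed on the first, can be illustrated with a small simulation; the sample size and error variances are illustrative choices of ours, not from the paper:

```python
import random

random.seed(1)
n = 20000
beta = 2.0                                             # true slope
xs = [random.gauss(0.0, 1.0) for _ in range(n)]        # true predictor
w1 = [x + random.gauss(0.0, 1.0) for x in xs]          # error-prone measurement 1
w2 = [x + random.gauss(0.0, 1.0) for x in xs]          # repeat (reliability) measurement
ys = [beta * x + random.gauss(0.0, 0.5) for x in xs]   # response

def ls_slope(u, v):
    """Least-squares slope of v regressed on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return num / sum((a - mu) ** 2 for a in u)

naive = ls_slope(w1, ys)         # attenuated towards zero (about beta/2 here)
reliability = ls_slope(w1, w2)   # estimates the reliability ratio (about 0.5 here)
corrected = naive / reliability  # recovers approximately beta
```

With equal variances for the true predictor and its measurement error, the reliability ratio is 0.5, so the naive slope is roughly halved and dividing by the estimated ratio restores it; MLE improves on this simple ratio estimator, as the abstract explains, by using all subjects and both measurements jointly.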
Korn, Edward L; Freidlin, Boris
2017-06-01
For a fallback randomized clinical trial design with a marker, Choai and Matsui (2015, Biometrics 71, 25-32) estimate the bias of the estimator of the treatment effect in the marker-positive subgroup conditional on the treatment effect not being statistically significant in the overall population. This is used to construct and examine conditionally bias-corrected estimators of the treatment effect for the marker-positive subgroup. We argue that it may not be appropriate to correct for conditional bias in this setting. Instead, we consider the unconditional bias of estimators of the treatment effect for marker-positive patients. © 2016, The International Biometric Society.
Raiche, Gilles; Blais, Jean-Guy
2009-01-01
In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimate one or a combination of the following strategies: adaptive correction for bias proposed by Bock and Mislevy (1982), adaptive a priori estimate, and adaptive integration interval.
Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.
1995-01-01
This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests when the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows that significant bias error in the model attitude measurement can occur and that the error is vibration-mode and amplitude dependent. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.
Hunt, Andrew P; Bach, Aaron J E; Borg, David N; Costello, Joseph T; Stewart, Ian B
2017-01-01
An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of -0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C, and limits of agreement (95%) of 0.00-0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) - 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). In conclusion, these data show that using an uncalibrated ingestible temperature sensor may provide inaccurate data that still appears to be statistically, physiologically, and clinically meaningful.
Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures systematic bias is within ±0.1°C in 98% of the sensors (generalized function).
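The generalized linear function quoted in the abstract is simple to apply directly; a minimal Python sketch (function names are ours):

```python
def correct_sensor_temp(sensor_temp_c):
    """Generalized linear correction reported in the abstract:
    Corrected Temperature (degC) = 1.00375 * Sensor Temperature (degC) - 0.205549
    """
    return 1.00375 * sensor_temp_c - 0.205549

def within_tolerance(sensor_temp_c, reference_temp_c, tol=0.1):
    """Check whether a corrected reading agrees with a reference
    thermometer to within +/- tol degC (the paper's accuracy criterion)."""
    return abs(correct_sensor_temp(sensor_temp_c) - reference_temp_c) <= tol
```

For example, a raw reading of 37.0°C corrects to about 36.93°C; note the paper favors individualized per-sensor functions where calibration data are available.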
NASA Astrophysics Data System (ADS)
Saitoh, Naoko; Kimoto, Shuhei; Sugimura, Ryo; Imasu, Ryoichi; Shiomi, Kei; Kuze, Akihiko; Niwa, Yosuke; Machida, Toshinobu; Sawa, Yousuke; Matsueda, Hidekazu
2017-10-01
CO2 observations in the free troposphere can be useful for constraining CO2 source and sink estimates at the surface since they represent CO2 concentrations away from point source emissions. The thermal infrared (TIR) band of the Thermal and Near Infrared Sensor for Carbon Observation (TANSO) Fourier transform spectrometer (FTS) on board the Greenhouse Gases Observing Satellite (GOSAT) has been observing global CO2 concentrations in the free troposphere for about 8 years and thus could provide a dataset with which to evaluate the vertical transport of CO2 from the surface to the upper atmosphere. This study evaluated biases in the TIR version 1 (V1) CO2 product in the lower troposphere (LT) and the middle troposphere (MT) (736-287 hPa), on the basis of comparisons with CO2 profiles obtained over airports using Continuous CO2 Measuring Equipment (CME) in the Comprehensive Observation Network for Trace gases by AIrLiner (CONTRAIL) project. Bias-correction values are presented for TIR CO2 data for each pressure layer in the LT and MT regions during each season and in each latitude band: 40-20° S, 20° S-20° N, 20-40° N, and 40-60° N. TIR V1 CO2 data had consistent negative biases of 1-1.5% compared with CME CO2 data in the LT and MT regions, with the largest negative biases at 541-398 hPa, partly due to the use of the 10 µm CO2 absorption band in conjunction with the 15 and 9 µm absorption bands in the V1 retrieval algorithm. Global comparisons between TIR CO2 data to which the bias-correction values were applied and CO2 data simulated by a transport model based on the Nonhydrostatic ICosahedral Atmospheric Model (NICAM-TM) confirmed the validity of the bias-correction values evaluated over airports in limited areas. 
In low latitudes in the upper MT region (398-287 hPa), however, TIR CO2 data in northern summer were overcorrected by these bias-correction values; this is because the bias-correction values were determined using comparisons mainly over airports in Southeast Asia, where CO2 concentrations in the upper atmosphere display relatively large variations due to strong updrafts.
NASA Astrophysics Data System (ADS)
Shankar, Francesco; Bernardi, Mariangela; Sheth, Ravi K.; Ferrarese, Laura; Graham, Alister W.; Savorgnan, Giulia; Allevato, Viola; Marconi, Alessandro; Läsker, Ronald; Lapi, Andrea
2016-08-01
We compare the set of local galaxies having dynamically measured black holes with a large, unbiased sample of galaxies extracted from the Sloan Digital Sky Survey. We confirm earlier work showing that the majority of black hole hosts have significantly higher velocity dispersions σ than local galaxies of similar stellar mass. We use Monte Carlo simulations to illustrate the effect on black hole scaling relations if this bias arises from the requirement that the black hole sphere of influence must be resolved to measure black hole masses with spatially resolved kinematics. We find that this selection effect artificially increases the normalization of the Mbh-σ relation by a factor of at least ~3; the bias for the Mbh-Mstar relation is even larger. Our Monte Carlo simulations and analysis of the residuals from scaling relations both indicate that σ is more fundamental than Mstar or effective radius. In particular, the Mbh-Mstar relation is mostly a consequence of the Mbh-σ and σ-Mstar relations, and is heavily biased by up to a factor of 50 at small masses. This helps resolve the discrepancy between dynamically based black hole-galaxy scaling relations versus those of active galaxies. Our simulations also disfavour broad distributions of black hole masses at fixed σ. Correcting for this bias suggests that the calibration factor used to estimate black hole masses in active galaxies should be reduced to values of fvir ~ 1. Black hole mass densities should also be proportionally smaller, perhaps implying significantly higher radiative efficiencies/black hole spins. Reducing black hole masses also reduces the gravitational wave signal expected from black hole mergers.
NASA Astrophysics Data System (ADS)
Shankar, Francesco; Bernardi, M.; Sheth, R. K.; Weinberg, D. H.; Miralda-Escudé, J.; Ferrarese, L.; Graham, A.; Sesana, A.; Lapi, A.; Marconi, A.; Allevato, V.; Savorgnan, G.; Laesker, R.
2016-08-01
We compare the set of local galaxies having dynamically measured black holes with a large, unbiased sample of galaxies extracted from the Sloan Digital Sky Survey. We confirm earlier work showing that the majority of black hole hosts have significantly higher velocity dispersions sigma than local galaxies of similar stellar mass. We use Monte Carlo simulations to illustrate the effect on black hole scaling relations if this bias arises from the requirement that the black hole sphere of influence must be resolved to measure black hole masses with spatially resolved kinematics. We find that this selection effect artificially increases the normalization of the Mbh-sigma relation by a factor of at least ~3; the bias for the Mbh-Mstar relation is even larger. Our Monte Carlo simulations and analysis of the residuals from scaling relations both indicate that sigma is more fundamental than Mstar or effective radius. In particular, the Mbh-Mstar relation is mostly a consequence of the Mbh-sigma and sigma-Mstar relations, and is heavily biased by up to a factor of 50 at small masses. This helps resolve the discrepancy between dynamically based black hole-galaxy scaling relations versus those of active galaxies. Our simulations also disfavour broad distributions of black hole masses at fixed sigma. Correcting for this bias suggests that the calibration factor used to estimate black hole masses in active galaxies should be reduced to values of fvir ~ 1. Black hole mass densities should also be proportionally smaller, perhaps implying significantly higher radiative efficiencies/black hole spins. Reducing black hole masses also reduces the gravitational wave signal expected from black hole mergers.
CD-SEM real time bias correction using reference metrology based modeling
NASA Astrophysics Data System (ADS)
Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.
2018-03-01
Accuracy of patterning impacts yield, IC performance and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. Reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM. Both methods are too slow to be used for real time bias correction. We report on real time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling lie in finding a robust correlation between SEM waveform features and the bias of CD-SEM, as well as in minimizing the RM inputs needed to create an accurate (within the design and process space) model. The new approach was applied to improve CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases, the MU of the state-of-the-art CD-SEM has been improved by 3x and reduced to a nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.
Validation of the AMSU-B Bias Corrections Based on Satellite Measurements from SSM/T-2
NASA Technical Reports Server (NTRS)
Kolodner, Marc A.
1999-01-01
The NOAA-15 Advanced Microwave Sounding Unit-B (AMSU-B) was designed in the same spirit as the Special Sensor Microwave Water Vapor Profiler (SSM/T-2) on board the DMSP F11-14 satellites, to perform remote sensing of spatial and temporal variations in mid and upper troposphere humidity. While the SSM/T-2 instruments have a 48 km spatial resolution at nadir and 28 beam positions per scan, AMSU-B provides an improvement with a 16 km spatial resolution at nadir and 90 beam positions per scan. The AMSU-B instrument, though, has been experiencing radio frequency interference (RFI) contamination from the NOAA-15 transmitters whose effect is dependent upon channel, geographic location, and current spacecraft antenna configuration. This has led to large cross-track biases reaching as high as 100 Kelvin for channel 17 (150 GHz) and 50 Kelvin for channel 19 (183 +/-3 GHz). NOAA-NESDIS has recently provided a series of bias corrections for AMSU-B data starting from March, 1999. These corrections are available for each of the five channels, for every third field of view, and for three cycles within an eight second period. There is also a quality indicator in each data record to indicate whether or not the bias corrections should be applied. As a precursor to performing retrievals of mid and upper troposphere humidity, a validation study is performed by statistically analyzing the differences between the F14 SSM/T-2 and the bias corrected AMSU-B brightness temperatures for three months in the spring of 1999.
The empirical Bayes estimators of fine-scale population structure in high gene flow species.
Kitada, Shuichi; Nakamichi, Reiichiro; Kishino, Hirohisa
2017-11-01
An empirical Bayes (EB) pairwise F_ST estimator was previously introduced and its performance evaluated by numerical simulation. In this study, we conducted coalescent simulations to generate genetic population structure mechanistically, and compared the performance of the EB F_ST estimator with Nei's G_ST, Nei and Chesser's bias-corrected G_ST (G_ST_NC), Weir and Cockerham's θ (θ_WC) and θ with finite sample correction (θ_WC_F). We also introduced EB estimators for Hedrick's G'_ST and Jost's D. We applied these estimators to publicly available SNP genotypes of Atlantic herring. We also examined the power to detect the environmental factors causing the population structure. Our coalescent simulations revealed that the finite sample correction of θ_WC is necessary to assess population structure using pairwise F_ST values. For microsatellite markers, EB F_ST performed the best among the present estimators regarding both bias and precision under high gene flow scenarios (F_ST ≤ 0.032). For 300 SNPs, EB F_ST had the highest precision in all cases, but its bias was negative and greater in magnitude than those of G_ST_NC and θ_WC_F in all cases. G_ST_NC and θ_WC_F performed very similarly at all levels of F_ST. As the number of loci increased up to 10 000, the precision of G_ST_NC and θ_WC_F became slightly better than that of EB F_ST for cases with F_ST ≥ 0.004, even though the size of the bias remained constant. The EB estimators described the fine-scale population structure of the herring and revealed that ~56% of the genetic differentiation was caused by sea surface temperature and salinity. The R package finepop, implementing all estimators used here, is available on CRAN. © 2017 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
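For orientation, the classical Nei G_ST that the bias-corrected and EB estimators refine can be sketched in a few lines; a minimal illustration under the usual definitions (function name is ours, and this is not the EB estimator itself, which additionally shrinks pairwise estimates via an empirical prior):

```python
import numpy as np

def nei_gst(allele_freqs):
    """Nei's G_ST for one locus from subpopulation allele frequencies.

    allele_freqs: (n_subpops, n_alleles) array; each row sums to 1.
    G_ST = (H_T - H_S) / H_T, with H_S the mean within-subpopulation
    expected heterozygosity and H_T the heterozygosity expected from
    the pooled (mean) allele frequencies.
    """
    p = np.asarray(allele_freqs, dtype=float)
    h_s = np.mean(1.0 - np.sum(p ** 2, axis=1))  # mean within-subpop heterozygosity
    p_bar = p.mean(axis=0)                       # pooled allele frequencies
    h_t = 1.0 - np.sum(p_bar ** 2)               # total expected heterozygosity
    return (h_t - h_s) / h_t
```

Two fixed, non-overlapping subpopulations give G_ST = 1, while identical allele frequencies give G_ST = 0.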
How and how much does RAD-seq bias genetic diversity estimates?
Cariou, Marie; Duret, Laurent; Charlat, Sylvain
2016-11-08
RAD-seq is a powerful tool, increasingly used in population genomics. However, earlier studies have raised red flags regarding possible biases associated with this technique. In particular, polymorphism on restriction sites results in preferential sampling of closely related haplotypes, so that RAD data tend to underestimate genetic diversity. Here we (1) clarify the theoretical basis of this bias, highlighting the potential confounding effects of population structure and selection, (2) confront predictions with real data from in silico digestion of full genomes and (3) provide a proof of concept toward an ABC-based correction of the RAD-seq bias. Under a neutral and panmictic model, we confirm the previously established relationship between the true polymorphism and its RAD-based estimation, showing a more pronounced bias when polymorphism is high. Using more elaborate models, we show that selection, resulting in heterogeneous levels of polymorphism along the genome, exacerbates the bias and leads to a more pronounced underestimation. On the contrary, spatial genetic structure tends to reduce the bias. We confront the neutral and panmictic model with "ideal" empirical data (in silico RAD-sequencing) using full genomes from natural populations of the fruit fly Drosophila melanogaster and the fungus Schizophyllum commune, harbouring respectively moderate and high genetic diversity. In D. melanogaster, predictions fit the model, but the small difference between the true and RAD polymorphism makes this comparison insensitive to deviations from the model. In the highly polymorphic fungus, the model captures a large part of the bias but makes inaccurate predictions. Accordingly, ABC corrections based on this model improve the estimations, albeit with some imprecision. The RAD-seq underestimation of genetic diversity associated with polymorphism in restriction sites becomes more pronounced when polymorphism is high. 
In practice, this means that in many systems where polymorphism does not exceed 2%, the bias is of minor importance compared with other sources of uncertainty, such as heterogeneous base composition or technical artefacts. The neutral panmictic model provides a practical means of correcting the bias through ABC, albeit with some imprecision. More elaborate ABC methods might integrate additional parameters, such as population structure and selection, but their opposite effects could hinder accurate corrections.
Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro
2011-01-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029
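The weighting scheme described above can be illustrated with a minimal sketch. The function below computes stabilized inverse probability-of-censoring weights from already-estimated probabilities of remaining uncensored; the names and the stabilized form are our illustrative choices, and in practice the conditional probabilities come from a model (e.g. logistic regression) fitted on the measured common predictors:

```python
import numpy as np

def stabilized_ipcw(p_uncensored_given_x, p_uncensored_marginal):
    """Stabilized inverse probability-of-censoring weights.

    p_uncensored_given_x: per-participant predicted probability of
        remaining uncensored, given the measured common predictors.
    p_uncensored_marginal: marginal probability of remaining uncensored.

    Weight = marginal / conditional. Uncensored participants whose
    covariates made censoring likely are up-weighted so they stand in
    for artificially censored participants like themselves.
    """
    p_x = np.asarray(p_uncensored_given_x, dtype=float)
    return p_uncensored_marginal / p_x
```

As the abstract cautions, extreme conditional probabilities (near zero) produce very large weights, which is one way the approach can fail under strong selection bias and small samples.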
Correcting the SAT's Ethnic and Social-Class Bias: A Method for Reestimating SAT Scores.
ERIC Educational Resources Information Center
Freedle, Roy O.
2003-01-01
A corrective scoring method, the Revised-Scholastic Achievement Test (R-SAT), addresses nonrandom ethnic test bias patterns found in the SAT. The R-SAT has been shown to reduce the mean-score difference between African-American and white test-takers by one-third, increase verbal scores by as much as 200-300 points for individuals, and benefit…
Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin
2012-01-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types. PMID:21511561
Kim, Kio; Habas, Piotr A; Rajagopalan, Vidya; Scott, Julia A; Corbett-Detig, James M; Rousseau, Francois; Barkovich, A James; Glenn, Orit A; Studholme, Colin
2011-09-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.
A Systematic Error Correction Method for TOVS Radiances
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)
2000-01-01
Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large as or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA-15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
Consistency analysis and correction of ground-based radar observations using space-borne radar
NASA Astrophysics Data System (ADS)
Zhang, Shuai; Zhu, Yiqing; Wang, Zhenhui; Wang, Yadong
2018-04-01
The lack of an accurate determination of the radar constant can introduce biases in ground-based radar (GR) reflectivity factor data, and lead to poor consistency of radar observations. The geometry-matching method was applied to carry out spatial matching of radar data from the Precipitation Radar (PR) on board the Tropical Rainfall Measuring Mission (TRMM) satellite to observations from a GR deployed at Nanjing, China, in their effective sampling volume, with 250 match-up cases obtained from January 2008 to October 2013. The consistency of the GR was evaluated with reference to the TRMM PR, whose stability is established. The results show that the below-bright-band-height data of the Nanjing radar can be split into three periods: Period I from January 2008 to March 2010, Period II from March 2010 to May 2013, and Period III from May 2013 to October 2013. There are distinct differences in overall reflectivity factor between the three periods, and the overall reflectivity factor in Period II is more than 3 dB lower than in Periods I and III, although the overall reflectivity within each period remains relatively stable. Further investigation shows that in Period II the difference between the GR and PR observations changed with echo intensity. A best-fit relation between the two radar reflectivity factors provides a linear correction that is applied to the reflectivity of the Nanjing radar, and which is effective in improving its consistency. Rain-gauge data were used to verify the correction, and the estimated precipitation based on the corrected GR reflectivity data was closer to the rain-gauge observations than that without correction.
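A linear correction derived from a best-fit GR-PR relation can be sketched as follows; this is a generic least-squares illustration under our own simplifications, not the authors' exact fitting procedure:

```python
import numpy as np

def fit_linear_correction(gr_dbz, pr_dbz):
    """Least-squares fit of matched reflectivity pairs, PR ~ a*GR + b,
    taking the space-borne PR as the stable reference. Returns (a, b)."""
    a, b = np.polyfit(np.asarray(gr_dbz, dtype=float),
                      np.asarray(pr_dbz, dtype=float), 1)
    return a, b

def apply_linear_correction(gr_dbz, a, b):
    """Map ground-based reflectivity (dBZ) onto the PR reference."""
    return a * np.asarray(gr_dbz, dtype=float) + b
```

With a constant 3 dB offset between the radars, the fit recovers a slope near 1 and an intercept near 3, and the corrected GR values line up with the reference.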
A novel method for correcting scanline-observational bias of discontinuity orientation
Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong
2016-01-01
Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249
Clark, Kirsty A.; White Hughto, Jaclyn M.; Pachankis, John E.
2017-01-01
Rationale: Incarcerated transgender individuals may need to access physical and mental health services to meet their general and gender-affirming (e.g., hormones, surgery) medical needs while incarcerated. Objective: This study sought to examine correctional healthcare providers' knowledge of, attitudes toward, and experiences providing care to transgender inmates. Method: In 2016, 20 correctional healthcare providers (e.g., physicians, social workers, psychologists, mental health counselors) from New England participated in in-depth, semi-structured interviews examining their experiences caring for transgender inmates. The interview guide drew on healthcare-related interviews with recently incarcerated transgender women and key informant interviews with correctional healthcare providers and administrators. Data were analyzed using a modified grounded theory framework and thematic analysis. Results: Findings revealed that transgender inmates do not consistently receive adequate or gender-affirming care while incarcerated. Factors at the structural level (i.e., lack of training, restrictive healthcare policies, limited budget, and an unsupportive prison culture); interpersonal level (i.e., custody staff bias); and individual level (i.e., lack of transgender cultural and clinical competence) impede correctional healthcare providers' ability to provide gender-affirming care to transgender patients. These factors result in negative health consequences for incarcerated transgender patients. Conclusions: Results call for transgender-specific healthcare policy changes and the implementation of transgender competency trainings for both correctional healthcare providers and custody staff (e.g., officers, lieutenants, wardens). PMID:29028559
Ratios of total suspended solids to suspended sediment concentrations by particle size
Selbig, W.R.; Bannerman, R.T.
2011-01-01
Wet-sieving sand-sized particles from a whole storm-water sample before splitting the sample into laboratory-prepared containers can reduce bias and improve the precision of suspended-sediment concentrations (SSC). Wet-sieving, however, may alter concentrations of total suspended solids (TSS) because the analytical method used to determine TSS may not have included the sediment retained on the sieves. Measuring TSS is still commonly used by environmental managers as a regulatory metric for solids in storm water. For this reason, a new method of correlating concentrations of TSS and SSC by particle size was used to develop a series of correction factors for SSC as a means to estimate TSS. In general, differences between TSS and SSC increased with greater particle size and higher sand content. Median correction factors to SSC ranged from 0.29 for particles larger than 500 μm to 0.85 for particles measuring from 32 to 63 μm. Great variability was observed in each fraction, a result of varying amounts of organic matter in the samples. Wide variability in organic content could reduce the transferability of the correction factors. © 2011 American Society of Civil Engineers.
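As an illustration of how such correction factors would be applied, the sketch below scales each particle-size fraction's SSC by its median factor and sums the result; the fraction labels are ours, and only the two factors quoted in the abstract are included (intermediate fractions would have their own factors):

```python
# Median SSC-to-TSS correction factors from the abstract, keyed by
# particle-size fraction (labels are illustrative).
MEDIAN_CORRECTION = {
    ">500um": 0.29,
    "32-63um": 0.85,
}

def estimate_tss(ssc_by_fraction, factors=MEDIAN_CORRECTION):
    """Estimate TSS (mg/L) by scaling each particle-size fraction's
    suspended-sediment concentration by its correction factor."""
    return sum(ssc_by_fraction[f] * factors[f] for f in ssc_by_fraction)
```

The abstract's caveat applies directly here: because organic content varies widely, a single median factor per fraction may transfer poorly between sites.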
Fully correcting the meteor speed distribution for radar observing biases
NASA Astrophysics Data System (ADS)
Moorhead, Althea V.; Brown, Peter G.; Campbell-Brown, Margaret D.; Heynen, Denis; Cooke, William J.
2017-09-01
Meteor radars such as the Canadian Meteor Orbit Radar (CMOR) have the ability to detect millions of meteors, making it possible to study the meteoroid environment in great detail. However, meteor radars also suffer from a number of detection biases; these biases must be fully corrected for in order to derive an accurate description of the meteoroid population. We present a bias correction method for patrol radars that accounts for the full form of ionization efficiency and mass distribution. This is an improvement over previous methods such as that of Taylor (1995), which requires power-law distributions for ionization efficiency and a single mass index. We apply this method to the meteor speed distribution observed by CMOR and find a significant enhancement of slow meteors compared to earlier treatments. However, when the data set is severely restricted to include only meteors with very small uncertainties in speed, the fraction of slow meteors is substantially reduced, indicating that speed uncertainties must be carefully handled.
Romero-Martínez, Martín; Téllez-Rojo Solís, Martha María; Sandoval-Zárate, América Andrea; Zurita-Luna, Juan Manuel; Gutiérrez-Reyes, Juan Pablo
2013-01-01
To determine whether estimates of consumption sometime in life (lifetime prevalence) of alcohol, tobacco, illegal drugs and inhalable substances are biased, and to propose a correction where bias is present. The Mexican National Addictions Surveys (NAS) 2002, 2008 and 2011 were analyzed to compare population estimates of lifetime consumption of tobacco, alcohol, illegal drugs and inhalable substances. Two alternative approaches for bias correction were developed. The estimated national prevalences of lifetime consumption of alcohol and tobacco in the NAS 2008 are not plausible. There was no evidence of bias in the lifetime consumption of illegal drugs and inhalable substances. New estimates of lifetime tobacco and alcohol consumption were made, which resulted in plausible values when compared with other available data. Future analyses of tobacco and alcohol using NAS 2008 data will have to rely on these newly generated data weights, which are able to reproduce the new (plausible) estimations.
A method to preserve trends in quantile mapping bias correction of climate modeled temperature
NASA Astrophysics Data System (ADS)
Grillakis, Manolis G.; Koutroulis, Aristeidis G.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.
2017-09-01
Bias correction of climate variables is a standard practice in climate change impact (CCI) studies. Various methodologies have been developed within the framework of quantile mapping. However, it is well known that quantile mapping may significantly modify the long-term statistics due to the time dependency of the temperature bias. Here, a method to overcome this issue without compromising the day-to-day correction statistics is presented. The methodology separates the modeled temperature signal into a normalized and a residual component relative to the modeled reference-period climatology, adjusting the biases only for the former and preserving the signal of the latter. The results show that this method preserves the originally modeled long-term signal in the mean, the standard deviation, and the higher and lower percentiles of temperature. To illustrate the improvements, the methodology is tested on daily time series obtained from five EURO-CORDEX regional climate models (RCMs).
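The separation idea can be sketched in a few lines; this is a minimal illustration of the decomposition, not the published algorithm: the running-mean residual, the empirical quantile mapping, and all names are assumptions of this sketch.

```python
import numpy as np

def quantile_map(x, obs_ref, mod_ref):
    # Empirical quantile mapping: find each value's percentile in the
    # modeled reference distribution, then read off the corresponding
    # value from the observed reference distribution.
    p = np.interp(x, np.sort(mod_ref), np.linspace(0.0, 100.0, mod_ref.size))
    return np.percentile(obs_ref, p)

def trend_preserving_qm(mod_future, obs_ref, mod_ref, window=365):
    # Residual: the slowly varying departure from the reference-period
    # climatology, approximated here by a running mean (an assumption of
    # this sketch). The remainder is the "normalized" day-to-day part.
    kernel = np.ones(window) / window
    residual = np.convolve(mod_future - mod_ref.mean(), kernel, mode="same")
    normalized = mod_future - residual
    # Correct only the normalized component, then re-attach the residual
    # so the modeled long-term signal survives the correction untouched.
    return quantile_map(normalized, obs_ref, mod_ref) + residual
```

Because the residual bypasses the mapping, the corrected series inherits the observations' day-to-day statistics while keeping the model's own long-term change signal.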
DeCoster, Jamie; Claypool, Heather M
2004-01-01
Priming researchers have long investigated how providing information about traits in one context can influence the impressions people form of social targets in another. The literature has demonstrated that this can have 3 different effects: Sometimes primes become incorporated in the impression of the target (assimilation), sometimes they are used as standards of comparison (anchoring), and sometimes they cause people to consciously alter their judgments (correction). In this article, we present meta-analyses of these 3 effects. The mean effect size was significant in each case, such that assimilation resulted in impressions biased toward the primes, whereas anchoring and correction resulted in impressions biased away from the primes. Additionally, moderator analyses uncovered a number of variables that influence the strength of these effects, such as applicability, processing capacity, and the type of response measure. Based on these results, we propose a general model of how irrelevant information can bias judgments, detailing when and why assimilation and contrast effects result from default and corrective processes.
Revisiting the Logan plot to account for non-negligible blood volume in brain tissue.
Schain, Martin; Fazio, Patrik; Mrzljak, Ladislav; Amini, Nahid; Al-Tawil, Nabil; Fitzer-Attas, Cheryl; Bronzova, Juliana; Landwehrmeyer, Bernhard; Sampaio, Christina; Halldin, Christer; Varrone, Andrea
2017-08-18
Reference tissue-based quantification of brain PET data does not typically include correction for signal originating from blood vessels, which is known to result in biased outcome measures. The extent of the bias depends on the amount of radioactivity in the blood vessels. In this study, we revisit the well-established Logan plot and derive alternative formulations that provide estimates of distribution volume ratios (DVRs) corrected for the signal originating from the vasculature. New expressions for the Logan plot based on the arterial input function and on reference tissue were derived, which include explicit terms for whole-blood radioactivity. The new methods were evaluated using PET data acquired with [11C]raclopride and [18F]MNI-659. The two-tissue compartment model (2TCM), with which the signal originating from blood can be explicitly modeled, was used as the gold standard. DVR values obtained for [11C]raclopride using either the blood-based or the reference tissue-based Logan plot were systematically underestimated compared to the 2TCM, and for [18F]MNI-659 a proportionality bias was observed, i.e., the bias varied across regions. These biases disappeared when optimal blood-signal correction was used for the respective tracer, although for [18F]MNI-659 a small but systematic overestimation of DVR was still observed. The new method appears to remove the bias introduced by the absence of a blood volume correction in regular graphical analysis and can be considered in clinical studies. Further studies are, however, required to derive a generic mapping between plasma and whole-blood radioactivity levels.
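For context, the uncorrected reference-tissue Logan plot that the paper revisits can be sketched as follows: after a time t*, the ratio of integrated tissue activity to instantaneous tissue activity becomes linear in the corresponding reference-tissue ratio, with slope equal to DVR. The synthetic curves, t* = 30 min, and function names are assumptions of this illustration; the paper's blood-volume terms are deliberately not included.

```python
import numpy as np

def cumtrapz(t, c):
    # Cumulative trapezoidal integral of a time-activity curve (TAC)
    return np.concatenate(([0.0], np.cumsum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)))

def logan_dvr(t, ct, cref, t_star=30.0):
    # Classic reference-tissue Logan plot (no blood-volume correction):
    # regress int(Ct)/Ct on int(Cref)/Ct for t >= t*; the slope is DVR.
    mask = t >= t_star
    x = cumtrapz(t, cref)[mask] / ct[mask]
    y = cumtrapz(t, ct)[mask] / ct[mask]
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Synthetic check: a target TAC that is exactly twice the reference TAC
# must yield DVR = 2 (hypothetical curves, not measured PET data).
t = np.linspace(0.0, 90.0, 181)
cref = t * np.exp(-t / 30.0)
ct = 2.0 * cref
```

Any radioactivity that blood vessels add to `ct` enters both axes of this regression, which is exactly the bias the new formulations remove.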
Validation of satellite-based rainfall in Kalahari
NASA Astrophysics Data System (ADS)
Lekula, Moiteela; Lubczynski, Maciek W.; Shemang, Elisha M.; Verhoef, Wouter
2018-06-01
Water resources management in arid and semi-arid areas is hampered by insufficient rainfall data, typically obtained from sparsely distributed rain gauges. Satellite-based rainfall estimates (SREs) are an alternative source of such data in these areas. In this study, daily rainfall estimates from FEWS-RFE (∼11 km), TRMM-3B42 (∼27 km), CMORPH (∼27 km) and CMORPH (∼8 km) were evaluated against nine daily rain gauge records in the Central Kalahari Basin (CKB) over a five-year period, 01/01/2001-31/12/2005. The aims were to evaluate the daily rainfall detection capabilities of the four SRE algorithms, analyze the spatio-temporal variability of rainfall in the CKB, and perform bias correction of the four SREs. Evaluation methods included scatter plot analysis, descriptive statistics, categorical statistics and bias decomposition. The spatio-temporal variability of rainfall was assessed using the SREs' mean annual rainfall, standard deviation, coefficient of variation and spatial correlation functions. Bias correction of the four SREs was conducted using a Time-Varying Space-Fixed (TVSF) bias-correction scheme. The results underlined the importance of validating daily SREs, as they had different rainfall detection capabilities in the CKB. FEWS-RFE (∼11 km) performed best, providing better descriptive and categorical statistics than the other three SREs, although bias decomposition showed that all SREs underestimated rainfall. The analysis showed that the most reliable indicators of SRE performance were the frequency of "miss" rainfall events and the "miss bias", as they directly indicate the SREs' sensitivity and bias of rainfall detection, respectively. The TVSF bias-correction scheme improved some error measures but reduced the spatial correlation distance, thus increasing the already high spatial rainfall variability of all four SREs.
This study highlighted SREs as a valuable source of daily rainfall data, providing good spatio-temporal coverage especially suitable for areas with few rain gauges, such as the CKB, but it also emphasized the SREs' drawbacks, creating an avenue for follow-up research.
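The categorical statistics and the "miss" terms used above come from a simple 2×2 contingency table of wet/dry days. A minimal sketch, assuming a 1.0 mm/day wet-day threshold (our choice, not a value from the study):

```python
import numpy as np

def detection_stats(sre, gauge, wet=1.0):
    # Contingency-table scores for daily rainfall detection; `wet` is the
    # wet-day threshold in mm/day (an assumption of this sketch).
    s, o = sre >= wet, gauge >= wet
    hits = int(np.sum(s & o))
    false_alarms = int(np.sum(s & ~o))
    misses = int(np.sum(~s & o))
    miss_days = ~s & o
    return {
        "POD": hits / (hits + misses),                # probability of detection
        "FAR": false_alarms / (hits + false_alarms),  # false alarm ratio
        "freq_bias": (hits + false_alarms) / (hits + misses),
        # miss bias: rainfall the SRE failed to see on missed days
        # (negative values indicate underestimation)
        "miss_bias": float(np.sum(sre[miss_days] - gauge[miss_days])),
    }
```

For example, `detection_stats(np.array([5., 0., 3., 0.]), np.array([4., 2., 0., 0.]))` yields one hit, one miss, and one false alarm, so POD and FAR are both 0.5.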
A systematic bias in the interpretation of CFI results
Warren E. Frayer
1967-01-01
It is not generally recognized that a serious bias arises in the estimates of annual ingrowth and accretion, two of the growth components available from continuous forest inventory (CFI). The bias is demonstrated, and suggestions for correction are given.
ERIC Educational Resources Information Center
Le Mens, Gael; Denrell, Jerker
2011-01-01
Recent research has argued that several well-known judgment biases may be due to biases in the available information sample rather than to biased information processing. Most of these sample-based explanations assume that decision makers are "naive": They are not aware of the biases in the available information sample and do not correct for them.…
NASA Astrophysics Data System (ADS)
Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu
2018-04-01
Marine gravity anomalies derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slopes of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups based on whether they are small, comparable, or large with respect to the 1 microradian error in current sea surface slope models. (1) The small and thus negligible corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction and sea state bias. Radiometer measurements are preferred over model values in the geophysical data records for constraining the wet tropospheric effect, owing to the highly variable water-vapor structure of the atmosphere. The dual-frequency ionospheric correction and sea state bias should not be added directly to range observations when deriving sea surface slopes, since their inherent errors may produce abnormal slopes; along-track smoothing with uniform weights over a certain width is an effective strategy for avoiding the introduction of extra noise. The slopes calculated from radiometer wet tropospheric corrections and from along-track smoothed dual-frequency ionospheric corrections and sea state bias are generally within ±0.5 microradians and no larger than 1 microradian. (3) The ocean tide has the largest influence on sea surface slopes, although most ocean tide slopes fall within ±3 microradians. Larger ocean tide slopes occur mostly over marginal and island-surrounding seas, and tidal models with better precision or with an extending process (e.g. Got-e) are strongly recommended for updating the corrections in the geophysical data records.
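The uniform-weight along-track smoothing mentioned above amounts to a running mean applied to the correction series before differencing to slopes. A small sketch; the 11-point width and the ~6 km along-track sample spacing are assumptions of this illustration, not values from the study:

```python
import numpy as np

def smoothed_correction_slope(corr, spacing_km=6.0, width=11):
    # Uniform-weight along-track running mean of a correction time series,
    # followed by central-difference slopes (per km). Smoothing first
    # keeps the correction's own noise from producing abnormal slopes.
    kernel = np.ones(width) / width
    smooth = np.convolve(corr, kernel, mode="same")
    return np.gradient(smooth, spacing_km)
```

A constant correction produces zero slope away from the window's edge effects, as expected.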
An entropy correction method for unsteady full potential flows with strong shocks
NASA Technical Reports Server (NTRS)
Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.
1986-01-01
An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.
Assessment of the impact of the scanner-related factors on brain morphometry analysis with Brainvisa
2011-01-01
Background Brain morphometry is extensively used in cross-sectional studies. However, the difference in the estimated values of the morphometric measures between patients and healthy subjects may be small and hence overshadowed by the scanner-related variability, especially with multicentre and longitudinal studies. It is important therefore to investigate the variability and reliability of morphometric measurements between different scanners and different sessions of the same scanner. Methods We assessed the variability and reliability for the grey matter, white matter, cerebrospinal fluid and cerebral hemisphere volumes as well as the global sulcal index, sulcal surface and mean geodesic depth using Brainvisa. We used datasets obtained across multiple MR scanners at 1.5 T and 3 T from the same groups of 13 and 11 healthy volunteers, respectively. For each morphometric measure, we conducted ANOVA analysis and verified whether the estimated values were significantly different across different scanners or different sessions of the same scanner. The between-centre and between-visit reliabilities were estimated from their contribution to the total variance, using a random-effects ANOVA model. To estimate the main processes responsible for low reliability, the results of brain segmentation were compared to those obtained using FAST within FSL. Results In a considerable number of cases, the main effects of both centre and visit factors were found to be significant. Moreover, both between-centre and between-visit reliabilities ranged from poor to excellent for most morphometric measures. A comparison between segmentation using Brainvisa and FAST revealed that FAST improved the reliabilities for most cases, suggesting that morphometry could benefit from improving the bias correction. However, the results were still significantly different across different scanners or different visits. 
Conclusions Our results confirm that for morphometry analysis with the current version of Brainvisa using data from multicentre or longitudinal studies, the scanner-related variability must be taken into account and, where possible, corrected for. We also suggest giving Brainvisa some flexibility for a step-by-step analysis of the robustness of the package in terms of reproducibility of the results, for example by allowing bias-corrected images to be imported from other packages so that the bias correction step can be skipped. PMID:22189342
NASA Astrophysics Data System (ADS)
Curci, Gabriele; Falasca, Serena
2017-04-01
Deterministic air quality forecasts are routinely carried out at many local environmental agencies in Europe and throughout the world by means of Eulerian chemistry-transport models. The skill of these models in predicting ground-level concentrations of relevant pollutants (ozone, nitrogen dioxide, particulate matter) a few days ahead has greatly improved in recent years, but it is not yet always compliant with the quality level required for decision making (e.g. the European Commission has set a maximum uncertainty of 50% on daily values of relevant pollutants). Post-processing of deterministic model output is thus still regarded as a useful tool for making the forecast more reliable. In this work, we test several bias correction techniques applied to a long-term dataset of air quality forecasts over Europe and Italy. We used the WRF-CHIMERE modelling system, which provides operational experimental chemical weather forecasts at CETEMPS (http://pumpkin.aquila.infn.it/forechem/), to simulate the years 2008-2012 at low resolution over Europe (0.5° × 0.5°) and moderate resolution over Italy (0.15° × 0.15°). We compared the simulated dataset with available observations from the European Environment Agency database (AirBase) and characterized model skill and compliance with EU legislation using the Delta tool from the FAIRMODE project (http://fairmode.jrc.ec.europa.eu/). The bias correction techniques adopted are, in order of complexity: (1) application of multiplicative factors calculated as the ratio of model-to-observed concentrations averaged over the previous days; (2) correction of the statistical distribution of model forecasts to make it similar to that of the observations; (3) development and application of Model Output Statistics (MOS) regression equations. We illustrate the differences and the advantages/disadvantages of the three approaches. All the methods are relatively easy to implement for other modelling systems.
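Technique (1) can be sketched in a couple of lines: rescale today's forecast by the observed-to-modeled concentration ratio over a trailing window (the inverse of the model-to-observed factor named above). The function and argument names are ours, and the length of the trailing window is left to the caller:

```python
import numpy as np

def ratio_corrected_forecast(todays_forecast, past_forecasts, past_obs):
    # Multiplicative bias correction: scale today's deterministic forecast
    # by the mean observed / mean forecast ratio over the preceding days.
    factor = np.mean(past_obs) / np.mean(past_forecasts)
    return todays_forecast * factor
```

For instance, if the model ran a factor of two high over the trailing window, a raw forecast of 30 µg/m³ is corrected down to 15 µg/m³.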
Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Mather, Janice L.; Taylor, Shawn C.
2015-01-01
In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
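The root-sum-square combination described above is a one-liner once the individual contributions are expressed in the same units. A sketch, with our argument names mirroring the terms listed in the abstract; treating the contributions as uncorrelated is the standard RSS assumption:

```python
import math

def total_leak_uncertainty(resolution, repeatability, hysteresis, drift, cal_std):
    # Root-sum-square combination of independent error contributions:
    # the leak detector unit's resolution, repeatability, hysteresis and
    # drift, plus the calibration-standard uncertainty.
    terms = (resolution, repeatability, hysteresis, drift, cal_std)
    return math.sqrt(sum(t * t for t in terms))
```

Because the terms add in quadrature, one dominant contribution (here, the precision terms) largely sets the total.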
Collection of holes in thick TlBr detectors at low temperature
NASA Astrophysics Data System (ADS)
Dönmez, Burçin; He, Zhong; Kim, Hadong; Cirignano, Leonard J.; Shah, Kanai S.
2012-10-01
A 3.5×3.5×4.6 mm³ thick TlBr detector with pixellated Au/Cr anodes made by Radiation Monitoring Devices Inc. was studied. The detector has a planar cathode and nine anode pixels surrounded by a guard ring. The pixel pitch is 1.0 mm. Digital pulse waveforms of preamplifier outputs were recorded using a multi-channel GaGe PCI digitizer board. Several experiments were carried out at −20 °C, with the detector under bias for over a month. An energy resolution of 1.7% FWHM at 662 keV was measured without any correction at −2400 V bias. Holes generated at all depths can be collected by the cathode at −2400 V bias, which made depth correction using the cathode-to-anode ratio technique difficult since both charge carriers contribute to the signal. An energy resolution of 5.1% FWHM at 662 keV was obtained from the best pixel electrode without depth correction at +1000 V bias. In this positive-bias case, the pixel electrode was actually collecting holes. A hole mobility-lifetime product of 0.95×10⁻⁴ cm²/V has been estimated from the measurement data.
Michael J. Erickson; Brian A. Colle; Joseph J. Charney
2012-01-01
The performance of a multimodel ensemble over the northeast United States is evaluated before and after applying bias correction and Bayesian model averaging (BMA). The 13-member Stony Brook University (SBU) ensemble at 0000 UTC is combined with the 21-member National Centers for Environmental Prediction (NCEP) Short-Range Ensemble Forecast (SREF) system at 2100 UTC....
Use of Bayes theorem to correct size-specific sampling bias in growth data.
Troynikov, V S
1999-03-01
The Bayesian decomposition of the posterior distribution was used to develop a likelihood function to correct bias in the estimates of population parameters from data collected randomly with size-specific selectivity. Positive distributions with time as a parameter were used for the parametrization of growth data. Numerical illustrations are provided. Alternative applications of the likelihood to estimate selectivity parameters are discussed.
Hay, L.E.; Clark, M.P.
2003-01-01
This paper examines hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data from stations located within the area of the RegCM2 model output used for each basin, excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries captured the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically; but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, although in the case of the large station timeseries (All-Sta) the bias correction did indeed 'correct' for the change in scale. It is unknown whether bias corrections to model output will remain valid in a future climate.
Future work is warranted to identify the causes of (and remove) systematic biases in DDS simulations, and to improve DDS simulations of daily variability in local climate. Until then, SDS-based simulations of runoff appear to be the safer downscaling choice.
Bias-correction and Spatial Disaggregation for Climate Change Impact Assessments at a basin scale
NASA Astrophysics Data System (ADS)
Nyunt, Cho; Koike, Toshio; Yamamoto, Akio; Nemoto, Toshihoro; Kitsuregawa, Masaru
2013-04-01
Basin-scale climate change impact studies rely mainly on general circulation models (GCMs) and their associated emission scenarios. Realistic and reliable GCM data are crucial for national- or basin-scale impact and vulnerability assessments aimed at building a safe society under climate change. However, GCMs fail to simulate regional climate features owing to imprecise parameterization schemes in atmospheric physics and coarse resolution. This study describes how to exclude unsatisfactory GCMs with respect to the basin of interest, how to minimize the biases of GCM precipitation through statistical bias correction, and how to apply a spatial disaggregation scheme, a kind of downscaling, within a basin. GCM rejection is based on the regional climate features of seasonal evolution as a benchmark and depends mainly on the spatial correlation and root mean square error of precipitation and atmospheric variables over the target region. The Global Precipitation Climatology Project (GPCP) and the Japanese 25-year Reanalysis Project (JRA-25) are used as references in evaluating the spatial pattern and error of each GCM. The statistical bias-correction scheme addresses three main flaws of GCM precipitation: too many low-intensity drizzle days with no dry days, underestimation of heavy rainfall, and misrepresented inter-annual variability of the local climate. Biases in heavy rainfall are corrected by fitting a generalized Pareto distribution (GPD) to a peak-over-threshold series. The rain-day frequency error is fixed by rank order statistics, and the seasonal variation problem is solved by fitting a gamma distribution in each month to in-situ stations and their corresponding GCM grids. By applying the proposed bias-correction technique to all in-situ stations and their respective GCM grids, an easy and effective downscaling process for impact studies at the basin scale is accomplished.
The applicability of the proposed method has been examined in several basins in various climate regions around the world, and the biases are controlled well in all of them. The bias-corrected and downscaled GCM precipitation is then ready for use in the Water and Energy Budget based Distributed Hydrological Model (WEB-DHM) to analyse streamflow change or water availability of a target basin under near-future climate change, and it can support interdisciplinary studies of drought, flood, food, health, and so on. In summary, an effective and comprehensive statistical bias-correction method was established to bridge the gap from GCM scale to basin scale without difficulty. This gap filling also supports sound river-management decisions in the basin by providing more reliable information for building a resilient society.
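The three flaws named above suggest a three-step correction, which can be sketched as follows. This is an illustrative reimplementation, not the project's code: the 0.1 mm wet-day cut, the 95th-percentile tail threshold, and the station-by-station looping are all assumptions of this sketch.

```python
import numpy as np
from scipy import stats

def bias_correct_precip(gcm, obs, tail_q=0.95):
    # (1) Drizzle truncation: find the GCM threshold that reproduces the
    #     observed wet-day frequency (rank-order matching).
    # (2) Bulk: gamma-to-gamma quantile mapping of wet-day amounts.
    # (3) Tail: re-map excesses over the tail_q quantile with generalized
    #     Pareto (GPD) fits to peak-over-threshold series.
    out = np.zeros_like(gcm, dtype=float)
    wet_frac = np.mean(obs > 0.1)
    thresh = np.quantile(gcm, 1.0 - wet_frac)
    wet = gcm > thresh
    g_wet, o_wet = gcm[wet], obs[obs > 0.1]
    # (2) gamma fits with location pinned at zero for the wet-day bulk
    ga_g = stats.gamma.fit(g_wet, floc=0)
    ga_o = stats.gamma.fit(o_wet, floc=0)
    mapped = stats.gamma.ppf(stats.gamma.cdf(g_wet, *ga_g), *ga_o)
    # (3) GPD-to-GPD mapping of excesses over the tail thresholds
    gt, ot = np.quantile(g_wet, tail_q), np.quantile(o_wet, tail_q)
    gp_g = stats.genpareto.fit(g_wet[g_wet > gt] - gt, floc=0)
    gp_o = stats.genpareto.fit(o_wet[o_wet > ot] - ot, floc=0)
    hi = g_wet > gt
    # guard against mapping the largest excess to an unbounded quantile
    p_tail = np.minimum(stats.genpareto.cdf(g_wet[hi] - gt, *gp_g), 0.9999)
    mapped[hi] = ot + stats.genpareto.ppf(p_tail, *gp_o)
    out[wet] = mapped
    return out
```

In the full scheme this would be applied month by month (the gamma fits per calendar month) for each in-situ station and its GCM grid cell.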
Grinde, Kelsey E.; Arbet, Jaron; Green, Alden; O'Connell, Michael; Valcarcel, Alessandra; Westra, Jason; Tintle, Nathan
2017-01-01
To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s) in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as “winner's curse.” We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p < 2.2 × 10⁻⁶) and, consequently, substantially improves mean squared error and variant prioritization/ranking. The method is particularly helpful in adjustment for winner's curse effects when the initial gene-based test has low power and for relatively more common, non-causal variants. Adjustment for winner's curse is recommended for all post-hoc estimation and ranking of variants after a gene-based test. Further work is necessary to continue seeking ways to reduce bias and improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures. PMID:28959274
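The bootstrap idea can be sketched in a simplified single-variant form: re-estimate the effect in bootstrap replicates, keep only replicates that pass the same significance threshold (mimicking the selection that caused the curse), and subtract the implied bias. This is a sketch under that simplification, not the authors' gene-based procedure; the threshold, replicate count, and names are ours.

```python
import numpy as np

def ols_slope_z(xs, ys):
    # Simple linear regression slope and its z-statistic
    xc = xs - xs.mean()
    b = np.dot(xc, ys) / np.dot(xc, xc)
    resid = ys - ys.mean() - b * xc
    se = np.sqrt(np.dot(resid, resid) / (len(xs) - 2) / np.dot(xc, xc))
    return b, b / se

def winners_curse_corrected(x, y, z_thresh=1.96, n_boot=500, seed=0):
    # Bootstrap correction for selection ("winner's curse") bias: the mean
    # of the selection-conditioned bootstrap estimates, minus the original
    # estimate, approximates the bias, which is then subtracted.
    rng = np.random.default_rng(seed)
    b_hat, _ = ols_slope_z(x, y)
    kept = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        bb, zb = ols_slope_z(x[idx], y[idx])
        if abs(zb) >= z_thresh:  # condition on the same selection event
            kept.append(bb)
    bias = (np.mean(kept) - b_hat) if kept else 0.0
    return b_hat - bias
```

When the test that triggered reporting had low power, few bootstrap replicates pass the threshold and those that do overshoot, so the estimated bias (and hence the correction) is large.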
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larraga-Gutierrez, J
Purpose: To correct for the over-response of mini-ionization chambers with high-Z central electrodes. The hypothesis is that by applying a negative/reverse voltage, it is possible to suppress the signal generated in the high-Z central electrode by low-energy photons. Methods: The mini-ionization chambers used in the experiments were a PTW-31014, PTW-31006 and IBA-CC01. The PTW-31014 has an aluminum central electrode while the PTW-31006 and IBA-CC01 have a steel one. Total scatter factors (Scp) were measured for a 6 MV photon beam down to a square field size of 0.5 cm. The measurements were performed in water at 10 cm depth with an SAD of 100 cm. The Scp were measured with the dosimeters at +400V bias voltage. For the PTW-31006 and IBA-CC01, the measurements were repeated at −400V bias voltage. Field factors in water were also calculated with Monte Carlo simulations for comparison. Results: The Scp measured at +400V with the PTW-31006 and IBA-CC01 detectors agreed within 0.2% down to a field size of 1.5 cm. Both dosimeters showed a systematic difference of about 2.5% from the Scp measured with the PTW-31014 and from the Monte Carlo calculated field factors. The Scp measured at −400V with the PTW-31006 and IBA-CC01 detectors were in close agreement with the PTW-31014 Scp and the field factors, within 0.3% and 1.0%, respectively. For the IBA-CC01, good agreement (1%) was found down to a field size of 1.0 cm. All the dosimeters showed differences of up to 17% between the measured Scp and the field factor for the 0.5 cm field size. Conclusion: By applying a negative/reverse voltage to mini-ionization chambers with a high-Z central electrode it was possible to correct for their over-response to low-energy photons.
Biases in comparative analyses of extinction risk: mind the gap.
González-Suárez, Manuela; Lucas, Pablo M; Revilla, Eloy
2012-11-01
1. Comparative analyses are used to address the key question of what makes a species more prone to extinction by exploring the links between vulnerability and intrinsic species' traits and/or extrinsic factors. This approach requires comprehensive species data but information is rarely available for all species of interest. As a result comparative analyses often rely on subsets of relatively few species that are assumed to be representative samples of the overall studied group. 2. Our study challenges this assumption and quantifies the taxonomic, spatial, and data type biases associated with the quantity of data available for 5415 mammalian species using the freely available life-history database PanTHERIA. 3. Moreover, we explore how existing biases influence results of comparative analyses of extinction risk by using subsets of data that attempt to correct for detected biases. In particular, we focus on links between four species' traits commonly linked to vulnerability (distribution range area, adult body mass, population density and gestation length) and conduct univariate and multivariate analyses to understand how biases affect model predictions. 4. Our results show important biases in data availability with c.22% of mammals completely lacking data. Missing data, which appear to be not missing at random, occur frequently in all traits (14-99% of cases missing). Data availability is explained by intrinsic traits, with larger mammals occupying bigger range areas being the best studied. Importantly, we find that existing biases affect the results of comparative analyses by overestimating the risk of extinction and changing which traits are identified as important predictors. 5. Our results raise concerns over our ability to draw general conclusions regarding what makes a species more prone to extinction. 
Missing data represent a prevalent problem in comparative analyses, and unfortunately, because data are not missing at random, conventional approaches to filling data gaps are not valid or present important challenges. These results show the importance of making appropriate inferences from comparative analyses by focusing on the subset of species for which data are available. Ultimately, addressing the data bias problem requires greater investment in data collection and dissemination, as well as the development of methodological approaches to effectively correct existing biases. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
Corrected ROC analysis for misclassified binary outcomes.
Zawistowski, Matthew; Sussman, Jeremy B; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A; Wiitala, Wyndy L
2017-06-15
Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
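The attenuation of AUC under label swapping is easy to reproduce in a small simulation; the score model, prevalence, and flip rates below are invented for illustration and this is not the authors' adjusted ROC procedure:

```python
import numpy as np

def auc(scores, labels):
    # Rank-based (Mann-Whitney) estimate of the area under the ROC curve
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(1)
n = 20000
y = rng.random(n) < 0.3                  # true outcomes, 30% prevalence
score = y + rng.normal(0.0, 1.0, n)      # informative risk score
# Flip 10% of cases and 5% of controls to mimic misclassified EHR outcomes
flip = np.where(y, rng.random(n) < 0.10, rng.random(n) < 0.05)
y_obs = np.where(flip, ~y, y)
true_auc = auc(score, y.astype(int))
observed_auc = auc(score, y_obs.astype(int))
```

Running this, `observed_auc` lands noticeably below `true_auc`: scoring against the flipped labels pulls the estimate toward 0.5, which is exactly the bias the adjusted ROC procedure removes.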
Process-based evaluation of the ÖKS15 Austrian climate scenarios: First results
NASA Astrophysics Data System (ADS)
Mendlik, Thomas; Truhetz, Heimo; Jury, Martin; Maraun, Douglas
2017-04-01
The climate scenarios for Austria from the ÖKS15 project consist of 13 downscaled and bias-corrected RCMs from the EURO-CORDEX project. This dataset is meant for the broad public and is now available at the central national archive for climate data (CCCA Data Center). Given this broad public outreach, it is essential to objectively assess and publish the limitations of this dataset in a form that a non-scientific audience can also understand. Even though systematic climatological biases have been accounted for by the Scaled Distribution Mapping (SDM) bias-correction method, it is not guaranteed that the model biases have been removed for the right reasons. If climate scenarios do not get the patterns of synoptic variability right, biases will still prevail in certain weather patterns. Ultimately, this will have consequences for the projected climate change signals. In this study we derive typical weather types in the Alpine region from mean sea level pressure patterns in ERA-Interim data and check the occurrence of these synoptic phenomena in the EURO-CORDEX data and their corresponding driving GCMs. Based on these weather patterns we analyze the remaining biases of the downscaled and bias-corrected scenarios. We argue that such a process-based evaluation is not only necessary from a scientific point of view, but can also help the broader public to understand the limitations of downscaled climate scenarios, as model errors can be interpreted in terms of everyday observable weather.
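Scaled Distribution Mapping itself corrects the distribution while preserving the raw model's change signal; as a hedged illustration of the distribution-mapping family it belongs to, here is a minimal empirical quantile mapping sketch on synthetic normal "climates" (not ÖKS15 data, and not SDM's signal-preserving variant).

```python
import numpy as np

rng = np.random.default_rng(1)

def quantile_map(model_ref, obs_ref, model_new):
    """Empirical quantile mapping: map each model value onto the observed
    value at the same cumulative probability in the reference period."""
    p = np.searchsorted(np.sort(model_ref), model_new) / len(model_ref)
    return np.quantile(obs_ref, np.clip(p, 0.0, 1.0))

# Synthetic reference climates: the model runs too warm and too variable
obs = rng.normal(10.0, 3.0, 5000)
model = rng.normal(13.0, 5.0, 5000)
corrected = quantile_map(model, obs, model)
print(round(corrected.mean(), 1), round(corrected.std(), 1))
```

By construction the corrected series reproduces the observed distribution, which is exactly why such a correction can look perfect climatologically while still failing under particular weather patterns, as the abstract argues.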
Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field
NASA Astrophysics Data System (ADS)
Constable, C.; Johnson, C. L.
2009-05-01
We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the temporal sampling question we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in TAF and PSV studies. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and that resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K. P., C. G. Constable, and C. L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, doi:10.1029/2005GC001181. Tauxe, L., and T. Yamazaki, 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.
Bias-field equalizer for bubble memories
NASA Technical Reports Server (NTRS)
Keefe, G. E.
1977-01-01
Magnetoresistive Permalloy sensor monitors bias field required to maintain bubble memory. Sensor provides error signal that, in turn, corrects magnitude of bias field. Error signal from sensor can be used to control magnitude of bias field through either an auxiliary set of bias-field coils around the permanent magnet, or the current in small coils used to remagnetize the permanent magnet with an infrequent, short, high-current pulse or a short sequence of pulses.
Analytic Methods for Adjusting Subjective Rating Schemes.
ERIC Educational Resources Information Center
Cooper, Richard V. L.; Nelson, Gary R.
Statistical and econometric techniques of correcting for supervisor bias in models of individual performance appraisal were developed, using a variant of the classical linear regression model. Location bias occurs when individual performance is systematically overestimated or underestimated, while scale bias results when raters either exaggerate…
Use of the Magnetic Field for Improving Gyroscopes’ Biases Estimation
Munoz Diaz, Estefania; de Ponte Müller, Fabian; García Domínguez, Juan Jesús
2017-01-01
An accurate orientation is crucial to a satisfactory position in pedestrian navigation. The orientation estimation, however, is greatly affected by errors such as the biases of gyroscopes. In order to minimize the error in the orientation, the biases of gyroscopes must be estimated and subtracted. In the state of the art it has been proposed, but not proved, that the estimation of the biases can be accomplished using magnetic field measurements. The objective of this work is to evaluate the effectiveness of using magnetic field measurements to estimate the biases of medium-cost micro-electromechanical system (MEMS) gyroscopes. We carry out the evaluation with experiments that cover both quasi-error-free and medium-cost MEMS turn rate and magnetic measurements. The impact of different homogeneous magnetic field distributions and magnetically perturbed environments is analyzed. Additionally, the effect of successful bias subtraction on the orientation and the estimated trajectory is detailed. Our results show that the use of magnetic field measurements is beneficial to correct bias estimation. Further, we show that different magnetic field distributions affect the bias estimation process differently. Moreover, the biases are likewise correctly estimated under perturbed magnetic fields. However, for indoor and urban scenarios the bias estimation process is very slow. PMID:28398232
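A toy version of the idea under test, not the paper's estimator: in a homogeneous magnetic field, heading changes differentiated from magnetometer readings are free of gyro bias, so the mean gyro-minus-magnetometer turn-rate difference recovers the bias. All signal parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

dt = 0.01                                  # 100 Hz sampling
t = np.arange(0.0, 60.0, dt)
true_rate = 0.5 * np.sin(0.2 * t)          # true turn rate (rad/s), invented motion
true_bias = 0.02                           # constant gyro bias (rad/s)

# Gyro measures rate plus bias plus noise
gyro = true_rate + true_bias + rng.normal(0.0, 0.01, t.size)

heading = np.cumsum(true_rate) * dt                     # true heading (rad)
mag_heading = heading + rng.normal(0.0, 0.002, t.size)  # magnetometer-derived heading
mag_rate = np.gradient(mag_heading, dt)                 # bias-free turn rate estimate

bias_est = np.mean(gyro - mag_rate)
print(round(bias_est, 3))
```

In perturbed fields the magnetometer heading itself becomes biased, which is why the paper examines different magnetic field distributions separately.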
Power spectrum precision for redshift space distortions
NASA Astrophysics Data System (ADS)
Linder, Eric V.; Samsing, Johan
2013-02-01
Redshift space distortions in galaxy clustering offer a promising technique for probing the growth rate of structure and testing dark energy properties and gravity. We consider to what accuracy they need to be modeled in order not to unduly bias cosmological conclusions. Fitting for nonlinear and redshift space corrections to the linear theory real space density power spectrum in bins in wavemode, we analyze both the effect of marginalizing over these corrections and the bias due to not correcting them fully. While naively subpercent accuracy is required to avoid bias in the unmarginalized case, in the fitting approach the Kwan-Lewis-Linder reconstruction function for redshift space distortions is found to be accurately self-calibrated with little degradation in dark energy and gravity parameter estimation for a next generation galaxy redshift survey such as BigBOSS.
NASA Astrophysics Data System (ADS)
Abitew, T. A.; Roy, T.; Serrat-Capdevila, A.; van Griensven, A.; Bauwens, W.; Valdes, J. B.
2016-12-01
The Tekeze Basin in northern Ethiopia supports one of Africa's largest arch dams and plays a vital role in hydropower generation. However, little has been done on the hydrology of the basin because of limited in situ hydroclimatological data. Therefore, the main objective of this research is to simulate streamflow upstream of the Tekeze Dam using the Soil and Water Assessment Tool (SWAT) forced by bias-corrected rainfall from multiple satellite products (CMORPH, TMPA and PERSIANN-CCS). This talk will present the potential and skill of bias-corrected satellite rainfall products for streamflow prediction in tropical Africa. Additionally, the SWAT model results will be compared with previous conceptual hydrological models (HyMOD and HBV) from the SERVIR streamflow forecasting in African basins project (http://www.swaat.arizona.edu/index.html).
Ciceri, E; Recchia, S; Dossi, C; Yang, L; Sturgeon, R E
2008-01-15
The development and validation of a method for the determination of mercury in sediments using a sector field inductively coupled plasma mass spectrometer (SF-ICP-MS) for detection is described. The use of isotope dilution (ID) calibration is shown to solve analytical problems related to matrix composition. Mass bias is corrected using an internal mass bias correction technique, validated against the traditional standard bracketing method. The overall analytical protocol is validated against the NRCC PACS-2 marine sediment CRM. The estimated limit of detection is 12 ng/g. The proposed procedure was applied to the analysis of a real sediment core sampled at a depth of 160 m in Lake Como, where Hg concentrations ranged from 66 to 750 ng/g.
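Internal mass bias correction is commonly formulated with the exponential law R_meas = R_true · (m_a/m_b)^f; the sketch below (illustrative masses and ratios, not the paper's calibration values) derives the fractionation exponent f from an internal isotope pair of known ratio and applies it to an analyte pair measured in the same run.

```python
import math

def mass_bias_exponent(r_meas, r_true, m_a, m_b):
    """Solve r_meas = r_true * (m_a / m_b)**f for the fractionation exponent f."""
    return math.log(r_meas / r_true) / math.log(m_a / m_b)

def correct_ratio(r_meas, m_a, m_b, f):
    """Undo the exponential-law mass bias on a measured isotope ratio."""
    return r_meas / (m_a / m_b) ** f

# Exponent from an internal pair of known ("true") ratio (numbers illustrative):
f = mass_bias_exponent(r_meas=2.42, r_true=2.39, m_a=205.0, m_b=203.0)
# Apply the exponent to the analyte isotope pair (illustrative masses and ratio):
corrected = correct_ratio(r_meas=1.312, m_a=202.0, m_b=200.0, f=f)
print(round(f, 3), round(corrected, 4))
```

The exponential law is one of several conventions (linear and power laws also appear in the literature); which one a given protocol uses is a method-specific choice.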
Relative risk estimates from spatial and space-time scan statistics: Are they biased?
Prates, Marcos O.; Kulldorff, Martin; Assunção, Renato M.
2014-01-01
The purely spatial and space-time scan statistics have been successfully used by many scientists to detect and evaluate geographical disease clusters. Although the scan statistic has high power in correctly identifying a cluster, no study has considered the estimates of the cluster relative risk in the detected cluster. In this paper we evaluate whether there is any bias in these estimated relative risks. Intuitively, one may expect that the estimated relative risks have an upward bias, since the scan statistic cherry-picks high-rate areas to include in the cluster. We show that this intuition is correct for clusters with low statistical power, but with medium to high power the bias becomes negligible. The same behaviour is not observed for the prospective space-time scan statistic, where there is an increasingly conservative downward bias of the relative risk as the power to detect the cluster increases. PMID:24639031
NASA Astrophysics Data System (ADS)
Zhao, Lei; Lee, Xuhui; Liu, Shoudong
2013-09-01
Solar radiation at the Earth's surface is an important driver of meteorological and ecological processes. The objective of this study is to evaluate the accuracy of the reanalysis solar radiation produced by NARR (North American Regional Reanalysis) and MERRA (Modern-Era Retrospective Analysis for Research and Applications) against the FLUXNET measurements in North America. We found that both assimilation systems systematically overestimated the surface solar radiation flux on the monthly and annual scale, with an average bias error of +37.2 W m-2 for NARR and of +20.2 W m-2 for MERRA. The bias errors were larger under cloudy skies than under clear skies. A postreanalysis algorithm consisting of empirical relationships between model bias, a clearness index, and site elevation was proposed to correct the model errors. Results show that the algorithm can remove the systematic bias errors for both FLUXNET calibration sites (sites used to establish the algorithm) and independent validation sites. After correction, the average annual mean bias errors were reduced to +1.3 W m-2 for NARR and +2.7 W m-2 for MERRA. Applying the correction algorithm to the global domain of MERRA brought the global mean surface incoming shortwave radiation down by 17.3 W m-2 to 175.5 W m-2. Under the constraint of the energy balance, other radiation and energy balance terms at the Earth's surface, estimated from independent global data products, also support the need for a downward adjustment of the MERRA surface solar radiation.
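A minimal sketch of such a post-reanalysis correction, assuming a linear dependence of the bias on a clearness index and site elevation; all coefficients and data below are synthetic, not the paper's fitted relationships.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 300
clearness = rng.uniform(0.2, 0.8, n)            # clearness index per site-month
elev_km = rng.uniform(0.0, 3.0, n)              # site elevation (km)
# Synthetic bias structure: largest overestimate under cloudy skies (low clearness)
true_bias = 60.0 - 50.0 * clearness + 3.0 * elev_km

observed = 200.0 + rng.normal(0.0, 5.0, n)      # flat synthetic "tower" flux (W m-2)
modeled = observed + true_bias + rng.normal(0.0, 5.0, n)

# Fit bias ~ a + b*clearness + c*elevation on calibration data, then subtract it
X = np.column_stack([np.ones(n), clearness, elev_km])
coef, *_ = np.linalg.lstsq(X, modeled - observed, rcond=None)
corrected = modeled - X @ coef

print(round((modeled - observed).mean(), 1),
      round((corrected - observed).mean(), 1))
```

The regression removes the mean bias on the calibration sample by construction; the paper's stronger claim is that the fitted relationships also transfer to independent validation sites.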
Assessing the Added Value of Dynamical Downscaling in the Context of Hydrologic Implication
NASA Astrophysics Data System (ADS)
Lu, M.; IM, E. S.; Lee, M. H.
2017-12-01
There is a scientific consensus that high-resolution climate simulations downscaled by Regional Climate Models (RCMs) can provide valuable refined information over the target region. However, a significant body of hydrologic impact assessment has been performed using climate information provided by Global Climate Models (GCMs) in spite of a fundamental spatial scale gap. This practice is probably based on the assumption that the substantial biases and spatial scale gap in raw GCM data can simply be removed by applying statistical bias correction and spatial disaggregation. Indeed, many previous studies argue that the benefit of dynamical downscaling using RCMs is minimal when linking climate data with a hydrological model, based on comparisons of hydrologic simulations driven by bias-corrected GCMs and bias-corrected RCMs. This may be true for long-term averaged climatological patterns, but it is not necessarily the case when looking into variability across various temporal scales. In this study, we investigate the added value of dynamical downscaling focusing on the performance in capturing climate variability. To do this, we evaluate the performance of a distributed hydrological model over a Korean river basin using the raw output from a GCM and an RCM, and the bias-corrected output from the GCM and RCM. The impacts of climate input data on streamflow simulation are comprehensively analyzed. [Acknowledgements] This research is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 17AWMP-B083066-04).
Controlling for anthropogenically induced atmospheric variation in stable carbon isotope studies
Long, E.S.; Sweitzer, R.A.; Diefenbach, D.R.; Ben-David, M.
2005-01-01
Increased use of stable isotope analysis to examine food-web dynamics, migration, transfer of nutrients, and behavior will likely result in expansion of stable isotope studies investigating human-induced global changes. Recent elevation of atmospheric CO2 concentration, related primarily to fossil fuel combustion, has reduced atmospheric CO2 δ13C (13C/12C), and this change in isotopic baseline has, in turn, reduced plant and animal tissue δ13C of terrestrial and aquatic organisms. Such depletion in CO2 δ13C and its effects on tissue δ13C may introduce bias into δ13C investigations, and if this variation is not controlled, may confound interpretation of results obtained from tissue samples collected over a temporal span. To control for this source of variation, we used a high-precision record of atmospheric CO2 δ13C from ice cores and direct atmospheric measurements to model modern change in CO2 δ13C. From this model, we estimated a correction factor that controls for atmospheric change; this correction reduces bias associated with changes in atmospheric isotopic baseline and facilitates comparison of tissue δ13C collected over multiple years. To exemplify the importance of accounting for atmospheric CO2 δ13C depletion, we applied the correction to a dataset of collagen δ13C obtained from mountain lion (Puma concolor) bone samples collected in California between 1893 and 1995. Before correction, in three of four ecoregions collagen δ13C decreased significantly concurrent with depletion of atmospheric CO2 δ13C (n ≥ 32, P ≤ 0.01). Application of the correction to collagen δ13C data removed trends from regions demonstrating significant declines, and measurement error associated with the correction did not add substantial variation to adjusted estimates. Controlling for long-term atmospheric variation and correcting tissue samples for changes in isotopic baseline facilitate analysis of samples that span a large temporal range. © Springer-Verlag 2005.
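At its core, the baseline adjustment subtracts the change in atmospheric CO2 δ13C between the sampling year and a reference year. The values below are illustrative round numbers, not the ice-core record fitted by the authors.

```python
# Illustrative atmospheric CO2 delta13C values (permil) for selected years;
# the study uses a high-precision ice-core + direct-measurement record instead.
ATM_D13C = {1893: -6.4, 1950: -6.8, 1995: -7.8}

def suess_correct(tissue_d13c, sample_year, ref_year=1893):
    """Shift a tissue delta13C value onto the reference-year atmospheric baseline."""
    return tissue_d13c - (ATM_D13C[sample_year] - ATM_D13C[ref_year])

# A 1995 collagen sample, expressed on the 1893 baseline:
print(round(suess_correct(-21.5, 1995), 2))
```

With the atmospheric trend removed, samples from 1893 and 1995 become directly comparable, which is the point of the correction.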
NASA Astrophysics Data System (ADS)
Kaiser, J.; Zhu, L.; Travis, K.; Jacob, D.
2017-12-01
In the southeast United States, biogenic isoprene fuels tropospheric ozone formation, and its oxidation products contribute significantly to organic aerosol. Bottom-up emission inventories rely on very limited isoprene emission and land-cover data, yielding uncertainties of a factor of 2 or more. Here, we use formaldehyde columns from the Ozone Monitoring Instrument in a high-resolution (0.25° x 0.3125°) adjoint-based inversion to constrain isoprene emissions over the SE US during August-September 2013. We find that the MEGANv2.1 inventory is biased high over most of the SE US. Our derived scaling factors show significant spatial variability, with the largest corrections applied to Louisiana and the Edwards Plateau in Texas. We test our inversion results against a comprehensive set of isoprene oxidation product observations from the NASA SEAC4RS flight campaign. The SEAC4RS data provide new confidence in the satellite retrievals and in the mechanism linking isoprene oxidation to formaldehyde production. Finally, we relate the posterior scaling factors to the underlying land type, and examine potential sources of observed biases.
Mirzazadeh, Ali; Mansournia, Mohammad-Ali; Nedjat, Saharnaz; Navadeh, Soodabeh; McFarland, Willi; Haghdoost, Ali Akbar; Mohammad, Kazem
2013-10-01
We present probabilistic and Bayesian techniques to correct for bias in categorical and numerical measures and empirically apply them to a recent survey of female sex workers (FSW) conducted in Iran. We used bias parameters from a previous validation study to correct estimates of behaviours reported by FSW. Monte-Carlo Sensitivity Analysis and Bayesian bias analysis produced point and simulation intervals (SI). The apparent and corrected prevalence differed by a minimum of 1% for the number of 'non-condom use sexual acts' (36.8% vs 35.8%) to a maximum of 33% for 'ever associated with a venue to sell sex' (35.5% vs 68.0%). The negative predictive value of the questionnaire for 'history of STI' and 'ever associated with a venue to sell sex' was 36.3% (95% SI 4.2% to 69.1%) and 46.9% (95% SI 6.3% to 79.1%), respectively. Bias-adjusted numerical measures of behaviours increased by 0.1 year for 'age at first sex act for money' to 1.5 for 'number of sexual contacts in last 7 days'. The 'true' estimates of most behaviours are considerably higher than those reported and the related SIs are wider than conventional CIs. Our analysis indicates the need for and applicability of bias analysis in surveys, particularly in stigmatised settings.
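A common building block of such probabilistic bias analysis is the Rogan-Gladen correction, p_true = (p_obs + Sp - 1)/(Se + Sp - 1), Monte-Carlo'd over sensitivity and specificity to obtain a simulation interval. This is a hedged sketch: the beta parameters below are invented, not the validation-study values used by the authors.

```python
import numpy as np

rng = np.random.default_rng(4)

def rogan_gladen(p_obs, se, sp):
    """Misclassification-corrected prevalence from sensitivity/specificity."""
    return (p_obs + sp - 1.0) / (se + sp - 1.0)

p_obs = 0.355                         # apparent prevalence of a behaviour
se = rng.beta(80, 20, 10000)          # sensitivity centred near 0.80 (invented)
sp = rng.beta(90, 10, 10000)          # specificity centred near 0.90 (invented)

sims = rogan_gladen(p_obs, se, sp)
sims = sims[(sims > 0.0) & (sims < 1.0)]   # discard inadmissible draws
med = np.median(sims)
lo, hi = np.percentile(sims, [2.5, 97.5])
print(round(med, 3), round(lo, 3), round(hi, 3))
```

The simulation interval is wider than a conventional confidence interval because it propagates the uncertainty in the bias parameters themselves, matching the pattern the abstract reports.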
A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging
Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.
2014-01-01
Purpose Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
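The paper's scheme works by voxel-wise subtraction across multiple independent acquisitions; the underlying principle, sketched here on synthetic Rician data (not the authors' exact estimator), is that magnitude noise inflates the mean squared signal by 2σ², which can be subtracted off before taking the square root.

```python
import numpy as np

rng = np.random.default_rng(5)

s_true, sigma, n = 2.5, 1.0, 200000        # invented signal level and noise std
real = s_true + rng.normal(0.0, sigma, n)  # in-phase channel with Gaussian noise
imag = rng.normal(0.0, sigma, n)           # quadrature channel
s_meas = np.hypot(real, imag)              # Rician-distributed magnitude signal

biased = np.sqrt(np.mean(s_meas**2))                    # noise-inflated estimate
corrected = np.sqrt(np.mean(s_meas**2) - 2 * sigma**2)  # noise bias subtracted
print(round(biased, 3), round(corrected, 3))
```

At low SNR (here 2.5, near the paper's 2.1 threshold) the uncorrected magnitude is substantially inflated, while the subtraction recovers the true signal level.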
Young Kim, Eun; Johnson, Hans J
2013-01-01
A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) an extended set of tissue definitions, and (4) multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated through a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessment through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with large data variation, with a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.
NASA Technical Reports Server (NTRS)
Radakovich, Jon; Bosilovich, M.; Chern, Jiun-dar; daSilva, Arlindo
2004-01-01
The NASA/NCAR Finite Volume GCM (fvGCM) with the NCAR CLM (Community Land Model) version 2.0 was integrated into the NASA/GMAO Finite Volume Data Assimilation System (fvDAS). A new method was developed for coupled skin temperature assimilation and bias correction where the analysis increment and bias correction term is passed into the CLM2 and considered a forcing term in the solution to the energy balance. For our purposes, the fvDAS CLM2 was run at 1 deg. x 1.25 deg. horizontal resolution with 55 vertical levels. We assimilate the ISCCP-DX (30 km resolution) surface temperature product. The atmospheric analysis was performed 6-hourly, while the skin temperature analysis was performed 3-hourly. The bias correction term, which was updated at the analysis times, was added to the skin temperature tendency equation at every timestep. In this presentation, we focus on the validation of the surface energy budget at the in situ reference sites for the Coordinated Enhanced Observation Period (CEOP). We will concentrate on sites that include independent skin temperature measurements and complete energy budget observations for the month of July 2001. In addition, MODIS skin temperature will be used for validation. Several assimilations were conducted and preliminary results will be presented.
Study Protocol, Sample Characteristics, and Loss to Follow-Up: The OPPERA Prospective Cohort Study
Bair, Eric; Brownstein, Naomi C.; Ohrbach, Richard; Greenspan, Joel D.; Dubner, Ron; Fillingim, Roger B.; Maixner, William; Smith, Shad; Diatchenko, Luda; Gonzalez, Yoly; Gordon, Sharon; Lim, Pei-Feng; Ribeiro-Dasilva, Margarete; Dampier, Dawn; Knott, Charles; Slade, Gary D.
2013-01-01
When studying incidence of pain conditions such as temporomandibular disorders (TMDs), repeated monitoring is needed in prospective cohort studies. However, monitoring methods usually have limitations and, over a period of years, some loss to follow-up is inevitable. The OPPERA prospective cohort study of first-onset TMD screened for symptoms using quarterly questionnaires and examined symptomatic participants to definitively ascertain TMD incidence. During the median 2.8-year observation period, 16% of the 3,263 enrollees completed no follow-up questionnaires, others provided incomplete follow-up, and examinations were not conducted for one third of symptomatic episodes. Although screening methods and examinations were found to have excellent reliability and validity, they were not perfect. Loss to follow-up varied according to some putative TMD risk factors, although multiple imputation to correct the problem suggested that bias was minimal. A second method of multiple imputation that evaluated bias associated with omitted and dubious examinations revealed a slight underestimate of incidence and some small biases in hazard ratios used to quantify effects of risk factors. Although “bottom line” statistical conclusions were not affected, multiply-imputed estimates should be considered when evaluating the large number of risk factors under investigation in the OPPERA study. Perspective These findings support the validity of the OPPERA prospective cohort study for the purpose of investigating the etiology of first-onset TMD, providing the foundation for other papers investigating risk factors hypothesized in the OPPERA project. PMID:24275220
Array-based satellite phase bias sensing: theory and GPS/BeiDou/QZSS results
NASA Astrophysics Data System (ADS)
Khodabandeh, A.; Teunissen, P. J. G.
2014-09-01
Single-receiver integer ambiguity resolution (IAR) is a measurement concept that makes use of network-derived non-integer satellite phase biases (SPBs), among other corrections, to recover and resolve the integer ambiguities of the carrier-phase data of a single GNSS receiver. If it is realized, the very precise integer ambiguity-resolved carrier-phase data would then contribute to the estimation of the receiver’s position, thus making (near) real-time precise point positioning feasible. Proper definition and determination of the SPBs take a leading part in developing the idea of single-receiver IAR. In this contribution, the concept of array-based between-satellite single-differenced (SD) SPB determination is introduced, which is aimed to reduce the code-dominated precision of the SD-SPB corrections. The underlying model is realized by giving the role of the local reference network to an array of antennas, mounted on rigid platforms, that are separated by short distances so that the same ionospheric delay is assumed to be experienced by all the antennas. To that end, a closed-form expression of the array-aided SD-SPB corrections is presented, thereby proposing a simple strategy to compute the SD-SPBs. After resolving double-differenced ambiguities of the array’s data, the variance of the SD-SPB corrections is shown to be reduced by a factor equal to the number of antennas. This improvement in precision is also affirmed by numerical results of the three GNSSs GPS, BeiDou and QZSS. Experimental results demonstrate that the integer-recovered ambiguities converge to integers faster, upon increasing the number of antennas aiding the SD-SPB corrections.
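The headline precision result, variance of the SD-SPB corrections reduced by a factor equal to the number of antennas, follows from averaging independent code-dominated noise across the rigid array. A synthetic check with an invented noise level:

```python
import numpy as np

rng = np.random.default_rng(6)

n_epochs, sigma, true_spb = 50000, 0.30, 1.234   # invented code-noise std (m)

# SD-SPB estimate from a single antenna vs. the mean over a 4-antenna array,
# assuming independent code noise of equal variance at each antenna
single = true_spb + rng.normal(0.0, sigma, (n_epochs, 1)).mean(axis=1)
array4 = true_spb + rng.normal(0.0, sigma, (n_epochs, 4)).mean(axis=1)

ratio = single.var() / array4.var()
print(round(ratio, 2))   # close to the number of antennas, 4
```

This is only the variance argument; the paper's contribution is the closed-form array-aided SD-SPB expression and the ambiguity resolution that makes the averaging legitimate.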
Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15
ERIC Educational Resources Information Center
Zhang, Jinming
2005-01-01
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
Further tests of entreaties to avoid hypothetical bias in referendum contingent valuation
Thomas C. Brown; Icek Ajzen; Daniel Hrubes
2003-01-01
Over-estimation of willingness to pay in contingent markets has been attributed largely to hypothetical bias. One promising approach for avoiding hypothetical bias is to tell respondents enough about such bias that they self-correct for it. A script designed for this purpose by Cummings and Taylor was used in hypothetical referenda that differed in payment amount. In...
Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting.
Khan, Tarik A; Friedensohn, Simon; Gorter de Vries, Arthur R; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T
2016-03-01
High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion-the intraclonal diversity index-which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology.
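The core of the UID idea can be shown in a few lines: counting clones by distinct UIDs attached before amplification, rather than by raw reads, cancels unequal PCR gains. Toy clone counts and gains below; not data from the paper.

```python
from collections import Counter

# Two clones present in equal true abundance, amplified with unequal gains
true_counts = {"cloneA": 500, "cloneB": 500}
pcr_gain = {"cloneA": 2, "cloneB": 10}           # invented PCR efficiencies

reads = []
for clone, n in true_counts.items():
    for i in range(n):
        uid = f"{clone}-{i}"                     # UID attached pre-amplification
        reads += [(clone, uid)] * pcr_gain[clone]

read_freq = Counter(clone for clone, _ in reads)        # raw read counts
uid_freq = Counter(clone for clone, _ in set(reads))    # distinct UIDs per clone

print(round(read_freq["cloneA"] / sum(read_freq.values()), 2))  # 0.17, biased
print(round(uid_freq["cloneA"] / sum(uid_freq.values()), 2))    # 0.5, corrected
```

The full MAF method additionally uses UIDs added during amplification to model PCR efficiency and to error-correct sequences, which this toy example does not attempt.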
Does RAIM with Correct Exclusion Produce Unbiased Positions?
Teunissen, Peter J. G.; Imparato, Davide; Tiberius, Christian C. J. M.
2017-01-01
As the navigation solution of exclusion-based RAIM follows from a combination of least-squares estimation and a statistically based exclusion process, the computation of the integrity of the navigation solution has to take the propagated uncertainty of the combined estimation-testing procedure into account. In this contribution, we analyse, theoretically as well as empirically, the effect that this combination has on the first statistical moment, i.e., the mean, of the computed navigation solution. It will be shown that, although statistical testing is intended to remove biases from the data, biases will always remain under the alternative hypothesis, even when the correct alternative hypothesis is properly identified. The a posteriori exclusion of a biased satellite range from the position solution will therefore never completely remove the bias in the position solution. PMID:28672862
Effects of vibration on inertial wind-tunnel model attitude measurement devices
NASA Technical Reports Server (NTRS)
Young, Clarence P., Jr.; Buehrle, Ralph D.; Balakrishna, S.; Kilgore, W. Allen
1994-01-01
Results of an experimental study of a wind tunnel model inertial angle-of-attack sensor response to a simulated dynamic environment are presented. The inertial device cannot distinguish between the gravity vector and the centrifugal accelerations associated with wind tunnel model vibration; this results in a model attitude measurement bias error. Significant bias error in model attitude measurement was found for the model system tested. The model attitude bias error was found to be vibration mode and amplitude dependent. A first order correction model was developed and used for estimating attitude measurement bias error due to dynamic motion. A method for correcting the output of the model attitude inertial sensor in the presence of model dynamics during on-line wind tunnel operation is proposed.
NASA Astrophysics Data System (ADS)
Jayasekera, D. L.; Kaluarachchi, J.; Kim, U.
2011-12-01
Rural river basins with sufficient water availability to maintain economic livelihoods can be affected by seasonal fluctuations of precipitation and sometimes by droughts. In addition, climate change impacts can also alter future water availability. General Circulation Models (GCMs) provide credible quantitative estimates of future climate conditions, but such estimates are often characterized by bias and coarse scale resolution, making it necessary to downscale the outputs for use in regional hydrologic models. This study develops a methodology to downscale and project future monthly precipitation in moderate scale basins where data are limited. A stochastic framework for single-site and multi-site generation of weekly rainfall is developed while preserving the historical temporal and spatial correlation structures. The spatial correlations in the simulated occurrences and the amounts are induced using spatially correlated yet serially independent random numbers. This method is applied to generate weekly precipitation data for a 100-year period in the Nam Ngum River Basin (NNRB), which has a land area of 16,780 km2 located in Lao P.D.R. The method is developed and applied using precipitation data from 1961 to 2000 for 10 selected weather stations that represent the basin rainfall characteristics. A bias-correction method based on fitted theoretical probability distribution transformations is applied to improve the monthly mean frequency, intensity and amount of raw GCM precipitation predicted at a given weather station, using CGCM3.1 and ECHAM5 for the SRES A2 emission scenario. The bias-correction procedure adjusts GCM precipitation to approximate the long-term frequency and intensity distribution observed at a given weather station. The index of agreement and mean absolute error are determined to assess the overall ability and performance of the bias-correction method. 
The generated precipitation series aggregated at monthly time step was perturbed by the change factors estimated using the corrected GCM and baseline scenarios for future time periods of 2011-2050 and 2051-2090. A network based hydrologic and water resources model, WEAP, was used to simulate the current water allocation and management practices to identify the impacts of climate change in the 20th century. The results of this work are used to identify the multiple challenges faced by stakeholders and planners in water allocation for competing demands in the presence of climate change impacts.
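The distribution-based transformation at the core of such bias correction can be sketched with an empirical quantile mapping (the study fits theoretical distributions; the function name and sample data here are illustrative): a model value is replaced by the observed value at the same cumulative probability.

```python
from bisect import bisect_left

def quantile_map(model_value, model_sample, obs_sample):
    """Empirical quantile (distribution) mapping.

    Replace a raw model value by the observed quantile at the same
    empirical non-exceedance probability, so the corrected series inherits
    the observed frequency/intensity distribution."""
    ms = sorted(model_sample)
    os = sorted(obs_sample)
    # empirical non-exceedance probability of the model value
    p = bisect_left(ms, model_value) / len(ms)
    # read the observed quantile at that probability
    idx = min(int(p * len(os)), len(os) - 1)
    return os[idx]

# toy example: the model is uniformly too wet by a factor of two
model = [2.0, 4.0, 6.0, 8.0, 10.0]
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(quantile_map(6.0, model, obs))  # → 3.0 (same rank in the observed CDF)
```

In practice the mapping is built per station (and, in the proposed daily approach, per sliding window) from the calibration period and then applied to future model output.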
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angelis, Georgios I., E-mail: georgios.angelis@sydney.edu.au; Ryder, William J.; Bashar, Rezaul
Purpose: Single photon emission computed tomography (SPECT) brain imaging of freely moving small animals would allow a wide range of important neurological processes and behaviors to be studied, which are normally inhibited by anesthetic drugs or precluded due to the animal being restrained. While rigid body motion of the head can be tracked and accounted for in the reconstruction, activity in the torso may confound brain measurements, especially since motion of the torso is more complex (i.e., nonrigid) and not well correlated with that of the head. The authors investigated the impact of mispositioned events and attenuation due to the torso on the accuracy of motion corrected brain images of freely moving mice. Methods: Monte Carlo simulations of a realistic voxelized mouse phantom and a dual compartment phantom were performed. Each phantom comprised a target and an extraneous compartment which were able to move independently of each other. Motion correction was performed based on the known motion of the target compartment only. Two SPECT camera geometries were investigated: a rotating single head detector and a stationary full ring detector. The effects of motion, detector geometry, and energy of the emitted photons (hence, attenuation) on bias and noise in reconstructed brain regions were evaluated. Results: The authors observed two main sources of bias: (a) motion-related inconsistencies in the projection data and (b) the mismatch between attenuation and emission. Both effects arise because the orientation of the torso is difficult to track and model, and therefore cannot be conveniently corrected for. The motion induced bias in some regions was up to 12% when no attenuation effects were considered, while it reached 40% when also combined with attenuation related inconsistencies. 
The detector geometry (i.e., rotating vs full ring) has a big impact on the accuracy of the reconstructed images, with the full ring detector being more advantageous. Conclusions: Motion-induced inconsistencies in the projection data and attenuation/emission mismatch are the two main causes of bias in reconstructed brain images when there is complex motion. It appears that these two factors have a synergistic effect on the qualitative and quantitative accuracy of the reconstructed images.
Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard
2003-12-01
Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.
Declining Bias and Gender Wage Discrimination? A Meta-Regression Analysis
ERIC Educational Resources Information Center
Jarrell, Stephen B.; Stanley, T. D.
2004-01-01
The meta-regression analysis reveals a strong tendency for estimates of gender wage discrimination to fall over time, although wage discrimination against women persists. The biasing effects of the researcher's gender and of not correcting for selection bias have weakened, and changes in the labor market have made them less important.
Härkänen, Tommi; Kaikkonen, Risto; Virtala, Esa; Koskinen, Seppo
2014-11-06
To assess the nonresponse rates in a questionnaire survey with respect to administrative register data, and to correct the bias statistically. The Finnish Regional Health and Well-being Study (ATH) in 2010 was based on a national sample and several regional samples. Missing data analysis was based on socio-demographic register data covering the whole sample. Inverse probability weighting (IPW) and doubly robust (DR) methods were estimated using the logistic regression model, which was selected using the Bayesian information criteria. The crude, weighted and true self-reported turnout in the 2008 municipal election and prevalences of entitlements to specially reimbursed medication, and the crude and weighted body mass index (BMI) means were compared. The IPW method appeared to remove a relatively large proportion of the bias compared to the crude prevalence estimates of the turnout and the entitlements to specially reimbursed medication. Several demographic factors were shown to be associated with missing data, but few interactions were found. Our results suggest that the IPW method can improve the accuracy of results of a population survey, and the model selection provides insight into the structure of missing data. However, health-related missing data mechanisms are beyond the scope of statistical methods, which mainly rely on socio-demographic information to correct the results.
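The IPW estimator the study relies on can be sketched in a few lines (an illustrative toy; in the study the response probabilities are themselves estimated by logistic regression on register covariates, whereas here they are taken as known):

```python
def ipw_mean(values, responded, response_prob):
    """Inverse probability weighted mean.

    Each respondent's value is weighted by 1 / P(respond), so groups that
    are under-represented among respondents count proportionally more."""
    num = sum(v / p for v, r, p in zip(values, responded, response_prob) if r)
    den = sum(1 / p for r, p in zip(responded, response_prob) if r)
    return num / den

# toy population: 5 people with outcome 1 (respond with prob 0.8),
# 5 people with outcome 0 (respond with prob 0.2); true mean = 0.5
values = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
responded = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
probs = [0.8] * 5 + [0.2] * 5

crude = sum(v for v, r in zip(values, responded) if r) / sum(responded)
print(crude)                              # 0.8, biased by nonresponse
print(ipw_mean(values, responded, probs))  # 0.5, bias removed by weighting
```

The doubly robust (DR) variant mentioned in the abstract additionally combines this weighting with an outcome regression, so the estimate remains consistent if either model is correct.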
Correction of self-reported BMI based on objective measurements: a Belgian experience.
Drieskens, S; Demarest, S; Bel, S; De Ridder, K; Tafforeau, J
2018-01-01
Based on successive Health Interview Surveys (HIS), it has been demonstrated that, in Belgium too, obesity, measured by means of a self-reported body mass index (BMI, in kg/m²), is a growing public health problem that needs to be monitored as accurately as possible. Studies have shown that self-reported BMI can be biased; consequently, if the aim is to rely on self-reported BMI, adjustment is recommended. Data on measured and self-reported BMI derived from the Belgian Food Consumption Survey (FCS) 2014 offer the opportunity to do so. The HIS and FCS are cross-sectional surveys based on representative population samples. This study focused on adults aged 18-64 years (sample HIS = 6545 and FCS = 1213). Measured and self-reported BMI collected in the FCS were used to assess possible misreporting. Using FCS data, correction factors (measured BMI/self-reported BMI) were calculated as a function of a combination of background variables (region, gender, educational level and age group). Individual self-reported BMIs from the HIS 2013 were then multiplied by the corresponding correction factors to produce a corrected BMI classification. When compared with the measured BMI, the self-reported BMI in the FCS was underestimated (mean 0.97 kg/m²). 28% of obese people underestimated their BMI. After applying the correction factors, the prevalence of obesity based on HIS data increased significantly (from 13% with the original HIS data to 17% with the corrected HIS data) and approximated the measured prevalence derived from the FCS data. Since self-reported BMI is underestimated, it is recommended to adjust it to obtain the accurate estimates that are important for decision making.
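The two-survey correction procedure can be sketched as follows (a minimal illustration; the grouping key and BMI values are invented, and the paper's groups cross region, gender, educational level and age group):

```python
from collections import defaultdict

def correction_factors(records):
    """records: (group, measured_bmi, self_reported_bmi) triples from a
    survey with both values available (FCS-style). Returns per-group
    factors measured / self-reported."""
    totals = defaultdict(lambda: [0.0, 0.0])
    for group, measured, reported in records:
        totals[group][0] += measured
        totals[group][1] += reported
    return {g: m / s for g, (m, s) in totals.items()}

def corrected_bmi(reported, group, factors):
    """Apply a group's factor to a self-reported BMI from the other
    survey (HIS-style)."""
    return reported * factors[group]

# hypothetical FCS-style calibration data for one group
fcs = [("women 18-34", 26.0, 25.0), ("women 18-34", 28.0, 27.0)]
f = correction_factors(fcs)   # factor = 54/52 ≈ 1.038 for this group
```

Each corrected BMI is then reclassified (normal/overweight/obese), which is how the corrected obesity prevalence in the abstract is obtained.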
2017-06-01
Reports an error in "Racial Bias in Mock Juror Decision-Making: A Meta-Analytic Review of Defendant Treatment" by Tara L. Mitchell, Ryann M. Haw, Jeffrey E. Pfeifer and Christian A. Meissner (Law and Human Behavior, 2005[Dec], Vol 29[6], 621-637). In the article, all of the numbers in Appendix A were correct, but the signs were reversed for z' in a number of studies, which are listed. Also, in Appendix B, some values were incorrect, some signs were reversed, and some values were missing. The corrected appendix is included. (The following abstract of the original article appeared in record 2006-00971-001.) Common wisdom seems to suggest that racial bias, defined as disparate treatment of minority defendants, exists in jury decision-making, with Black defendants being treated more harshly by jurors than White defendants. The empirical research, however, is inconsistent--some studies show racial bias while others do not. Two previous meta-analyses have found conflicting results regarding the existence of racial bias in juror decision-making (Mazzella & Feingold, 1994, Journal of Applied Social Psychology, 24, 1315-1344; Sweeney & Haney, 1992, Behavioral Sciences and the Law, 10, 179-195). This research takes a meta-analytic approach to further investigate the inconsistencies within the empirical literature on racial bias in juror decision-making by defining racial bias as disparate treatment of racial out-groups (rather than focusing upon the minority group alone). Our results suggest that a small, yet significant, effect of racial bias in decision-making is present across studies, but that the effect becomes more pronounced when certain moderators are considered. The state of the research will be discussed in light of these findings. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Mixed Model Association with Family-Biased Case-Control Ascertainment.
Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L
2017-01-05
Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Boundary point corrections for variable radius plots - simulation results
Margaret Penner; Sam Otukol
2000-01-01
The boundary plot problem is encountered when a forest inventory plot includes two or more forest conditions. Depending on the correction method used, the resulting estimates can be biased. The various correction alternatives are reviewed. No correction, area correction, half sweep, and toss-back methods are evaluated using simulation on an actual data set. Based on...
Is the Pearson r[squared] Biased, and if So, What Is the Best Correction Formula?
ERIC Educational Resources Information Center
Wang, Zhongmiao; Thompson, Bruce
2007-01-01
In this study the authors investigated the use of 5 (i.e., Claudy, Ezekiel, Olkin-Pratt, Pratt, and Smith) R[squared] correction formulas with the Pearson r[squared]. The authors estimated adjustment bias and precision under 6 x 3 x 6 conditions (i.e., population [rho] values of 0.0, 0.1, 0.3, 0.5, 0.7, and 0.9; population shapes normal, skewness…
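Of the formulas the study compares, the Ezekiel correction is the most widely used; it can be written as a one-liner (the other formulas differ only in their shrinkage multipliers):

```python
def ezekiel_adjusted_r2(r2, n, p=1):
    """Ezekiel's adjusted R^2: shrinks the sample R^2 toward zero to
    offset its positive small-sample bias.

    r2: sample R^2 (a Pearson r^2 when there is one predictor)
    n:  sample size
    p:  number of predictors (p = 1 for a Pearson r^2)
    """
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# with r^2 = 0.3 and n = 20, the adjusted value is pulled below 0.3
print(ezekiel_adjusted_r2(0.3, 20))
```

Note that the adjusted value can be negative when the sample r² is near zero, which is one reason the competing formulas behave differently in small samples.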
Calibration results for the GEOS-3 altimeter
NASA Technical Reports Server (NTRS)
Martin, C. F.; Butler, M. L.
1977-01-01
Data from the GEOS-3 altimeter were analyzed, for both the intensive and global modes, to determine the altitude bias levels for each mode and to verify the accuracy of the time tags which have been applied to the data. The best estimates of the biases are -5.30 ± 0.2 m (intensive mode) and -3.55 ± 0.4 m (global mode). These values include the approximately 1.6 m offset of the altimeter antenna focal point from the GEOS-3 spacecraft center-of-mass. The negative signs indicate that the measured altitudes are too short. The data are corrected by subtracting the above bias numbers for the respective modes. Timing corrections which should be applied to the altimeter data were calculated theoretically, and subsequently confirmed through crossover analysis for passes 6-8 revolutions apart. The time tag correction that should be applied consists of -20.8 msec + 1 interpulse period (10.240512 msec).
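The quoted corrections amount to simple arithmetic; a sketch using the values from the abstract (function name mine):

```python
def correct_geos3(altitude_m, mode):
    """Apply the GEOS-3 altitude bias and time-tag corrections quoted in
    the abstract. The biases are negative, so subtracting them lengthens
    the measured altitude; the time-tag correction is -20.8 ms plus one
    interpulse period of 10.240512 ms."""
    bias_m = {"intensive": -5.30, "global": -3.55}[mode]
    time_correction_ms = -20.8 + 10.240512  # = -10.559488 ms
    return altitude_m - bias_m, time_correction_ms

h, dt = correct_geos3(1000.0, "intensive")
print(h, dt)  # → 1005.3 -10.559488
```
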
A new approach to correct for absorbing aerosols in OMI UV
NASA Astrophysics Data System (ADS)
Arola, A.; Kazadzis, S.; Lindfors, A.; Krotkov, N.; Kujanpää, J.; Tamminen, J.; Bais, A.; di Sarra, A.; Villaplana, J. M.; Brogniez, C.; Siani, A. M.; Janouch, M.; Weihs, P.; Webb, A.; Koskela, T.; Kouremeti, N.; Meloni, D.; Buchard, V.; Auriol, F.; Ialongo, I.; Staneck, M.; Simic, S.; Smedley, A.; Kinne, S.
2009-11-01
Several validation studies of surface UV irradiance based on the Ozone Monitoring Instrument (OMI) satellite data have shown a high correlation with ground-based measurements but a positive bias in many locations. The main part of the bias can be attributed to the boundary layer aerosol absorption that is not accounted for in the current satellite UV algorithms. To correct for this shortfall, a post-correction procedure was applied, based on global climatological fields of aerosol absorption optical depth. These fields were obtained by using global aerosol optical depth and aerosol single scattering albedo data assembled by combining global aerosol model data and ground-based aerosol measurements from AERONET. The resulting improvements in the satellite-based surface UV irradiance were evaluated by comparing satellite and ground-based spectral irradiances at various European UV monitoring sites. The results generally showed a significantly reduced bias by 5-20%, a lower variability, and an unchanged, high correlation coefficient.
Two-compartment modeling of tissue microcirculation revisited.
Brix, Gunnar; Salehi Ravesh, Mona; Griebel, Jürgen
2017-05-01
Conventional two-compartment modeling of tissue microcirculation is used for tracer kinetic analysis of dynamic contrast-enhanced (DCE) computed tomography or magnetic resonance imaging studies although it is well-known that the underlying assumption of an instantaneous mixing of the administered contrast agent (CA) in capillaries is far from being realistic. It was thus the aim of the present study to provide theoretical and computational evidence in favor of a conceptually alternative modeling approach that makes it possible to characterize the bias inherent to compartment modeling and, moreover, to approximately correct for it. Starting from a two-region distributed-parameter model that accounts for spatial gradients in CA concentrations within blood-tissue exchange units, a modified lumped two-compartment exchange model was derived. It has the same analytical structure as the conventional two-compartment model, but indicates that the apparent blood flow identifiable from measured DCE data is substantially overestimated, whereas the three other model parameters (i.e., the permeability-surface area product as well as the volume fractions of the plasma and interstitial distribution space) are unbiased. Furthermore, a simple formula was derived to approximately compute a bias-corrected flow from the estimates of the apparent flow and permeability-surface area product obtained by model fitting. To evaluate the accuracy of the proposed modeling and bias correction method, representative noise-free DCE curves were analyzed. They were simulated for 36 microcirculation and four input scenarios by an axially distributed reference model. As analytically proven, the considered two-compartment exchange model is structurally identifiable from tissue residue data. The apparent flow values estimated for the 144 simulated tissue/input scenarios were considerably biased. After bias-correction, the deviations between estimated and actual parameter values were (11.2 ± 6.4) % (vs. 
(105 ± 21) % without correction) for the flow, (3.6 ± 6.1) % for the permeability-surface area product, (5.8 ± 4.9) % for the vascular volume and (2.5 ± 4.1) % for the interstitial volume; with individual deviations of more than 20% being the exception and just marginal. Increasing the duration of CA administration only had a statistically significant but opposite effect on the accuracy of the estimated flow (declined) and intravascular volume (improved). Physiologically well-defined tissue parameters are structurally identifiable and accurately estimable from DCE data by the conceptually modified two-compartment model in combination with the bias correction. The accuracy of the bias-corrected flow is nearly comparable to that of the three other (theoretically unbiased) model parameters. As compared to conventional two-compartment modeling, this feature constitutes a major advantage for tracer kinetic analysis of both preclinical and clinical DCE imaging studies. © 2017 American Association of Physicists in Medicine.
Lynn, Spencer K.; Ibagon, Camila; Bui, Eric; Palitz, Sophie A.; Simon, Naomi M.; Barrett, Lisa Feldman
2017-01-01
Emotion perception, inferring the emotional state of another person, is a frequent judgment made under perceptual uncertainty (e.g., a scowling facial expression can indicate anger or concentration) and behavioral risk (e.g., incorrect judgment can be costly to the perceiver). Working memory capacity (WMC), the ability to maintain controlled processing in the face of competing demands, is an important component of many decisions. We investigated the association of WMC and anger perception in a task in which “angry” and “not angry” categories comprised overlapping ranges of scowl intensity, and correct and incorrect responses earned and lost points, respectively. Participants attempted to earn as many points as they could; adopting an optimal response bias would maximize decision utility. Participants with higher WMC more optimally tuned their anger perception response bias to accommodate their perceptual sensitivity (their ability to discriminate the categories) than did participants with lower WMC. Other factors that influence response bias (i.e., the relative base rate of angry vs. not angry faces and the decision costs & benefits) were ruled out as contributors to the WMC-bias relationship. Our results suggest that WMC optimizes emotion perception by contributing to perceivers’ ability to adjust their response bias to account for their level of perceptual sensitivity, likely an important component of adapting emotion perception to dynamic social interactions and changing circumstances. PMID:26461251
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, J.; Madsen, H.; Jensen, K. H.; Refsgaard, J. C.
2015-08-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both stream flow and groundwater modeling. The Colored Noise Kalman filter (ColKF) and the Separate bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using any of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman Filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved stream flow modeling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modeling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behavior and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, Jørn; Madsen, Henrik; Høgh Jensen, Karsten; Refsgaard, Jens Christian
2016-05-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment-scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both streamflow and groundwater modelling. The coloured noise Kalman filter (ColKF) and the separate-bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using any of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved streamflow modelling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modelling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behaviour and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.
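The bias-aware filtering idea can be sketched with a minimal scalar analogue (state augmentation with a constant observation bias; this is not the ColKF/SepKF implementations used in the study, and all numbers are invented). The second, unbiased observation stream plays the role of the discharge data: without it, the state and the bias would not be separately identifiable.

```python
def bias_aware_kf(obs, q=0.01, r=0.25):
    """Kalman filter on the augmented state (x, b): x is a random walk,
    b a constant observation bias affecting only the biased sensor.

    obs: list of (y, is_biased) pairs; biased sensor: y = x + b + noise,
    unbiased sensor: y = x + noise."""
    x = [0.0, 0.0]                      # estimate of (x, b)
    P = [[1.0, 0.0], [0.0, 1.0]]        # covariance
    for y, is_biased in obs:
        P[0][0] += q                    # predict: random-walk x, constant b
        h = [1.0, 1.0 if is_biased else 0.0]
        ph = [P[0][0] * h[0] + P[0][1] * h[1],   # P h (P is symmetric)
              P[1][0] * h[0] + P[1][1] * h[1]]
        s = h[0] * ph[0] + h[1] * ph[1] + r      # innovation variance
        k = [ph[0] / s, ph[1] / s]               # Kalman gain
        innov = y - (h[0] * x[0] + h[1] * x[1])
        x[0] += k[0] * innov
        x[1] += k[1] * innov
        P = [[P[0][0] - k[0] * ph[0], P[0][1] - k[0] * ph[1]],
             [P[1][0] - k[1] * ph[0], P[1][1] - k[1] * ph[1]]]
    return x

# noiseless demo: true state 2.0, biased sensor reads 3.0 (bias = 1.0),
# unbiased sensor reads 2.0; the filter separates state and bias
est = bias_aware_kf([(3.0, True), (2.0, False)] * 200)
print(est)  # ≈ [2.0, 1.0]
```

Estimating the bias jointly with the state, rather than absorbing it into the state update, is what prevents the drifting model behaviour the abstract describes for bias-unaware filtering.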
Correcting Biases in a lower resolution global circulation model with data assimilation
NASA Astrophysics Data System (ADS)
Canter, Martin; Barth, Alexander
2016-04-01
With this work, we aim at developing a new method of bias correction using data assimilation. This method is based on the stochastic forcing of a model to correct bias. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is directly added inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global and low resolution (2 degrees) coupled model (hydrodynamic model and sea ice model) with long time steps allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim at correcting this bias by using perturbed current fields from higher resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean, and not create unwanted phenomena. To construct those random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small scale variations. We use this field as a random stream function, and take its derivatives to get zonal and meridional velocity fields. 
We also constrain the stream function along the coasts in order not to have currents perpendicular to the coast. The randomly generated stochastic forcings are then directly injected into the NEMO LIM model's equations in order to force the model at each timestep, and not only during the assimilation step. Results from a twin experiment will be presented. This method is being applied to a real case, with observations on the sea surface height available from the mean dynamic topography of CNES (Centre national d'études spatiales). The model, the bias correction, and more extensive forcings, in particular with a three-dimensional structure and a time-varying component, will also be presented.
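The stream-function step can be sketched with central differences (a minimal illustration of the standard relations u = -∂ψ/∂y, v = ∂ψ/∂x; grid and function names are mine): any stream function, including a random one, yields a nondivergent velocity field, which is why this construction gives physically plausible current perturbations.

```python
def velocities_from_streamfunction(psi, dx=1.0, dy=1.0):
    """Central-difference u = -dpsi/dy, v = dpsi/dx on a regular grid.

    psi: 2-D list indexed psi[j][i] with j the y index, i the x index.
    Boundary cells are left at zero for simplicity."""
    ny, nx = len(psi), len(psi[0])
    u = [[0.0] * nx for _ in range(ny)]
    v = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            u[j][i] = -(psi[j + 1][i] - psi[j - 1][i]) / (2 * dy)
            v[j][i] = (psi[j][i + 1] - psi[j][i - 1]) / (2 * dx)
    return u, v

# sanity check: psi = y gives uniform zonal flow u = -1, v = 0
psi = [[float(j) for _ in range(5)] for j in range(5)]
u, v = velocities_from_streamfunction(psi)
print(u[2][2], v[2][2])  # → -1.0 0.0
```

Setting ψ to a constant along the coastline (as the abstract describes) makes the coast a streamline, so no flow crosses it.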
Elevated triglycerides may affect cystatin C recovery.
Witzel, Samantha H; Butts, Katherine; Filler, Guido
2014-05-01
The purpose of this study was to investigate the effect of triglyceride concentration on cystatin C (CysC) measurements. Serum samples collected from 10 nephrology patients, 43 to 78 years of age, were air centrifuged to separate aqueous and lipid layers. The lipid layer from each patient was pooled together to create a mixture with a high triglyceride concentration. This pooled lipid layer was mixed with each of the ten patient aqueous layers in six different ratios. Single factor ANOVA was used to assess whether CysC recovery was affected by triglyceride levels. Regression analysis was used to develop a formula to correct for the effect of triglycerides on CysC measurement, based on samples from 6 randomly chosen patients from our study population. The formula was validated with the 4 remaining samples. The analysis revealed a significant reduction in measured CysC with increasing concentrations of triglycerides (Pearson r=-0.56, p<0.0001). The following formula was developed to correct for the effect of triglycerides: Subsequent Bland-Altman plots revealed a bias (mean±1 standard deviation [SD]) of -3.7±15.6% for the data used to generate the correction formula and a bias of 3.52±9.38% for the validation set. Our results suggest that triglyceride concentrations significantly impact cystatin C measurements and that this effect may be corrected in samples that cannot be sufficiently clarified by air centrifugation using the equation that we developed. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Mizukami, Naoki; Clark, Martyn P.; Gutmann, Ethan D.; Mendoza, Pablo A.; Newman, Andrew J.; Nijssen, Bart; Livneh, Ben; Hay, Lauren E.; Arnold, Jeffrey R.; Brekke, Levi D.
2016-01-01
Continental-domain assessments of climate change impacts on water resources typically rely on statistically downscaled climate model outputs to force hydrologic models at a finer spatial resolution. This study examines the effects of four statistical downscaling methods [bias-corrected constructed analog (BCCA), bias-corrected spatial disaggregation applied at daily (BCSDd) and monthly scales (BCSDm), and asynchronous regression (AR)] on retrospective hydrologic simulations using three hydrologic models with their default parameters (the Community Land Model, version 4.0; the Variable Infiltration Capacity model, version 4.1.2; and the Precipitation–Runoff Modeling System, version 3.0.4) over the contiguous United States (CONUS). Biases of hydrologic simulations forced by statistically downscaled climate data relative to the simulation with observation-based gridded data are presented. Each statistical downscaling method produces different meteorological portrayals including precipitation amount, wet-day frequency, and the energy input (i.e., shortwave radiation), and their interplay affects estimations of precipitation partitioning between evapotranspiration and runoff, extreme runoff, and hydrologic states (i.e., snow and soil moisture). The analyses show that BCCA underestimates annual precipitation by as much as −250 mm, leading to unreasonable hydrologic portrayals over the CONUS for all models. Although the other three statistical downscaling methods produce a comparable precipitation bias ranging from −10 to 8 mm across the CONUS, BCSDd severely overestimates the wet-day fraction by up to 0.25, leading to different precipitation partitioning compared to the simulations with other downscaled data. Overall, the choice of downscaling method contributes to less spread in runoff estimates (by a factor of 1.5–3) than the choice of hydrologic model with use of the default parameters if BCCA is excluded.
Analyzing time-ordered event data with missed observations.
Dokter, Adriaan M; van Loon, E Emiel; Fokkema, Wimke; Lameris, Thomas K; Nolet, Bart A; van der Jeugd, Henk P
2017-09-01
A common problem with observational datasets is that not all events of interest may be detected. For example, observing animals in the wild can be difficult when animals move, hide, or cannot be closely approached. We consider time series of events recorded in conditions where events are occasionally missed by observers or observational devices. These time series are not restricted to behavioral protocols, but can be any cyclic or recurring process where discrete outcomes are observed. Undetected events cause biased inferences on the process of interest, and statistical analyses are needed that can identify and correct the compromised detection processes. Missed observations in time series lead to observed time intervals between events at multiples of the true inter-event time, which conveys information on their detection probability. We derive the theoretical probability density function for observed intervals between events that includes a probability of missed detection. Methodology and software tools are provided for analysis of event data with potential observation bias and its removal. The methodology was applied to simulation data and a case study of defecation rate estimation in geese, which is commonly used to estimate their digestive throughput and energetic uptake, or to calculate goose usage of a feeding site from dropping density. Simulations indicate that at a moderate chance of missing arrival events (p = 0.3), uncorrected arrival intervals were biased upward by up to a factor of 3, while parameter values corrected for missed observations were within 1% of their true simulated values. A field case study shows that not accounting for missed observations leads to substantial underestimates of the true defecation rate in geese, and to spurious rate differences between sites, which are introduced by differences in observational conditions.
These results show that the derived methodology can be used to effectively remove observational biases in time-ordered event data.
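The key property the abstract exploits can be illustrated with a short simulation: if each event is missed independently with probability p, an observed interval is the sum of a geometric number of true intervals, so its mean is inflated by a factor of 1/(1 - p). A minimal sketch (all parameter values here are illustrative, not taken from the study):

```python
import random

def simulate_observed_intervals(n_events=200000, mean_interval=60.0,
                                p_miss=0.3, seed=1):
    """Simulate event times with exponential inter-event intervals, drop
    each event independently with probability p_miss, and return the
    intervals between consecutive *detected* events."""
    rng = random.Random(seed)
    t = 0.0
    detected = []
    for _ in range(n_events):
        t += rng.expovariate(1.0 / mean_interval)
        if rng.random() >= p_miss:       # event detected with prob 1 - p_miss
            detected.append(t)
    return [b - a for a, b in zip(detected, detected[1:])]

obs = simulate_observed_intervals()
mean_obs = sum(obs) / len(obs)
# Each observed interval sums a geometric number of true intervals, so
# E[observed interval] = mean_interval / (1 - p_miss) = 60 / 0.7, about 85.7
```

Correcting the estimate therefore amounts to multiplying the observed rate by the inverse inflation factor once the detection probability has been inferred from the multi-modal interval distribution.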
Gilmore, Adam Matthew
2014-01-01
Contemporary spectrofluorimeters comprise exciting light sources, excitation and emission monochromators, and detectors that without correction yield data not conforming to an ideal spectral response. The correction of the spectral properties of the exciting and emission light paths first requires calibration of the wavelength and spectral accuracy. The exciting beam path can be corrected up to the sample position using a spectrally corrected reference detection system. The corrected reference response accounts for both the spectral intensity and drift of the exciting light source relative to emission and/or transmission detector responses. The emission detection path must also be corrected for the combined spectral bias of the sample compartment optics, emission monochromator, and detector. There are several crucial issues associated with both excitation and emission correction including the requirement to account for spectral band-pass and resolution, optical band-pass or neutral density filters, and the position and direction of polarizing elements in the light paths. In addition, secondary correction factors are described including (1) subtraction of the solvent's fluorescence background, (2) removal of Rayleigh and Raman scattering lines, as well as (3) correcting for sample concentration-dependent inner-filter effects. The importance of the National Institute of Standards and Technology (NIST) traceable calibration and correction protocols is explained in light of valid intra- and interlaboratory studies and effective spectral qualitative and quantitative analyses including multivariate spectral modeling.
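Among the secondary corrections listed, the concentration-dependent inner-filter effect has a widely used closed-form approximation for a standard 1 cm right-angle geometry. The sketch below applies that textbook approximation; it is not tied to any particular instrument's correction procedure:

```python
def inner_filter_correct(f_obs, a_ex, a_em):
    """Correct a measured fluorescence intensity for primary and secondary
    inner-filter effects with the common approximation
    F_corr = F_obs * 10**((A_ex + A_em) / 2),
    where A_ex and A_em are the sample absorbances (1 cm path) at the
    excitation and emission wavelengths, respectively."""
    return f_obs * 10 ** ((a_ex + a_em) / 2.0)

# e.g. A_ex = 0.10, A_em = 0.04 gives a correction factor of 10**0.07 ~ 1.175
f_corrected = inner_filter_correct(1000.0, 0.10, 0.04)
```

The approximation assumes the detected volume sits at the cuvette center; it degrades at high absorbances, which is why dilution is usually preferred when A exceeds roughly 0.1-0.2.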
NASA Astrophysics Data System (ADS)
Mathien-Blard, Elise; Bassinot, Franck
2009-12-01
Mg/Ca in foraminiferal calcite has recently been extensively used to estimate past oceanic temperatures. Here we show, however, that the Mg/Ca temperature relationship of the planktonic species Globigerinoides ruber is significantly affected by seawater salinity, with a +1 psu change in salinity resulting in a +1.6°C bias in Mg/Ca temperature calculations. If not accounted for, such a bias could lead, for instance, to systematic overestimations of Mg/Ca temperatures during glacial periods, when global ocean salinity had significantly increased compared to today. We present here a correction procedure to derive unbiased sea surface temperatures (SST) and δ18Osw from G. ruber TMg/Ca and δ18Of measurements. This correction procedure was applied to a sedimentary record to reconstruct hydrographic changes since the Last Glacial Maximum (LGM) in the Western Pacific Warm Pool. While uncorrected TMg/Ca data indicate a 3°C warming of the Western Pacific Warm Pool since the LGM, the salinity-corrected SST result in a stronger warming of 4°C.
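The reported sensitivity (+1.6°C of apparent Mg/Ca temperature per +1 psu) can be applied as a linear correction once an independent salinity estimate is available. In the sketch below the exponential calibration constants are typical literature values for G. ruber used purely for illustration, not the paper's calibration, and the reference salinity is an assumption:

```python
import math

def mgca_temperature(mgca, a=0.09, b=0.38):
    """Invert the common exponential calibration Mg/Ca = B * exp(A * T).
    A = 0.09, B = 0.38 are typical published values for G. ruber,
    used here only for illustration."""
    return math.log(mgca / b) / a

def salinity_corrected_sst(t_mgca, salinity, ref_salinity=35.0,
                           bias_per_psu=1.6):
    """Remove the salinity bias reported in the abstract
    (+1.6 C per +1 psu) relative to an assumed reference salinity."""
    return t_mgca - bias_per_psu * (salinity - ref_salinity)

t_raw = mgca_temperature(4.5)                  # roughly 27.5 C for 4.5 mmol/mol
t_corr = salinity_corrected_sst(t_raw, 36.0)   # 1 psu saltier: subtract 1.6 C
```

In practice the correction is iterative, because the salinity (via δ18Osw) and the corrected SST depend on each other; the sketch shows only a single linear step.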
Aigner, Annette; Grittner, Ulrike; Becher, Heiko
2018-01-01
Low response rates in epidemiologic research potentially lead to the recruitment of a non-representative sample of controls in case-control studies. Problems in the unbiased estimation of odds ratios arise when characteristics causing the probability of participation are associated with exposure and outcome. This is a specific setting of selection bias and a realistic hazard in many case-control studies. This paper formally describes the problem and shows its potential extent, reviews existing approaches for bias adjustment applicable under certain conditions, compares and applies them. We focus on two scenarios: a characteristic C causing differential participation of controls is linked to the outcome through its association with risk factor E (scenario I), and C is additionally a genuine risk factor itself (scenario II). We further assume external data sources are available which provide an unbiased estimate of C in the underlying population. Given these scenarios, we (i) review available approaches and their performance in the setting of bias due to differential participation; (ii) describe two existing approaches to correct for the bias in both scenarios in more detail; (iii) present the magnitude of the resulting bias by simulation if the selection of a non-representative sample is ignored; and (iv) demonstrate the approaches' application via data from a case-control study on stroke. The bias of the effect measure for variable E in scenario I and C in scenario II can be large and should therefore be adjusted for in any analysis. It is positively associated with the difference in response rates between groups of the characteristic causing differential participation, and inversely associated with the total response rate in the controls. Adjustment in a standard logistic regression framework is possible in both scenarios if the population distribution of the characteristic causing differential participation is known or can be approximated well.
Hildebrand, Martin; Wibbelink, Carlijn J M; Verschuere, Bruno
Self-report measures provide an important source of information in correctional/forensic settings, yet at the same time the validity of that information is often questioned because self-reports are thought to be highly vulnerable to self-presentation biases. Primary studies in offender samples have provided mixed results with regard to the impact of socially desirable responding on self-reports. The main aim of the current study was therefore to investigate, via a meta-analytic review of published studies, the association between the two dimensions of socially desirable responding, impression management and self-deceptive enhancement, and self-report measures with content of dynamic risk factors, using the Balanced Inventory of Desirable Responding (BIDR) in offender samples. These self-report measures were significantly and negatively related to self-deception (r = -0.120, p < 0.001; k = 170 effect sizes) and impression management (r = -0.158, p < 0.001; k = 157 effect sizes), yet there was evidence of publication bias for the impression management effect, with the trim-and-fill method indicating that the relation is probably even smaller (r = -0.07). The magnitude of the effect sizes was small. Moderation analyses suggested that type of dynamic risk factor (e.g., antisocial cognition versus antisocial personality), incentives, and publication year affected the relationship between impression management and self-report measures with content of dynamic risk factors, whereas sample size, setting (e.g., incarcerated, community), and publication year influenced the relation between self-deception and these self-report measures. The results indicate that the use of self-report measures to assess dynamic risk factors in correctional/forensic settings is not inevitably compromised by socially desirable responding, yet caution is warranted for some risk factors (antisocial personality traits), particularly when incentives are at play. Copyright © 2018 Elsevier Ltd.
All rights reserved.
NASA Astrophysics Data System (ADS)
Wild, M.; Hakuba, M. Z.; Folini, D.; Ott, P.; Long, C. N.
2017-12-01
Clear sky fluxes in the latest generation of Global Climate Models (GCM) from CMIP5 still vary widely, particularly at the Earth's surface, covering in their global means ranges of 16 and 24 Wm-2 in the surface downward clear sky shortwave (SW) and longwave radiation, respectively. We assess these fluxes with monthly clear sky reference climatologies derived from more than 40 Baseline Surface Radiation Network (BSRN) sites based on Long and Ackerman (2000) and Hakuba et al. (2015). The comparison is complicated by the fact that the monthly SW clear sky BSRN reference climatologies are inferred from measurements under true cloud-free conditions, whereas the GCM clear sky fluxes are calculated continuously at every timestep solely by removing the clouds, yet otherwise keeping the prevailing atmospheric composition (e.g. water vapor, temperature, aerosols) during the cloudy conditions. This induces the risk of biases in the GCMs merely due to the additional sampling of clear sky fluxes calculated under atmospheric conditions representative of cloudy situations. Thereby, a wet bias may be expected in the GCMs compared to the observational references, which may induce spurious low biases in the downward clear sky SW fluxes. To estimate the magnitude of these spurious biases in the available monthly mean fields from 40 CMIP5 models, we used their respective multi-century control runs, and searched therein, for each month and each BSRN station, for the month with the lowest cloud cover. The deviations of the clear sky fluxes in this month from their long-term means have then been used as indicators of the magnitude of the abovementioned sampling biases and as correction factors for an appropriate comparison with the BSRN climatologies, individually applied for each model and BSRN site. The overall correction is on the order of 2 Wm-2.
This revises our best estimate for the global mean surface downward SW clear sky radiation, previously 249 Wm-2 as inferred from the GCM clear sky flux fields and their biases compared to the BSRN climatologies, to 247 Wm-2 with this additional correction included. 34 out of 40 CMIP5 GCMs exceed this reference value. With a global mean surface albedo of 13% and a net TOA SW clear sky flux of 287 Wm-2 from CERES-EBAF, this results in a global mean clear sky surface and atmospheric SW absorption of 214 and 73 Wm-2, respectively.
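The search-for-the-clearest-month procedure can be sketched as follows. The arrays are synthetic stand-ins for one station's monthly control-run output, not CMIP5 data; the point is only the selection-and-deviation logic:

```python
import numpy as np

def sampling_bias_correction(clearsky_flux, cloud_cover):
    """clearsky_flux, cloud_cover: arrays of shape (n_years, 12) for one
    station from a model control run.  For each calendar month, find the
    year with the lowest cloud cover (the model month closest to truly
    cloud-free conditions) and return the deviation of its clear-sky flux
    from that month's long-term mean: an estimate of the sampling bias."""
    idx = np.argmin(cloud_cover, axis=0)               # least-cloudy year, per month
    least_cloudy = clearsky_flux[idx, np.arange(12)]
    return least_cloudy - clearsky_flux.mean(axis=0)   # Wm-2, one value per month

rng = np.random.default_rng(0)
flux = 250.0 + 5.0 * rng.standard_normal((100, 12))    # 100-year control run
cloud = rng.uniform(0.0, 1.0, (100, 12))               # monthly cloud fraction
correction = sampling_bias_correction(flux, cloud)
```

In the study the analogous quantity, computed per model and per BSRN site, is what gets subtracted before comparing model clear-sky fluxes to the observed clear-sky climatologies.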
Correcting the bias against interdisciplinary research.
Shapiro, Ehud
2014-04-01
When making decisions about funding and jobs the scientific community should recognise that most of the tools used to evaluate scientific excellence are biased in favour of established disciplines and against interdisciplinary research.
Sensitivity of Hydrologic Response to Climate Model Debiasing Procedures
NASA Astrophysics Data System (ADS)
Channell, K.; Gronewold, A.; Rood, R. B.; Xiao, C.; Lofgren, B. M.; Hunter, T.
2017-12-01
Climate change is already having a profound impact on the global hydrologic cycle. In the Laurentian Great Lakes, changes in long-term evaporation and precipitation can lead to rapid water level fluctuations in the lakes, as evidenced by unprecedented change in water levels seen in the last two decades. These fluctuations often have an adverse impact on the region's human, environmental, and economic well-being, making accurate long-term water level projections invaluable to regional water resources management planning. Here we use hydrological components from a downscaled climate model (GFDL-CM3/WRF), to obtain future water supplies for the Great Lakes. We then apply a suite of bias correction procedures before propagating these water supplies through a routing model to produce lake water levels. Results using conventional bias correction methods suggest that water levels will decline by several feet in the coming century. However, methods that reflect the seasonal water cycle and explicitly debias individual hydrological components (overlake precipitation, overlake evaporation, runoff) imply that future water levels may be closer to their historical average. This discrepancy between debiased results indicates that water level forecasts are highly influenced by the bias correction method, a source of sensitivity that is commonly overlooked. Debiasing, however, does not remedy misrepresentation of the underlying physical processes in the climate model that produce these biases and contribute uncertainty to the hydrological projections. This uncertainty coupled with the differences in water level forecasts from varying bias correction methods are important for water management and long term planning in the Great Lakes region.
Impact of Atmospheric Chromatic Effects on Weak Lensing Measurements
NASA Astrophysics Data System (ADS)
Meyers, Joshua E.; Burchat, Patricia R.
2015-07-01
Current and future imaging surveys will measure cosmic shear with statistical precision that demands a deeper understanding of potential systematic biases in galaxy shape measurements than has been achieved to date. We use analytic and computational techniques to study the impact on shape measurements of two atmospheric chromatic effects for ground-based surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope (LSST): (1) atmospheric differential chromatic refraction and (2) wavelength dependence of seeing. We investigate the effects of using the point-spread function (PSF) measured with stars to determine the shapes of galaxies that have different spectral energy distributions than the stars. We find that both chromatic effects lead to significant biases in galaxy shape measurements for current and future surveys, if not corrected. Using simulated galaxy images, we find a form of chromatic “model bias” that arises when fitting a galaxy image with a model that has been convolved with a stellar, instead of galactic, PSF. We show that both forms of atmospheric chromatic biases can be predicted (and corrected) with minimal model bias by applying an ordered set of perturbative PSF-level corrections based on machine-learning techniques applied to six-band photometry. Catalog-level corrections do not address the model bias. We conclude that achieving the ultimate precision for weak lensing from current and future ground-based imaging surveys requires a detailed understanding of the wavelength dependence of the PSF from the atmosphere, and from other sources such as optics and sensors. The source code for this analysis is available at https://github.com/DarkEnergyScienceCollaboration/chroma.
NASA Astrophysics Data System (ADS)
Wang, W.; Zender, C. S.; van As, D.; Smeets, P.; van den Broeke, M.
2015-12-01
Surface melt and mass loss of the Greenland Ice Sheet may play crucial roles in global climate change due to their positive feedbacks and large fresh water storage. With few other regular meteorological observations available in this extreme environment, measurements from Automatic Weather Stations (AWS) are the primary data source for surface energy budget studies, and for validating satellite observations and model simulations. However, station tilt, due to surface melt and compaction, results in considerable biases in the radiation, and thus albedo, measurements by AWS. In this study, we identify the tilt-induced biases in the climatology of surface radiative flux and albedo, and then correct them based on geometrical principles. Over all the AWS from the Greenland Climate Network (GC-Net), the Kangerlussuaq transect (K-transect) and the Programme for Monitoring of the Greenland Ice Sheet (PROMICE), only ~15% of clear days have the correct solar noon time, with the largest bias being 3 hours. Absolute hourly biases in the magnitude of surface insolation can reach up to 200 W/m2, with daily averages exceeding 100 W/m2. The biases are larger in the accumulation zone due to the systematic tilt at each station, although the variability of tilt angles is larger in the ablation zone. Averaged over the whole Greenland Ice Sheet in the melting season, the absolute bias in insolation is ~23 W/m2, enough to melt 0.51 m snow water equivalent. We estimate the tilt angles and their directions by comparing the simulated insolation at a horizontal surface with the insolation observed by these tilted AWS under clear-sky conditions. Our correction reduces the RMSE against satellite measurements and reanalysis by ~30 W/m2 relative to the uncorrected data, with correlation coefficients over 0.95 for both references. The corrected diurnal changes of albedo are smoother, with consistent semi-smiling patterns (see Fig. 1).
The seasonal cycles and annual variabilities of albedo are in better agreement with previous studies (see Figs. 2 and 3). The consistent tilt-corrected shortwave radiation dataset derived here will provide better observations and validation for surface energy budget studies on the Greenland Ice Sheet, including albedo variation, surface melt simulations and cloud radiative forcing estimates.
Deurenberg, P; Deurenberg-Yap, M; Schouten, F J M
2002-03-01
To test the impact of body build factors on the validity of impedance-based body composition predictions across (ethnic) population groups and to study the suitability of segmental impedance measurements. Cross-sectional observational study. Ministry of Health and School of Physical Education, Nanyang Technological University, Singapore. A total of 291 female and male Chinese, Malays and Indian Singaporeans, aged 18-69, body mass index (BMI) 16.0-40.2 kg/m2. Anthropometric parameters were measured in addition to impedance (100 kHz) of the total body, arms and legs. Impedance indexes were calculated as height^2/impedance. Arm length (span) and leg length (sitting height), wrist and knee width were measured, from which body build indices were calculated. Total body water (TBW) was measured using deuterium oxide dilution. Extracellular water (ECW) was measured using bromide dilution. Body fat percentage was determined using a chemical four-compartment model. The bias of TBW predicted from the total body impedance index (bias: measured minus predicted TBW) was different among the three ethnic groups, TBW being significantly underestimated in Indians compared to Chinese and Malays. This bias was found to be dependent on body water distribution (ECW/TBW) and parameters of body build, mainly relative (to height) arm length. After correcting for differences in body water distribution and body build parameters, the differences in bias across the ethnic groups disappeared. The impedance index using total body impedance was better correlated with TBW than the impedance index of arm or leg impedance, even after corrections for body build parameters. The study shows that ethnic-specific bias of impedance-based prediction formulas for body composition is due mainly to differences in body build among the ethnic groups. This means that the use of 'general' prediction equations across different (ethnic) population groups without prior testing of their validity should be avoided.
Total body impedance has higher predictive value than segmental impedance.
Bootstrap Estimation of Sample Statistic Bias in Structural Equation Modeling.
ERIC Educational Resources Information Center
Thompson, Bruce; Fan, Xitao
This study empirically investigated bootstrap bias estimation in the area of structural equation modeling (SEM). Three correctly specified SEM models were used under four different sample size conditions. Monte Carlo experiments were carried out to generate the criteria against which bootstrap bias estimation should be judged. For SEM fit indices,…
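The bootstrap bias estimator being evaluated is simple to state: the mean of the statistic over bootstrap resamples, minus the statistic on the original sample. A generic sketch (using a variance statistic for brevity, not an SEM fit index; sample values are made up):

```python
import random

def bootstrap_bias(data, statistic, n_boot=2000, seed=0):
    """Estimate the bias of `statistic` as
    bias_hat = mean(theta*_b over bootstrap resamples) - theta_hat,
    where each resample draws len(data) values with replacement."""
    rng = random.Random(seed)
    theta_hat = statistic(data)
    n = len(data)
    reps = [statistic([rng.choice(data) for _ in range(n)])
            for _ in range(n_boot)]
    return sum(reps) / n_boot - theta_hat

# The sample variance with divisor n is biased downward by a factor (n-1)/n;
# the bootstrap should recover a negative bias estimate of about -var/n.
def var_n(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

sample = [2.1, 3.4, 1.8, 5.0, 4.2, 2.9, 3.7, 4.8, 2.2, 3.1]
bias_hat = bootstrap_bias(sample, var_n)
```

The Monte Carlo design in the study plays the role the known (n-1)/n factor plays here: it supplies the true bias against which the bootstrap estimate is judged.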
Publication Bias in Research Synthesis: Sensitivity Analysis Using A Priori Weight Functions
ERIC Educational Resources Information Center
Vevea, Jack L.; Woods, Carol M.
2005-01-01
Publication bias, sometimes known as the "file-drawer problem" or "funnel-plot asymmetry," is common in empirical research. The authors review the implications of publication bias for quantitative research synthesis (meta-analysis) and describe existing techniques for detecting and correcting it. A new approach is proposed that is suitable for…
In Defense of the Chi-Square Continuity Correction.
ERIC Educational Resources Information Center
Veldman, Donald J.; McNemar, Quinn
Published studies of the sampling distribution of chi-square with and without Yates' correction for continuity have been interpreted as discrediting the correction. Yates' correction actually produces a biased chi-square value which in turn yields a better estimate of the exact probability of the discrete event concerned when used in conjunction…
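For reference, Yates' correction shrinks each |O - E| deviation by 0.5 before squaring, which deliberately biases the statistic downward to better approximate the exact discrete probability. A minimal 2x2 implementation (the table counts are made up for illustration):

```python
def chi2_2x2(a, b, c, d, yates=False):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], optionally with
    Yates' continuity correction: sum over cells of (|O - E| - 0.5)^2 / E,
    with the 0.5 shrinkage floored at zero."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    chi2 = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        exp = r * col / n
        dev = abs(obs - exp)
        if yates:
            dev = max(dev - 0.5, 0.0)
        chi2 += dev * dev / exp
    return chi2

uncorrected = chi2_2x2(10, 5, 3, 12)             # ~6.65
corrected = chi2_2x2(10, 5, 3, 12, yates=True)   # ~4.89, shrunk toward the null
```

The corrected statistic is always the smaller of the two, which is exactly the "biased chi-square value" the abstract describes as yielding a better exact-probability estimate.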
NASA Astrophysics Data System (ADS)
Lorente-Plazas, Raquel; Hacker, Josua P.; Collins, Nancy; Lee, Jared A.
2017-04-01
Several publications have shown the impact of assimilating surface observations for improving weather prediction inside the boundary layer as well as the flow aloft. However, the assimilation of surface observations is often far from optimal due to the presence of both model and observation biases. The sources of these biases can be diverse: an instrumental offset, errors associated with comparing point-based observations to grid-cell averages, etc. To overcome this challenge, a method was developed using the ensemble Kalman filter. The approach consists of representing each observation bias as a parameter. These bias parameters are added to the forward operator and extend the state vector. As opposed to the observation bias estimation approaches most common in operational systems (e.g. for satellite radiances), the state vector and parameters are simultaneously updated by applying the Kalman filter equations to the augmented state. The method to estimate and correct the observation bias is evaluated using observing system simulation experiments (OSSEs) with the Weather Research and Forecasting (WRF) model. OSSEs are constructed for the conventional observation network including radiosondes, aircraft observations, atmospheric motion vectors, and surface observations. Three different kinds of biases are added to 2-meter temperature for synthetic METARs. From the simplest to the most sophisticated, the imposed biases are: (1) a spatially invariant bias, (2) a spatially varying bias proportional to topographic height differences between the model and the observations, and (3) a bias that is proportional to the temperature. The target region, characterized by complex terrain, is the western U.S. on a domain with 30-km grid spacing. Observations are assimilated every 3 hours using an 80-member ensemble during September 2012. Results demonstrate that the approach is able to estimate and correct the bias when it is spatially invariant (experiment 1).
The more complex bias structures in experiments (2) and (3) are more difficult to estimate, but still possible. Estimating the parameter in experiments with unbiased observations results in spatial and temporal parameter variability about zero, and establishes a threshold on the accuracy of the parameter in further experiments. When the observations are biased, the mean parameter value is close to the true bias, but the temporal and spatial variability in the parameter estimates is similar to that obtained when estimating a zero bias from unbiased observations. The distributions are related to other errors in the forecasts, indicating that the parameters are absorbing some of the forecast error from other sources. In this presentation we elucidate the reasons for the resulting parameter estimates and their variability.
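As a toy illustration of the state-augmentation idea (a scalar stand-in, not the WRF/DART configuration of the study), the sketch below jointly estimates a single additive observation bias and a scalar state with a perturbed-observation ensemble Kalman filter. All numbers are invented; note that separating state from bias requires an informative prior on the state, since only their sum is observed:

```python
import numpy as np

def enkf_augmented_update(ens_x, ens_beta, y_obs, obs_var, rng):
    """One perturbed-observation EnKF update of the augmented state
    [x, beta] with forward operator H([x, beta]) = x + beta: the predicted
    observation includes the bias parameter, so the Kalman update adjusts
    state and bias simultaneously through their ensemble covariances."""
    pred = ens_x + ens_beta                          # predicted observations
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), ens_x.size)
    s = pred.var(ddof=1) + obs_var                   # innovation variance
    kx = np.cov(ens_x, pred)[0, 1] / s               # gain for the state
    kb = np.cov(ens_beta, pred)[0, 1] / s            # gain for the bias
    innov = y_pert - pred
    return ens_x + kx * innov, ens_beta + kb * innov

rng = np.random.default_rng(42)
truth, true_bias = 290.0, 2.0                        # e.g. a 2 K warm instrument
ens_x = truth + rng.normal(0.0, 0.2, 200)            # informative state prior
ens_beta = rng.normal(0.0, 2.0, 200)                 # vague prior on the bias
for _ in range(50):                                  # assimilate repeated obs
    y = truth + true_bias + rng.normal(0.0, 0.5)
    ens_x, ens_beta = enkf_augmented_update(ens_x, ens_beta, y, 0.25, rng)
est_bias = ens_beta.mean()                           # approaches the true 2 K
```

The division of each innovation between state and parameter follows their relative ensemble variances, which is why the abstract's unbiased-observation experiments (parameter fluctuating about zero) serve as an accuracy floor.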
Culpepper, Steven Andrew
2016-06-01
Standardized tests are frequently used for selection decisions, and the validation of test scores remains an important area of research. This paper builds upon prior literature about the effect of nonlinearity and heteroscedasticity on the accuracy of standard formulas for correcting correlations in restricted samples. Existing formulas for direct range restriction require three assumptions: (1) the criterion variable is missing at random; (2) a linear relationship between independent and dependent variables; and (3) constant error variance or homoscedasticity. The results in this paper demonstrate that the standard approach for correcting restricted correlations is severely biased in cases of extreme monotone quadratic nonlinearity and heteroscedasticity. This paper offers at least three significant contributions to the existing literature. First, a method from the econometrics literature is adapted to provide more accurate estimates of unrestricted correlations. Second, derivations establish bounds on the degree of bias attributed to quadratic functions under the assumption of a monotonic relationship between test scores and criterion measurements. New results are presented on the bias associated with using the standard range restriction correction formula, and the results show that the standard correction formula yields estimates of unrestricted correlations that deviate by as much as 0.2 for high to moderate selectivity. Third, Monte Carlo simulation results demonstrate that the new procedure for correcting restricted correlations provides more accurate estimates in the presence of quadratic and heteroscedastic test score and criterion relationships.
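Under direct range restriction, the "standard correction formula" referred to here is Thorndike's Case II. A sketch with illustrative numbers (u = 2 corresponds to selection that halves the predictor SD; the restricted correlation 0.277 is chosen so the corrected value lands near 0.50):

```python
import math

def correct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction for direct range restriction:
    r_c = r*u / sqrt(1 + r^2 * (u^2 - 1)),  u = SD_unrestricted / SD_restricted.
    Valid under linearity and homoscedasticity, the two assumptions whose
    violation the paper shows can bias this formula by as much as 0.2."""
    u = sd_unrestricted / sd_restricted
    r = r_restricted
    return r * u / math.sqrt(1.0 + r * r * (u * u - 1.0))

# Selection halves the predictor SD and attenuates r to ~0.277;
# the correction recovers an unrestricted correlation near 0.50.
r_c = correct_range_restriction(0.277, 1.0, 0.5)
```

When the test-criterion relationship is monotone quadratic or heteroscedastic, this formula's output can no longer be trusted, which motivates the paper's alternative estimator.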
Meta-analysis of alcohol price and income elasticities – with corrections for publication bias
2013-01-01
Background: This paper contributes to the evidence base on prices and alcohol use by presenting meta-analytic summaries of price and income elasticities for alcohol beverages. The analysis improves on previous meta-analyses by correcting for outliers and publication bias. Methods: Adjusting for outliers is important to avoid assigning too much weight to studies with very small standard errors or large effect sizes. Trimmed samples are used for this purpose. Correcting for publication bias is important to avoid giving too much weight to studies that reflect selection by investigators or others involved with publication processes. Cumulative meta-analysis is proposed as a method to avoid or reduce publication bias, resulting in more robust estimates. The literature search obtained 182 primary studies for aggregate alcohol consumption, which exceeds the database used in previous reviews and meta-analyses. Results: For individual beverages, corrected price elasticities are smaller (less elastic) by 28-29 percent compared with consensus averages frequently used for alcohol beverages. The average price and income elasticities are: beer, -0.30 and 0.50; wine, -0.45 and 1.00; and spirits, -0.55 and 1.00. For total alcohol, the price elasticity is -0.50 and the income elasticity is 0.60. Conclusions: These new results imply that attempts to reduce alcohol consumption through price or tax increases will be less effective or more costly than previously claimed. PMID:23883547
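To first order, the reported elasticities translate into consumption changes as %dQ ~ e_p * %dP + e_i * %dI. A small sketch using the corrected total-alcohol estimates from the abstract (the 10% price scenario is invented):

```python
def consumption_change(price_elasticity, income_elasticity,
                       pct_price_change, pct_income_change=0.0):
    """First-order (point-elasticity) approximation of the percent change
    in consumption: %dQ ~ e_p * %dP + e_i * %dI."""
    return (price_elasticity * pct_price_change
            + income_elasticity * pct_income_change)

# A 10% tax-driven price rise with flat incomes: roughly a 5% drop in total
# alcohol consumption under the corrected price elasticity of -0.50.
dq_total = consumption_change(-0.50, 0.60, 10.0)
```

Because the corrected elasticities are 28-29 percent smaller in magnitude than the older consensus values, the same tax change produces a correspondingly smaller predicted consumption response, which is the paper's policy point.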
NASA Astrophysics Data System (ADS)
Faqih, A.
2017-03-01
Providing information regarding future climate scenarios is very important in climate change studies. The climate scenario can be used as basic information to support adaptation and mitigation studies. In order to deliver future climate scenarios over a specific region, baseline and projection data from the outputs of global climate models (GCM) are needed. However, due to its coarse resolution, the data have to be downscaled and bias corrected in order to get scenario data with better spatial resolution that match the characteristics of the observed data. Generating this downscaled data is often difficult for scientists who do not have a specific background, experience and skill in dealing with the complex data from the GCM outputs. In this regard, it is necessary to develop a tool that can be used to simplify the downscaling process in order to help scientists, especially in Indonesia, generate future climate scenario data that can be used for their climate change-related studies. In this paper, we introduce a tool called “Statistical Bias Correction for Climate Scenarios (SiBiaS)”. The tool is specially designed to facilitate the use of CMIP5 GCM data outputs and process their statistical bias corrections relative to reference data from observations. It is prepared to support capacity building in climate modeling in Indonesia as part of the Indonesia 3rd National Communication (TNC) project activities.
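The abstract does not detail SiBiaS's algorithms, but empirical quantile mapping is one widely used statistical bias correction that such tools typically implement: each model value is replaced by the observed value at the same empirical quantile of the historical model distribution. A sketch with synthetic gamma-distributed "rainfall":

```python
import numpy as np

def quantile_map(model_hist, obs, model_values):
    """Empirical quantile mapping: look up each model value's quantile in
    the historical model climatology, then return the observed value at
    that same quantile.  A generic sketch, not the SiBiaS implementation."""
    model_hist = np.sort(model_hist)
    obs = np.sort(obs)
    q = np.searchsorted(model_hist, model_values) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs, q)

rng = np.random.default_rng(3)
gcm_hist = rng.gamma(2.0, 4.0, 5000)     # model rainfall, wet-biased scale
observed = rng.gamma(2.0, 3.0, 5000)     # observed rainfall at the same site
corrected = quantile_map(gcm_hist, observed, gcm_hist)
# Mapping the historical run onto observations pulls its mean from the
# model climatology (~8) toward the observed climatology (~6).
```

For projections, the same transfer function built from the historical period is applied to the future model values, which preserves the model's projected change in rank while matching observed magnitudes.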
NASA Astrophysics Data System (ADS)
Meyer, Jonathan D. D.; Jin, Jiming
2017-07-01
A 20-km regional climate model (RCM) dynamically downscaled the Community Climate System Model version 4 (CCSM4) to compare 32-year historical and future "end-of-the-century" climatologies of the North American Monsoon (NAM). CCSM4 and other phase 5 Coupled Model Intercomparison Project models have indicated a delayed NAM and an overall general drying trend. Here, we test the suggested mechanism for this drier NAM, whereby increasing atmospheric static stability and reduced early-season evapotranspiration under global warming limit early-season convection and compress the mature season of the NAM. Through our higher resolution RCM, we found that the role of accelerated evaporation under a warmer climate is likely understated in coarse resolution models such as CCSM4. Improving the representation of mesoscale interactions associated with the Gulf of California (GoC) and surrounding topography produced additional surface evaporation, which overwhelmed the convection-suppressing effects of a warmer troposphere. Furthermore, the improved land-sea temperature gradient helped drive stronger southerly winds and greater moisture transport. Finally, we addressed limitations from inherent CCSM4 biases through a form of mean bias correction, which resulted in a more accurate seasonality of the atmospheric thermodynamic profile. After bias correction, greater surface evaporation from average peak GoC SSTs of 32 °C, compared to 29 °C in the original CCSM4, led to roughly 50% larger changes to low-level moist static energy compared to that produced by downscaling the original CCSM4. The increasing destabilization of the NAM environment produced onset dates that were one to two weeks earlier in the core of the NAM and its northern extent, respectively. Furthermore, a significantly more vigorous NAM signal was produced after bias correction, with >50 mm month-1 increases to the June-September precipitation found along the east and west coasts of Mexico and into parts of Texas.
A shift towards more extreme daily precipitation was found in both downscaled climatologies, with the bias-corrected climatology containing a much more apparent and extreme shift.
Did the Arctic Ice Recover? Demographics of True and False Climate Facts
NASA Astrophysics Data System (ADS)
Hamilton, L.
2012-12-01
Beliefs about climate change divide the U.S. public along party lines more distinctly than hot-button social issues. Research finds that better educated or informed respondents are more likely to align with their parties on climate change. This information-elite polarization resembles a process of biased assimilation first described in psychological experiments. In nonexperimental settings, college graduates could be prone to biased assimilation if they more effectively acquire information that supports their beliefs. Recent national and statewide survey data show response patterns consistent with biased assimilation (and biased guessing) contributing to the correlation observed between climate beliefs and knowledge. The survey knowledge questions involve key, uncontroversial observations such as whether the area of late-summer Arctic sea ice has declined, increased, or declined and then recovered to what it was 30 years ago. Correct answers are predicted by education, and some wrong answers (e.g., more ice) have predictors that suggest lack of knowledge. Other wrong answers (e.g., ice recovered) are predicted by political and belief factors instead. Responses show indications of causality in both directions: science information affecting climate beliefs, but also beliefs affecting the assimilation of science information.
NASA Technical Reports Server (NTRS)
Boesch, Hartmut; Toon, G.; Sen, B; Li, Q. B.; Salawitch, R.; Miller, C.; Crisp, D.; Washenfelder, R.; Wennberg, P.; Natraj, V.;
2006-01-01
Global, space-based observations of atmospheric CO2 with the precision, resolution, and coverage needed to monitor sources and sinks: a) spectra of reflected/scattered sunlight in NIR CO2 and O2 bands used to estimate X(sub CO2) with large sensitivity to the surface; b) A-train orbit (1:15 PM polar sun-sync); c) 16-day repeat cycle samples the seasonal cycle at semi-monthly intervals; and d) NASA ESSP (Earth System Science Pathfinder) mission scheduled for Sept 2008 launch, with a 2-yr lifetime. Initial comparison of SCIAMACHY and FTS retrievals for Park Falls: a) positive bias in X(sub CO2) of approx. 10 ppm; and b) negative bias in surface pressure. After correction of spectral artifacts in the O2A band: a) largely improved agreement between SCIAMACHY and FTS X(sub CO2) (without clear bias) and in surface pressure; and b) standard deviation of SCIAMACHY X(sub CO2) approx. 6 ppm. Good qualitative agreement with GEOS-CHEM, with GEOS-CHEM underestimating the seasonal cycle. OCO is a dedicated CO2 instrument and will achieve much higher accuracy and precision: a) much higher spectral resolution (by a factor of 20); and b) smaller ground pixels (by a factor of 600).
A minimalist approach to bias estimation for passive sensor measurements with targets of opportunity
NASA Astrophysics Data System (ADS)
Belfadel, Djedjiga; Osborne, Richard W.; Bar-Shalom, Yaakov
2013-09-01
In order to carry out data fusion, registration error correction is crucial in multisensor systems. This requires estimation of the sensor measurement biases. It is important to correct for these bias errors so that the multiple sensor measurements and/or tracks can be referenced as accurately as possible to a common tracking coordinate system. This paper provides a solution for bias estimation for the minimum number of passive sensors (two), when only targets of opportunity are available. The sensor measurements are assumed time-coincident (synchronous) and perfectly associated. Since these sensors provide only line of sight (LOS) measurements, a single composite Cartesian measurement is formed by fusing the LOS measurements from the different sensors, avoiding the need for nonlinear filtering. We evaluate the Cramer-Rao Lower Bound (CRLB) on the covariance of the bias estimate, i.e., the quantification of the available information about the biases. Statistical tests on the results of simulations show that this method is statistically efficient, even for small sample sizes (as few as two sensors and six points on the trajectory of a single target of opportunity). We also show that the RMS position error is significantly improved with bias estimation compared with the target position estimation using the original biased measurements.
Bosy-Westphal, Anja; Danielzik, Sandra; Becker, Christine; Geisler, Corinna; Onur, Simone; Korth, Oliver; Bührens, Frederike; Müller, Manfred J
2005-09-01
Air-displacement plethysmography (ADP) is now widely used for body composition measurement in pediatric populations. However, the manufacturer's software developed for adults leaves a potential bias for application in children and adolescents, and recent publications do not consistently use child-specific corrections. Therefore we analyzed child-specific ADP corrections with respect to quantity and etiology of bias compared with adult formulas. An optimal correction protocol is provided giving step-by-step instructions for calculations. In this study, 258 children and adolescents (143 girls and 115 boys ranging from 5 to 18 y) with a high prevalence of overweight or obesity (28.0% in girls and 22.6% in boys) were examined by ADP applying the manufacturer's software as well as published equations for child-specific corrections for surface area artifact (SAA), thoracic gas volume (TGV), and density of fat-free mass (FFM). Compared with child-specific equations for SAA, TGV, and density of FFM, the mean overestimation of the percentage of fat mass using the manufacturer's software was 10% in children and adolescents. Half of the bias derived from the use of Siri's equation not corrected for age-dependent differences in FFM density. An additional 3 and 2% of bias resulted from the application of adult equations for prediction of SAA and TGV, respectively. Different child-specific equations used to predict TGV did not differ in the percentage of fat mass. We conclude that there is a need for child-specific equations in ADP raw data analysis considering SAA, TGV, and density of FFM.
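The age dependence of the fat-free-mass density term can be seen directly in the two-compartment equation used to convert body density to percent fat. Below is a minimal sketch: Siri's adult constants (4.95, 4.50) are the standard published values, but the child-specific constants shown are illustrative placeholders, not the published age- and sex-specific values.

```python
def percent_fat(body_density, c1=4.95, c2=4.50):
    """Two-compartment percent body fat from body density (g/cm^3).
    Defaults are Siri's adult constants; child-specific constants,
    which account for the lower FFM density of children, would be
    substituted for c1 and c2. The child values used below are
    illustrative placeholders only."""
    return (c1 / body_density - c2) * 100.0

db = 1.050  # hypothetical measured body density
adult = percent_fat(db)                    # Siri's adult equation
child = percent_fat(db, c1=5.30, c2=4.89)  # illustrative child constants
# The same measured density yields a lower %fat with the child constants,
# which is the direction of the bias discussed in the abstract.
```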
Jiménez-Silva, Antonio; Carnevali-Arellano, Romano; Venegas-Aguilera, Matías; Tobar-Reyes, Julio; Palomino-Montenegro, Hernán
2018-05-01
To determine if the use of orthopaedic appliances in growing patients applied to correct Class II and III malocclusion is related to the development of temporomandibular disorders (TMD). A systematic review was conducted between 1960 and July 2017, based on electronic databases: PubMed, Cochrane Library, Embase, Medline, Scopus, EBSCOhost, Scielo, Lilacs and Bireme. Controlled clinical trials (CCTs) and randomized controlled trials (RCTs) were identified. The articles were selected and analyzed by two authors independently. The quality of the evidence was determined according to the guidelines of the Cochrane Risk Bias Assessment Tool and the Cochrane Quality Study Guide. Seven articles were included, four CCTs and three RCTs. The studies were grouped according to malocclusion treatment in (a) class II appliances (n = 4) and (b) class III appliances (n = 3). The quality of evidence was low due to the high risk of bias, independent of the association reported. All studies concluded that the use of orthopaedic appliances would not contribute to the development of TMD. The quality of evidence available is insufficient to establish definitive conclusions, since the studies were very heterogeneous and presented a high risk of bias. However, it is suggested that the use of orthopaedic appliances to correct class II and III malocclusion in growing patients would not be considered as a risk factor for the development of TMD. High-quality RCTs are required to draw any definitive conclusions.
Recursive algorithms for bias and gain nonuniformity correction in infrared videos.
Pipa, Daniel R; da Silva, Eduardo A B; Pagliari, Carla L; Diniz, Paulo S R
2012-12-01
Infrared focal-plane array (IRFPA) detectors suffer from fixed-pattern noise (FPN), also known as spatial nonuniformity, that degrades image quality. FPN remains a serious problem despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-square and affine projection techniques that jointly compensate for both the bias and gain of each image pixel, presenting rapid convergence and robustness to noise. Experiments with synthetic and real IRFPA videos show that the proposed solutions are competitive with the state of the art in FPN reduction, producing recovered images with higher fidelity.
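As a rough illustration of scene-based gain/bias adaptation, the sketch below uses a plain normalized-LMS update with a box-blurred frame standing in for the "true" scene. This is an assumption-laden toy, not the paper's method: the actual algorithms are built on recursive least-squares and affine projection updates, which converge faster and are more robust.

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge padding; stands in for the smooth scene."""
    pad = np.pad(img, 1, mode="edge")
    return sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def nuc_step(frame, gain, bias, mu=0.1):
    """One adaptive step of per-pixel gain/bias nonuniformity correction.
    Each pixel's affine model gain*x + bias is nudged so the corrected
    frame approaches a spatially smoothed reference."""
    corrected = gain * frame + bias
    err = box_blur(corrected) - corrected   # deviation from smooth scene
    step = mu * err / (1.0 + frame ** 2)    # normalized gradient step
    gain = gain + step * frame
    bias = bias + step
    return corrected, gain, bias
```

Iterating this step over incoming frames drives the per-pixel corrections toward removing the fixed-pattern offsets while leaving the scene content intact.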
Phonon-induced renormalization of the electron spectrum of biased bilayer graphene
NASA Astrophysics Data System (ADS)
Kryuchkov, S. V.; Kukhar, E. I.
2018-05-01
The effect of the electron-phonon interaction on the electron subsystem of bilayer graphene has been investigated for the case in which there is a potential bias between the graphene layers. The electron-phonon interaction is shown to increase the curvature of the lower dispersion branch of the conduction band of the bigraphene in the vicinity of the Dirac point, which corresponds to a decrease in the absolute value of the electron effective mass. The corresponding correction to the effective mass has been calculated, and its dependence on the bias has been investigated. The influence of this effect on the bigraphene conductivity is discussed.
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when classifiers are applied to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits only from the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and the methods perform comparably. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
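The inverse-probability idea underlying both proposed methods can be sketched in a few lines: resample the biased sample with weights proportional to the inverse inclusion probabilities, so that class proportions approach those of the source population. This bare-bones version omits what makes the paper's methods distinctive (the stochastic variant perturbs duplicates using the covariance structure, and the parametric variant wraps the resampling in a bagging loop), so treat it only as the common core.

```python
import numpy as np

def ip_oversample(X, y, incl_prob, n_out, rng):
    """Resample a biased (stratified) sample with weights proportional
    to the inverse inclusion probabilities, approximating the source
    population's class balance."""
    w = 1.0 / np.asarray(incl_prob, dtype=float)
    p = w / w.sum()
    idx = rng.choice(len(y), size=n_out, replace=True, p=p)
    return X[idx], y[idx]

# Hypothetical two-phase design: cases fully sampled, controls at 10 %.
rng = np.random.default_rng(0)
X = np.arange(100, dtype=float).reshape(100, 1)
y = np.array([1] * 50 + [0] * 50)          # cases enriched to 50 %
incl = np.array([1.0] * 50 + [0.1] * 50)   # inclusion probabilities
Xr, yr = ip_oversample(X, y, incl, 5000, rng)
# yr.mean() is now close to the population case fraction of ~9 %.
```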
Mangin, B; Siberchicot, A; Nicolas, S; Doligez, A; This, P; Cierco-Ayrolles, C
2012-03-01
Among the several linkage disequilibrium measures known to capture different features of the non-independence between alleles at different loci, the most commonly used for diallelic loci is the r² measure. In the present study, we tackled the problem of the bias of the r² estimate that results from sample structure and/or relatedness between genotyped individuals. We derived two novel linkage disequilibrium measures for diallelic loci that are both extensions of the usual r² measure. The first one, r²S, uses the population structure matrix, which consists of information about the origins of each individual and the admixture proportions of each individual genome. The second one, r²V, includes the kinship matrix in the calculation. These two corrections can be applied together in order to correct for both biases, and are defined on either phased or unphased genotypes. We proved that these novel measures are linked to the power of association tests under the mixed linear model including structure and kinship corrections. We validated them on simulated data and applied them to real data sets collected on Vitis vinifera plants. Our results clearly showed the usefulness of the two corrected r² measures, which actually captured 'true' linkage disequilibrium, unlike the usual r² measure.
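For reference, the classical uncorrected r² that both new statistics extend can be computed from phased haplotypes as below. The structure-corrected and kinship-corrected versions are not reproduced here, since they additionally require the population-structure and kinship matrices described in the abstract.

```python
import numpy as np

def r2(hapA, hapB):
    """Classical r^2 linkage-disequilibrium measure between two diallelic
    loci from phased haplotypes (0/1 coded): r^2 = D^2 / (pA qA pB qB),
    with D = pAB - pA*pB."""
    hapA = np.asarray(hapA, dtype=float)
    hapB = np.asarray(hapB, dtype=float)
    pA, pB = hapA.mean(), hapB.mean()
    D = (hapA * hapB).mean() - pA * pB
    return D * D / (pA * (1 - pA) * pB * (1 - pB))

a = np.array([1, 1, 0, 0])
r2(a, a)                      # identical loci: complete LD, r^2 = 1
r2(a, np.array([1, 0, 1, 0])) # D = 0 here, so r^2 = 0
```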
NASA Astrophysics Data System (ADS)
McCreight, J. L.; Gochis, D. J.; Hoar, T.; Dugger, A. L.; Yu, W.
2014-12-01
Uncertainty in precipitation forcing, soil moisture states, and model groundwater fluxes are first-order sources of error in streamflow forecasting. While near-surface estimates of soil moisture are now available from satellite, very few soil moisture observations below 5 cm depth or groundwater discharge estimates are available for operational forecasting. Radar precipitation estimates are subject to large biases, particularly during extreme events (e.g. Steiner et al., 1999), and their correction is not typically available in real time. Streamflow data, however, are readily available in near-real time and can be assimilated operationally to help constrain these uncertain states and improve streamflow forecasts. We examine the ability of streamflow observations to diagnose bias in the three most uncertain variables: precipitation forcing, soil moisture states, and groundwater fluxes. We investigate strategies for their subsequent bias correction. These include spinup and calibration strategies with and without the use of data assimilation and the determination of the proper spinup timescales. Global and spatially distributed multipliers on the uncertain states included in the assimilation state vector (e.g. Seo et al., 2003) will also be evaluated. We examine real cases and observing system simulation experiments for both normal and extreme rainfall events. One of our test cases considers the Colorado Front Range flood of September 2013, where the range of disagreement among five precipitation estimates spanned a factor of five, with only one exhibiting appreciable positive bias (Gochis et al., submitted). Our experiments are conducted using the WRF-Hydro model with the NoahMP land surface component and the Data Assimilation Research Testbed (DART). A variety of ensemble data assimilation approaches (filters) are considered.
References: Gochis, DJ, et al. "The Great Colorado Flood of September 2013." BAMS (submitted 4-7-14). Seo, DJ, V Koren, and N Cajina. "Real-time variational assimilation of hydrologic and hydrometeorological data into operational hydrologic forecasting." J Hydromet (2003). Steiner, Matthias, JA Smith, SJ Burges, CV Alonso, and RW Darden. "Effect of bias adjustment and rain gauge data quality control on radar rainfall estimation." WRR (1999).
NASA Astrophysics Data System (ADS)
Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.
2010-12-01
Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, unlike the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be done soundly. Here, we propose a new procedure, the concurrent multiplicative-additive objective analysis scheme (CMA-OAS), for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through successive correction of residuals resulting from a Gaussian kernel smoother applied to spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level.
The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly toward the first guess, with the relaxation shape obtained from the sequential optimization. The procedure is suited to relatively sparse rain gauge networks. To demonstrate the procedure, six storms were analyzed at hourly steps over 10,663 km2. Results generally indicated improved quality with respect to the other methods evaluated: a standard mean-field bias adjustment, an OAS spatially variable adjustment with multiplicative factors, ordinary cokriging, and kriging with external drift. In theory, the approach is equally applicable to gauge-satellite estimates and other hydrometeorological variables.
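The simplest baseline in that comparison, the standard mean-field bias adjustment, is easy to sketch: one multiplicative factor, the ratio of total gauge rainfall to total radar rainfall at the gauge locations, scales the entire radar field. The numbers below are invented for illustration; the CMA-OAS improves on this by letting the bias vary in space and splitting it into multiplicative and additive parts.

```python
import numpy as np

def mean_field_bias_adjust(radar_field, radar_at_gauges, gauge_obs):
    """Scale the whole radar field by the gauge/radar ratio computed
    at the gauge locations (standard mean-field bias adjustment)."""
    factor = gauge_obs.sum() / radar_at_gauges.sum()
    return factor * radar_field

# Hypothetical data: radar underestimates gauge totals by a factor of 2.
field = np.array([1.0, 3.0])
adjusted = mean_field_bias_adjust(field,
                                  radar_at_gauges=np.array([1.0, 2.0]),
                                  gauge_obs=np.array([2.0, 4.0]))
# adjusted: [2.0, 6.0]
```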
Correction of Selection Bias in Survey Data: Is the Statistical Cure Worse Than the Bias?
Hanley, James A
2017-04-01
In previous articles in the American Journal of Epidemiology (Am J Epidemiol. 2013;177(5):431-442) and American Journal of Public Health (Am J Public Health. 2013;103(10):1895-1901), Masters et al. reported age-specific hazard ratios for the contrasts in mortality rates between obesity categories. They corrected the observed hazard ratios for selection bias caused by what they postulated was the nonrepresentativeness of the participants in the National Health Interview Survey that increased with age, obesity, and ill health. However, it is possible that their regression approach to remove the alleged bias has not produced, and in general cannot produce, sensible hazard ratio estimates. First, we must consider how many nonparticipants there might have been in each category of obesity and of age at entry and how much higher the mortality rates would have to be in nonparticipants than in participants in these same categories. What plausible set of numerical values would convert the ("biased") decreasing-with-age hazard ratios seen in the data into the ("unbiased") increasing-with-age ratios that they computed? Can these values be encapsulated in (and can sensible values be recovered from) one additional internal variable in a regression model? Second, one must examine the age pattern of the hazard ratios that have been adjusted for selection. Without the correction, the hazard ratios are attenuated with increasing age. With it, the hazard ratios at older ages are considerably higher, but those at younger ages are well below one. Third, one must test whether the regression approach suggested by Masters et al. would correct the nonrepresentativeness that increased with age and ill health that I introduced into real and hypothetical data sets. I found that the approach did not recover the hazard ratio patterns present in the unselected data sets: the corrections overshot the target at older ages and undershot it at lower ages.
Recent Advances in the Salinity Retrieval Algorithms for Aquarius and SMAP
NASA Astrophysics Data System (ADS)
Meissner, T.; Wentz, F. J.
2016-12-01
Our presentation discusses the latest improvements in the salinity retrievals for both Aquarius and SMAP since the last releases. Aquarius V4.0 was released in June 2015 and SMAP V1.0 was released in November 2015. Upcoming releases are planned for SMAP (V2.0) in August 2016 and for Aquarius (V5.0) in late 2017. The full 360° look capability of SMAP makes it possible to take observations from the forward and backward looking directions at the same instant of time. This two-look capability strongly aids the salinity retrievals. One of the largest spurious contaminations in the salinity retrievals is caused by the galaxy reflected from the ocean surface. Because in most instances the reflected galaxy appears in only the forward or only the backward look, it is possible to determine its contribution by taking the difference of the measured SMAP brightness temperatures between the two looks. Our results suggest that the surface roughness used in the galactic correction needs to be increased and that the strength of some of the galactic sources needs to be slightly adjusted. The improved galaxy correction is being implemented in upcoming Aquarius and SMAP salinity releases and strongly aids the mitigation of residual zonal and temporal biases observed in both products. Another major cause of the observed zonal biases in SMAP is the emissive SMAP mesh antenna. Correcting for it requires the physical temperature of the antenna, for which no direct measurements but only a thermal model are available. We discuss recent improvements in the correction for the emissive SMAP antenna and show how most of the zonal biases in V1.0 can be mitigated. Finally, we show that observed salty biases at higher northern latitudes can be explained by inaccuracies in the model used to correct for absorption by atmospheric oxygen. These biases can be decreased by fine-tuning the parameters of the absorption model.
NASA Astrophysics Data System (ADS)
Hasan, M. Alfi; Islam, A. K. M. Saiful; Akanda, Ali Shafqat
2017-11-01
In the era of global warming, insight into future climate and its changing extremes is critical for climate-vulnerable regions of the world. In this study, we conducted a robust assessment of Regional Climate Model (RCM) results in a monsoon-dominated region within the new Coupled Model Intercomparison Project Phase 5 (CMIP5) and the latest Representative Concentration Pathways (RCP) scenarios. We applied an advanced bias correction approach to five RCM simulations in order to project future climate and associated extremes over Bangladesh, a critically climate-vulnerable country with a complex monsoon system. We also generated a new gridded product that performed better in capturing observed climatic extremes than existing products. The bias correction approach provided a notable improvement in capturing precipitation extremes as well as the mean climate. The majority of the projected multi-model RCMs indicate an increase in rainfall, although one model shows contrary results for the 2080s (2071-2100). The multi-model mean shows that nighttime temperatures will increase much faster than daytime temperatures and that average annual temperatures are projected to be as hot as present-day summer temperatures. The expected increases in precipitation and temperature over the hilly areas are higher than in other parts of the country. Overall, the projected extremes of future rainfall are more variable than those of temperature. According to the majority of the models, the number of heavy rain days will increase in future years. The severity of summer-day temperatures will be alarming, especially over hilly regions, where winters are relatively warm. The projected rise of both precipitation and temperature extremes over the intense-rainfall-prone northeastern region of the country creates a possibility of devastating flash floods with harmful impacts on agriculture. 
Moreover, the effect of bias correction, presented as the probable changes of both bias-corrected and uncorrected extremes, can be considered in future policy making.
Correcting length-frequency distributions for imperfect detection
Breton, André R.; Hawkins, John A.; Winkelman, Dana L.
2013-01-01
Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data are available.
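The core expansion step in the abstract (dividing each length-class count by its estimated capture probability, a Horvitz-Thompson-style correction) can be sketched as follows. The counts and probabilities are hypothetical; in the study the probabilities come from a fitted Huggins mark-recapture model in which capture probability depends on fish length, year, pass, and prior capture, rather than being known constants.

```python
import numpy as np

def correct_length_frequency(counts, capture_prob):
    """Adjust raw counts per length-class for imperfect detection by
    dividing each count by its estimated capture probability."""
    counts = np.asarray(counts, dtype=float)
    p = np.asarray(capture_prob, dtype=float)
    return counts / p

# Hypothetical data: small fish are caught far less efficiently,
# so the raw counts understate their true abundance.
raw = [12, 50]          # counts for a small and a large length-class
p = [0.12, 0.5]         # length-specific capture probabilities
estimated = correct_length_frequency(raw, p)  # both expand to ~100
```

This illustrates how identical true abundances can yield very different raw counts, which is exactly the negative bias for small length-classes reported in the abstract.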
NASA Astrophysics Data System (ADS)
Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan
2005-04-01
The reasons for biases in regional climate simulations were investigated in an attempt to discern whether they arise from deficiencies in the model parameterizations or from dynamical problems. Using the Regional Atmospheric Modeling System (RAMS) forced by the National Centers for Environmental Prediction-National Center for Atmospheric Research reanalysis, the detailed climate over North America at 50-km resolution for June 2000 was simulated. First, the RAMS equations were modified to make them applicable to a large region, and its turbulence parameterization was corrected. The initial simulations showed large biases in the location of precipitation patterns and in surface air temperatures. By implementing higher-resolution soil data, soil moisture and soil temperature initialization, and corrections to the Kain-Fritsch convective scheme, the temperature biases and precipitation amount errors could be removed, but the precipitation location errors remained. The precipitation location biases could only be improved by implementing spectral nudging of the large-scale (wavelength of 2500 km) dynamics in RAMS. This corrected for circulation errors produced by interactions and reflection of the internal domain dynamics with the lateral boundaries where the model was forced by the reanalysis.
Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei
2013-02-01
Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need to be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.
NASA Technical Reports Server (NTRS)
Vila, Daniel; deGoncalves, Luis Gustavo; Toll, David L.; Rozante, Jose Roberto
2008-01-01
This paper describes a comprehensive assessment of a new high-resolution, high-quality gauge-satellite-based analysis of daily precipitation over continental South America during 2004. The methodology is based on a combination of additive and multiplicative bias correction schemes designed to achieve the lowest bias when compared with observed values. Intercomparison and cross-validation tests have been carried out for the control algorithm (the TMPA real-time algorithm) and different merging schemes: additive bias correction (ADD), ratio bias correction (RAT), and the TMPA research version, for months belonging to different seasons and for different network densities. All compared merging schemes produce better results than the control algorithm, but when finer temporal (daily) and spatial scale (regional network) gauge datasets are included in the analysis, the improvement is remarkable. The Combined Scheme (CoSch) consistently presents the best performance among the five techniques. This is also true when a degraded daily gauge network is used instead of the full dataset. This technique appears to be a suitable tool to produce real-time, high-resolution, high-quality gauge-satellite-based analyses of daily precipitation over land in regional domains.
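In their simplest domain-wide form, the additive (ADD) and ratio (RAT) schemes compared above reduce to the following sketch. The real schemes distribute the corrections in space from the gauge network rather than applying a single value, and the arrays below are invented for illustration.

```python
import numpy as np

def additive_correction(sat, sat_at_gauges, gauge_obs):
    """ADD scheme (simplified): shift the satellite field by the mean
    gauge-minus-satellite difference at the gauge locations."""
    return sat + (gauge_obs.mean() - sat_at_gauges.mean())

def ratio_correction(sat, sat_at_gauges, gauge_obs):
    """RAT scheme (simplified): scale the satellite field by the
    total gauge / total satellite ratio at the gauge locations."""
    return sat * (gauge_obs.sum() / sat_at_gauges.sum())

sat_at_gauges = np.array([2.0, 4.0])   # satellite estimates at gauges
gauges = np.array([3.0, 5.0])          # gauge observations
field = np.array([6.0])                # satellite value away from gauges
additive_correction(field, sat_at_gauges, gauges)  # shift of +1 -> [7.0]
ratio_correction(field, sat_at_gauges, gauges)     # factor 8/6  -> [8.0]
```

The additive form preserves the field's spatial gradients, while the ratio form preserves zero-rain areas, which is one reason the paper combines the two.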
Ethnic Group Bias in Intelligence Test Items.
ERIC Educational Resources Information Center
Scheuneman, Janice
In previous studies of ethnic group bias in intelligence test items, the question of bias has been confounded with ability differences between the ethnic group samples compared. The present study is based on a conditional probability model in which an unbiased item is defined as one where the probability of a correct response to an item is the…
Ditto, Peter H; Wojcik, Sean P; Chen, Eric Evan; Grady, Rebecca Hofstein; Ringel, Megan M
2015-01-01
Duarte et al. are right to worry about political bias in social psychology, but they underestimate the difficulty of correcting it. Both liberals and conservatives show partisan bias that often worsens with cognitive sophistication. More non-liberals in social psychology is unlikely to speed our convergence upon the truth, although it may broaden the questions we ask and the data we collect.
NASA Astrophysics Data System (ADS)
Kuik, Friderike; Kerschbaumer, Andreas; Lauer, Axel; Lupascu, Aurelia; von Schneidemesser, Erika; Butler, Tim M.
2018-06-01
With NO2 limit values frequently exceeded, complying with European air quality regulations still poses a problem for many European cities. Traffic is typically a major source of NOx emissions in urban areas. High-resolution chemistry transport modelling can help to assess the impact of high urban NOx emissions on air quality inside and outside of urban areas. However, many modelling studies report an underestimation of modelled NOx and NO2 compared with observations. Part of this model bias has been attributed to an underestimation of NOx emissions, particularly in urban areas. This is consistent with recent measurement studies quantifying underestimations of urban NOx emissions by current emission inventories, identifying the largest discrepancies when the contribution of traffic NOx emissions is high. This study applies a high-resolution chemistry transport model in combination with ambient measurements in order to assess the potential underestimation of traffic NOx emissions in a frequently used emission inventory. The emission inventory is based on officially reported values and the Berlin-Brandenburg area in Germany is used as a case study. The WRF-Chem model is used at a 3 km × 3 km horizontal resolution, simulating the whole year of 2014. The emission data are downscaled from an original resolution of ca. 7 km × 7 km to a resolution of 1 km × 1 km. An in-depth model evaluation including spectral decomposition of observed and modelled time series and error apportionment suggests that an underestimation in traffic emissions is likely one of the main causes of the bias in modelled NO2 concentrations in the urban background, where NO2 concentrations are underestimated by ca. 8 µg m-3 (-30 %) on average over the whole year. Furthermore, the diurnal cycle of the bias in modelled NO2 suggests that a more realistic treatment of the diurnal cycle of traffic emissions might be needed. 
Model problems in simulating the correct mixing in the urban planetary boundary layer probably also contribute to the model bias, particularly in summer. Taking this and other possible sources of model bias into account, a correction factor for traffic NOx emissions of ca. 3 is estimated for weekday daytime traffic emissions in the core urban area, which corresponds to an overall underestimation of traffic NOx emissions in the core urban area of ca. 50 %. Sensitivity simulations for the months of January and July using the calculated correction factor show that the weekday model bias can be improved from −8.8 µg m⁻³ (−26 %) to −5.4 µg m⁻³ (−16 %) in January on average in the urban background, and from −10.3 µg m⁻³ (−46 %) to −7.6 µg m⁻³ (−34 %) in July. In addition, the negative bias of weekday NO2 concentrations downwind of the city in the rural and suburban background can be reduced from −3.4 µg m⁻³ (−12 %) to −1.2 µg m⁻³ (−4 %) in January and from −3.0 µg m⁻³ (−22 %) to −1.9 µg m⁻³ (−14 %) in July. The results and their consistency with findings from other studies suggest that more research is needed to understand the spatial and temporal variability of real-world NOx emissions from traffic more accurately, and to apply this understanding to the inventories used in high-resolution chemical transport models.
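The arithmetic linking a daytime-only correction factor to an overall underestimation can be sketched as follows. The diurnal emission profile below is purely illustrative (the abstract does not give one); the implied overall underestimation depends on the daytime share of emissions, so the printed value differs somewhat from the study's ca. 50 %.

```python
# Sketch: apply a correction factor only to weekday daytime traffic NOx
# emissions and compute the implied overall underestimation of the
# inventory. The hourly profile is a hypothetical placeholder.

def scaled_total(hourly_emissions, factor, daytime_hours):
    """Scale emissions in the given hours by `factor`; return the new total."""
    return sum(e * factor if h in daytime_hours else e
               for h, e in enumerate(hourly_emissions))

# Hypothetical weekday diurnal cycle (arbitrary units): low at night,
# elevated from 06:00 to 20:00.
profile = [1.0] * 24
for h in range(6, 20):
    profile[h] = 3.0

raw = sum(profile)                                   # reported inventory total
corrected = scaled_total(profile, 3.0, set(range(6, 20)))

# Overall underestimation of the raw inventory relative to the corrected one.
underestimation = 1.0 - raw / corrected
print(round(underestimation, 2))  # -> 0.62 for this illustrative profile
```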
Standing on the shoulders of giants: improving medical image segmentation via bias correction.
Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul
2010-01-01
We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.
Investigation of Particle Sampling Bias in the Shear Flow Field Downstream of a Backward Facing Step
NASA Technical Reports Server (NTRS)
Meyers, James F.; Kjelgaard, Scott O.; Hepner, Timothy E.
1990-01-01
The flow field about a backward facing step was investigated to determine the characteristics of particle sampling bias in the various flow phenomena. The investigation used the velocity:data rate correlation coefficient as a measure of statistical dependence and thus of the degree of velocity bias. While the investigation found negligible dependence within the free stream region, increased dependence was found within the boundary and shear layers. Applying full classic correction techniques over-compensated the data, since the dependence was weak even in the boundary layer and shear regions. The paper emphasizes the necessity of determining the degree of particle sampling bias for each measurement ensemble rather than correcting the data with generalized assumptions. Further, it recommends that calculation of the velocity:data rate correlation coefficient become a standard statistical step in the analysis of all laser velocimeter data.
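The recommended diagnostic can be sketched as a plain Pearson correlation between each sample's velocity and the local data rate. The inter-arrival-time-based rate estimate below is one common choice, not necessarily the authors' exact formulation.

```python
# Sketch of the velocity:data rate correlation coefficient for a laser
# velocimeter ensemble. A correlation near zero suggests negligible
# velocity bias; a strong correlation indicates biased sampling.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def velocity_data_rate_correlation(velocities, arrival_times):
    # Approximate the instantaneous data rate for each sample as the
    # reciprocal of the inter-arrival time preceding it.
    rates = [1.0 / (t1 - t0)
             for t0, t1 in zip(arrival_times, arrival_times[1:])]
    return pearson(velocities[1:], rates)

# Example: faster particles arriving more frequently produce rates that
# rise with velocity, hence a strong positive correlation.
vels = [1.0, 2.0, 3.0, 4.0, 5.0]
times = [0.0, 1.0, 1.5, 1.5 + 1 / 3, 1.5 + 1 / 3 + 0.25]
r = velocity_data_rate_correlation(vels, times)
```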
DOE Office of Scientific and Technical Information (OSTI.GOV)
Namikawa, Toshiya
We present here a new method for delensing B modes of the cosmic microwave background (CMB) using a lensing potential reconstructed from the same realization of the CMB polarization (CMB internal delensing). B-mode delensing is required to improve sensitivity to primary B modes generated by, e.g., inflationary gravitational waves, axionlike particles, modified gravity, primordial magnetic fields, and topological defects such as cosmic strings. However, CMB internal delensing suffers from substantial biases due to correlations between the observed CMB maps to be delensed and those used for reconstructing the lensing potential. Since the bias depends on realizations, we construct a realization-dependent (RD) estimator for correcting these biases by deriving a general optimal estimator for higher-order correlations. The RD method is less sensitive to simulation uncertainties. Compared to the previous ℓ-splitting method, we find that the RD method corrects the biases without substantial degradation of the delensing efficiency.
Gender bias in child care and child health: global patterns.
Khera, Rohan; Jain, Snigdha; Lodha, Rakesh; Ramakrishnan, Sivasubramanian
2014-04-01
Gender-based discrimination is reported across the spectrum of paediatric healthcare including emergency, inpatient, outpatient and preventive care and is mostly reported from South Asia and China with sporadic reports from Africa and South America. Biases against young girls have been documented even in immunisation percentage, home food allocation, seeking medical care for childhood ailments and percentage of household healthcare expenditures allocated to them. Such gender discrimination in access to medical care is likely to have an influence on the overall health of female children. Over the last five decades, the under-5 sex ratios are worsening in India with declining number of girls. Deliberate parental neglect of girls' essential and life-saving medical care is also an important contributing factor apart from sex-selective abortions to the declining gender ratios. Corrective measures and focused action are needed.
NASA Technical Reports Server (NTRS)
Kumar, S. V.; Peters-Lidard, C. D.; Santanello, J. A.; Reichle, R. H.; Draper, C. S.; Koster, R. D.; Nearing, G.; Jasinski, M. F.
2015-01-01
Earth's land surface is characterized by tremendous natural heterogeneity and human-engineered modifications, both of which are challenging to represent in land surface models. Satellite remote sensing is often the most practical and effective method to observe the land surface over large geographical areas. Agricultural irrigation is an important human-induced modification to natural land surface processes, as it is pervasive across the world and because of its significant influence on the regional and global water budgets. In this article, irrigation is used as an example of a human-engineered, often unmodeled land surface process, and the utility of satellite soil moisture retrievals over irrigated areas in the continental US is examined. Such retrievals are based on passive or active microwave observations from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), the Advanced Microwave Scanning Radiometer 2 (AMSR2), the Soil Moisture Ocean Salinity (SMOS) mission, WindSat and the Advanced Scatterometer (ASCAT). The analysis suggests that the skill of these retrievals for representing irrigation effects is mixed, with ASCAT-based products somewhat more skillful than SMOS and AMSR2 products. The article then examines the suitability of typical bias correction strategies in current land data assimilation systems when unmodeled processes dominate the bias between the model and the observations. Using a suite of synthetic experiments that includes bias correction strategies such as quantile mapping and trained forward modeling, it is demonstrated that the bias correction practices lead to the exclusion of the signals from unmodeled processes, if these processes are the major source of the biases. It is further shown that new methods are needed to preserve the observational information about unmodeled processes during data assimilation.
Anisotropic extinction distortion of the galaxy correlation function
NASA Astrophysics Data System (ADS)
Fang, Wenjuan; Hui, Lam; Ménard, Brice; May, Morgan; Scranton, Ryan
2011-09-01
Similar to the magnification of galaxies’ fluxes by gravitational lensing, the extinction of the fluxes by cosmic dust, whose existence was recently detected by Ménard, Scranton, Fukugita, and Richards [Mon. Not. R. Astron. Soc. 405, 1025 (2010), DOI: 10.1111/j.1365-2966.2010.16486.x], also modifies the distribution of a flux-selected galaxy sample. We study the anisotropic distortion by dust extinction of the 3D galaxy correlation function, including magnification bias and redshift distortion at the same time. We find the extinction distortion is most significant along the line of sight and at large separations, similar to that from magnification bias. The correction from dust extinction is negative except at sufficiently large transverse separations, which is almost always opposite to that from magnification bias (we consider a number count slope s > 0.4). Hence, the distortions from these two effects tend to reduce each other. At low z (≲1), the distortion by extinction is stronger than that by magnification bias, but at high z the reverse holds. We also study how dust extinction affects real-space probes of the baryon acoustic oscillations (BAO) and the linear redshift distortion parameter β. We find its effect on BAO is negligible. However, it introduces a positive scale-dependent correction to β that can be as large as a few percent. We also find a negative scale-dependent correction from magnification bias, which is up to percent level at low z but grows to ˜40% at high z. These corrections are non-negligible for precision cosmology and should be considered when testing General Relativity through the scale dependence of β.
Methodological challenges to bridge the gap between regional climate and hydrology models
NASA Astrophysics Data System (ADS)
Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph; Felder, Guido
2017-04-01
The frequency and severity of floods worldwide, together with their impacts, are expected to increase under climate change scenarios. It is therefore very important to gain insight into the physical mechanisms responsible for such events in order to constrain the associated uncertainties. Model simulations of climate and hydrological processes are important tools that can provide insight into the underlying physical processes and thus enable an accurate assessment of the risks. Coupled together, they can provide a physically consistent picture that allows the phenomenon to be assessed in a comprehensive way. However, climate and hydrological models work at different temporal and spatial scales, so a number of methodological challenges need to be carefully addressed. An important issue pertains to the presence of biases in the simulation of precipitation. Climate models in general, and Regional Climate Models (RCMs) in particular, are affected by a number of systematic biases that limit their reliability. In many studies, most prominently assessments of changes due to climate change, such biases are minimised by applying the so-called delta approach, which focuses on changes while disregarding the absolute values that are more affected by biases. However, this approach is not suitable here, as the absolute value of precipitation, rather than the change, is fed into the hydrological model. The bias therefore has to be removed beforehand, a complex matter for which various methodologies have been proposed. In this study, we apply and discuss the advantages and caveats of two different methodologies that correct the simulated precipitation to minimise differences with respect to an observational dataset: a linear fit (FIT) of the accumulated distributions and Quantile Mapping (QM). The target region is Switzerland, and the observational dataset is therefore provided by MeteoSwiss.
The RCM is the Weather Research and Forecasting (WRF) model, driven at the boundaries by the Community Earth System Model (CESM). The raw CESM-driven simulation exhibits prominent biases that stand out in the evolution of the annual cycle and demonstrate that bias correction is mandatory in this type of study, rather than a minor adjustment that might be neglected. The simulation spans the period 1976-2005, although the correction is applied on a daily basis. Both methods lead to a corrected precipitation field that respects the temporal evolution of the simulated precipitation while mimicking the distribution of precipitation in the observations. Due to the nature of the two methodologies, there are important differences between the two corrected products, leading to datasets with different properties. FIT is generally more accurate in reproducing the tails of the distribution, i.e. extreme events, whereas the nature of QM renders it a general-purpose correction whose skill is evenly distributed across the full distribution of precipitation, including central values.
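The core idea of quantile mapping discussed in the abstracts above can be sketched in a few lines: each simulated value is replaced by the observed value at the same empirical quantile. Real implementations interpolate between quantiles and handle wet-day frequency and out-of-sample values; this is a minimal illustration only.

```python
# Minimal empirical quantile-mapping (QM) sketch: map each simulated
# value to the observed value at the same empirical quantile.
import bisect

def quantile_map(simulated, observed):
    sim_sorted = sorted(simulated)
    obs_sorted = sorted(observed)
    n = len(sim_sorted)
    corrected = []
    for x in simulated:
        # Empirical quantile of x in the simulated distribution.
        q = bisect.bisect_left(sim_sorted, x) / (n - 1)
        # Value at the same quantile of the observed distribution.
        idx = min(int(round(q * (len(obs_sorted) - 1))), len(obs_sorted) - 1)
        corrected.append(obs_sorted[idx])
    return corrected

# A simulated series with a wet bias is pulled onto the observed distribution.
sim = [2.0, 4.0, 6.0, 8.0, 10.0]
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(quantile_map(sim, obs))  # -> [1.0, 2.0, 3.0, 4.0, 5.0]
```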
The SAMI Galaxy Survey: can we trust aperture corrections to predict star formation?
NASA Astrophysics Data System (ADS)
Richards, S. N.; Bryant, J. J.; Croom, S. M.; Hopkins, A. M.; Schaefer, A. L.; Bland-Hawthorn, J.; Allen, J. T.; Brough, S.; Cecil, G.; Cortese, L.; Fogarty, L. M. R.; Gunawardhana, M. L. P.; Goodwin, M.; Green, A. W.; Ho, I.-T.; Kewley, L. J.; Konstantopoulos, I. S.; Lawrence, J. S.; Lorente, N. P. F.; Medling, A. M.; Owers, M. S.; Sharp, R.; Sweet, S. M.; Taylor, E. N.
2016-01-01
In the low-redshift Universe (z < 0.3), our view of galaxy evolution is primarily based on fibre optic spectroscopy surveys. Elaborate methods have been developed to address aperture effects when fixed aperture sizes only probe the inner regions for galaxies of ever decreasing redshift or increasing physical size. These aperture corrections rely on assumptions about the physical properties of galaxies. The adequacy of these aperture corrections can be tested with integral-field spectroscopic data. We use integral-field spectra drawn from 1212 galaxies observed as part of the SAMI Galaxy Survey to investigate the validity of two aperture correction methods that attempt to estimate a galaxy's total instantaneous star formation rate. We show that biases arise when assuming that instantaneous star formation is traced by broad-band imaging, and when the aperture correction is built only from spectra of the nuclear region of galaxies. These biases may be significant depending on the selection criteria of a survey sample. Understanding the sensitivities of these aperture corrections is essential for correct handling of systematic errors in galaxy evolution studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favazza, C; Fetterly, K
2016-06-15
Purpose: Application of a channelized Hotelling model observer (CHO) over a wide range of x-ray angiography detector target dose (DTD) levels demonstrated substantial bias for conditions yielding low detectability indices (d’), including low DTD and small test objects. The purpose of this work was to develop theory and methods to correct this bias. Methods: A hypothesis was developed wherein the measured detectability index (d’b) for a known test object is positively biased by temporally variable non-stationary noise in the images. Hotelling’s T2 test statistic provided the foundation for a mathematical theory which accounts for independent contributions to the measured d’b value from both the test object (d’o) and non-stationary noise (d’ns). Experimental methods were developed to directly estimate d’o by determining d’ns and subtracting it from d’b, in accordance with the theory. Specifically, d’ns was determined from two sets of images from which the traditional test object was withheld. This method was applied to angiography images with DTD levels in the range 0 to 240 nGy and for disk-shaped iodine-based contrast targets with diameters of 0.5 to 4.0 mm. Results: Bias in d’ was evidenced by d’b values which exceeded those expected from a quantum-limited imaging system as object size and DTD decreased. d’ns increased with decreasing DTD, reaching a maximum of 2.6 for DTD = 0. Bias-corrected d’o estimates demonstrated sub-quantum-limited performance of the x-ray angiography system at low DTD. Findings demonstrated that the source of the non-stationary noise was detector electronic readout noise. Conclusion: Theory and methods to estimate and correct bias in CHO measurements caused by temporally variable non-stationary noise were presented. The temporal non-stationary noise was shown to be due to electronic readout noise. This method facilitates accurate estimates of d’ values over a large range of object sizes and detector target doses.
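The subtraction step described above can be sketched as follows. The abstract does not state the exact combination rule; since the T²-based theory treats the object and noise contributions as independent, a subtraction in quadrature is assumed here, which is a hedged reading rather than the authors' published formula.

```python
# Sketch of removing the non-stationary-noise contribution (d'_ns) from
# a measured detectability index (d'_b), assuming the two independent
# contributions add in quadrature.
import math

def corrected_detectability(d_b, d_ns):
    """Object-only detectability d'_o, clipped at zero for d_ns >= d_b."""
    return math.sqrt(max(d_b ** 2 - d_ns ** 2, 0.0))

# Example: a measured d'_b of 3.0 with d'_ns = 2.6 (the maximum reported
# at zero detector target dose) leaves a much smaller object-only d'_o.
print(round(corrected_detectability(3.0, 2.6), 3))  # -> 1.497
```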
Lujan, Heidi L; DiCarlo, Stephen E
2018-06-01
Spirometers are used globally to diagnose respiratory diseases, and most commercially available spirometers "correct" for race. "Race correction" is built into the software of spirometers. To evaluate pulmonary function and to make recordings, the operator must enter the subject's race. In fact, the Joint Working Party of the American Thoracic Society/European Respiratory Society recommends the use of race- and ethnic-specific reference values. In the United States, spirometers apply correction factors of 10-15% for individuals labeled "Black" and 4-6% for people labeled "Asian." Thus race is purported to be a biologically important and scientifically valid category. However, history suggests that race corrections may represent implicit bias, discrimination, and racism. Furthermore, this practice masks economic and environmental factors. The flawed logic of innate racial difference is also considered with disability estimates, preemployment physicals, and clinical diagnoses that rely on the spirometer. Thomas Jefferson's Notes on the State of Virginia (1832) may have initiated this mistaken belief by noting deficiencies of the "pulmonary apparatus" of blacks. Plantation physicians used Jefferson's statement to support slavery, believing that forced labor was a way to "vitalize the blood" of deficient black slaves. Samuel Cartwright, a Southern physician and slave holder, was the first to use spirometry to record deficiencies in pulmonary function of blacks. A massive study by Benjamin Apthorp Gould (1869) during the Civil War validated his results. The history of slavery created an environment where racial difference in lung capacity became so widely accepted that race correction became a scientifically valid procedure.
NASA Astrophysics Data System (ADS)
Gorbunov, Michael E.; Kirchengast, Gottfried
2018-01-01
A new reference occultation processing system (rOPS) will include a Global Navigation Satellite System (GNSS) radio occultation (RO) retrieval chain with integrated uncertainty propagation. In this paper, we focus on wave-optics bending angle (BA) retrieval in the lower troposphere and introduce (1) an empirically estimated boundary layer bias (BLB) model then employed to reduce the systematic uncertainty of excess phases and bending angles in about the lowest 2 km of the troposphere and (2) the estimation of (residual) systematic uncertainties and their propagation together with random uncertainties from excess phase to bending angle profiles. Our BLB model describes the estimated bias of the excess phase transferred from the estimated bias of the bending angle, for which the model is built, informed by analyzing refractivity fluctuation statistics shown to induce such biases. The model is derived from regression analysis using a large ensemble of Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) RO observations and concurrent European Centre for Medium-Range Weather Forecasts (ECMWF) analysis fields. It is formulated in terms of predictors and adaptive functions (powers and cross products of predictors), where we use six main predictors derived from observations: impact altitude, latitude, bending angle and its standard deviation, canonical transform (CT) amplitude, and its fluctuation index. Based on an ensemble of test days, independent of the days of data used for the regression analysis to establish the BLB model, we find the model very effective for bias reduction and capable of reducing bending angle and corresponding refractivity biases by about a factor of 5. The estimated residual systematic uncertainty, after the BLB profile subtraction, is lower bounded by the uncertainty from the (indirect) use of ECMWF analysis fields but is significantly lower than the systematic uncertainty without BLB correction. 
The systematic and random uncertainties are propagated from excess phase to bending angle profiles, using a perturbation approach and the wave-optical method recently introduced by Gorbunov and Kirchengast (2015), starting with estimated excess phase uncertainties. The results are encouraging and this uncertainty propagation approach combined with BLB correction enables a robust reduction and quantification of the uncertainties of excess phases and bending angles in the lower troposphere.
Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E
2007-09-01
Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was also found in the estimation of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.
A New Online Calibration Method Based on Lord's Bias-Correction.
He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei
2017-09-01
Online calibration has been widely employed to calibrate new items due to its advantages. Method A is the simplest online calibration method and has attracted much attention from researchers recently. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂s (obtained by maximum likelihood estimation [MLE]) as their true values θs; thus the deviation of the estimates θ̂s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.
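The iterative part of a Lord-style bias correction can be sketched generically: the MLE θ̂ is repeatedly adjusted by an analytic bias term evaluated at the current estimate. The bias function below is a hypothetical placeholder; the actual Lord's formula depends on the IRT model and the item parameters, which the abstract does not give.

```python
# Schematic fixed-point iteration in the spirit of MLE-LBCI:
# theta_{k+1} = theta_hat - bias_fn(theta_k), iterated to convergence.

def bias_corrected_theta(theta_hat, bias_fn, n_iter=50, tol=1e-8):
    """Iteratively subtract an analytic bias term from an MLE estimate."""
    theta = theta_hat
    for _ in range(n_iter):
        new_theta = theta_hat - bias_fn(theta)
        if abs(new_theta - theta) < tol:
            return new_theta
        theta = new_theta
    return theta

# Toy example with a linear bias model b(theta) = 0.1 * theta: the
# iteration converges to the fixed point theta_hat / 1.1.
theta_c = bias_corrected_theta(1.0, lambda t: 0.1 * t)
```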
Optimal two-stage enrichment design correcting for biomarker misclassification.
Zang, Yong; Guo, Beibei
2018-01-01
The enrichment design is an important clinical trial design to detect the treatment effect of the molecularly targeted agent (MTA) in personalized medicine. Under this design, patients are stratified into marker-positive and marker-negative subgroups based on their biomarker statuses and only the marker-positive patients are enrolled into the trial and randomized to receive either the MTA or a standard treatment. As the biomarker plays a key role in determining the enrollment of the trial, a misclassification of the biomarker can induce substantial bias, undermine the integrity of the trial, and seriously affect the treatment evaluation. In this paper, we propose a two-stage optimal enrichment design that utilizes the surrogate marker to correct for the biomarker misclassification. The proposed design is optimal in the sense that it maximizes the probability of correctly classifying each patient's biomarker status based on the surrogate marker information. In addition, after analytically deriving the bias caused by the biomarker misclassification, we develop a likelihood ratio test based on the EM algorithm to correct for such bias. We conduct comprehensive simulation studies to investigate the operating characteristics of the optimal design and the results confirm the desirable performance of the proposed design.
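The paper's EM-based correction is specific to its trial model; as a minimal illustration of how misclassification biases a marker-positive rate, here is the standard matrix correction, which relates the observed positive rate to the true rate through assay sensitivity and specificity: p_obs = sens·p_true + (1 − spec)·(1 − p_true).

```python
# Standard misclassification correction for a binary biomarker status,
# given the classification rule's sensitivity and specificity. This is
# a textbook illustration, not the paper's EM/likelihood-ratio method.

def corrected_prevalence(p_obs, sensitivity, specificity):
    """Invert p_obs = sens*p + (1 - spec)*(1 - p) for the true rate p."""
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("classifier is uninformative (sens + spec <= 1)")
    p = (p_obs - (1.0 - specificity)) / denom
    return min(max(p, 0.0), 1.0)  # clamp to [0, 1]

# Example: 40% observed marker-positives with 90% sensitivity and 95%
# specificity imply a somewhat higher true marker-positive rate.
print(round(corrected_prevalence(0.40, 0.90, 0.95), 3))  # -> 0.412
```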
Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction
NASA Technical Reports Server (NTRS)
Kolodziejczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.
2013-01-01
The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches in which applying systematic error correction algorithms to the pixel time series, rather than to the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients, derived in the fit of pixel time series to the CBV, as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of the so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
Modulation of Soil Initial State on WRF Model Performance Over China
NASA Astrophysics Data System (ADS)
Xue, Haile; Jin, Qinjian; Yi, Bingqi; Mullendore, Gretchen L.; Zheng, Xiaohui; Jin, Hongchun
2017-11-01
The soil state (e.g., temperature and moisture) in a mesoscale numerical prediction model is typically initialized by reanalysis or analysis data that may be subject to large bias. Such bias may lead to unrealistic land-atmosphere interactions. This study shows that the Climate Forecast System Reanalysis (CFSR) dramatically underestimates soil temperature and overestimates soil moisture over most parts of China in the first (0-10 cm) and second (10-25 cm) soil layers compared to in situ observations in July 2013. A correction based on global optimal dual kriging is employed to correct the CFSR bias in soil temperature and moisture using in situ observations. To investigate the impacts of the corrected soil state on model forecasts, two numerical simulations were conducted using the Weather Research and Forecasting model: a control run with the CFSR soil state and a disturbed run with the corrected soil state. All simulations are initialized 4 times per day and run for 48 h. Model results show that the corrected soil state, for example a warmer and drier surface over most parts of China, can enhance evaporation over wet regions, which changes the overlying atmospheric temperature and moisture. The changes in the lifting condensation level, level of free convection, and water transport due to the corrected soil state favor precipitation over wet regions, while inhibiting precipitation over dry regions. Moreover, diagnoses indicate that remote moisture flux convergence plays a dominant role in the precipitation changes over the wet regions.
Wagner, Chad R.; Mueller, David S.
2011-01-01
A negative bias in discharge measurements made with an acoustic Doppler current profiler (ADCP) can be caused by the movement of sediment on or near the streambed. The integration of a global positioning system (GPS) to track the movement of the ADCP can be used to avoid the systematic negative bias associated with a moving streambed. More than 500 discharge transects from 63 discharge measurements with GPS data were collected at sites throughout the US, Canada, and New Zealand with no moving bed to compare GPS- and bottom-track-referenced discharges. Although the data indicated some statistical bias depending on site conditions and the type of GPS data used, these biases were typically about 0.5% or less. An assessment of differential correction sources was limited by a lack of data collected with a range of different correction sources and different GPS receivers at the same sites. Despite this limitation, the data indicate that the use of Wide Area Augmentation System (WAAS) corrected positional data is acceptable for discharge measurements using GGA as the boat-velocity reference. The discharge data based on GPS-referenced boat velocities from the VTG data string, which does not require differential correction, were comparable to the discharges based on GPS-referenced boat velocities from the differentially corrected GGA data string. Spatial variability of measured discharges referenced to GGA, VTG, and bottom-tracking is higher near the channel banks. The spatial variability of VTG-referenced discharges is correlated with the spatial distribution of maximum Horizontal Dilution of Precision (HDOP) values, and the spatial variability of GGA-referenced discharges is correlated with proximity to channel banks.
De Kesel, Pieter M M; Capiau, Sara; Stove, Veronique V; Lambert, Willy E; Stove, Christophe P
2014-10-01
Although dried blood spot (DBS) sampling is increasingly receiving interest as a potential alternative to traditional blood sampling, the impact of hematocrit (Hct) on DBS results is limiting its final breakthrough in routine bioanalysis. To predict the Hct of a given DBS, potassium (K(+)) proved to be a reliable marker. The aim of this study was to evaluate whether application of an algorithm, based upon predicted Hct or K(+) concentrations as such, allowed correction for the Hct bias. Using validated LC-MS/MS methods, caffeine, chosen as a model compound, was determined in whole blood and corresponding DBS samples with a broad Hct range (0.18-0.47). A reference subset (n = 50) was used to generate an algorithm based on K(+) concentrations in DBS. Application of the developed algorithm on an independent test set (n = 50) alleviated the assay bias, especially at lower Hct values. Before correction, differences between DBS and whole blood concentrations ranged from -29.1 to 21.1%. The mean difference, as obtained by Bland-Altman comparison, was -6.6% (95% confidence interval (CI), -9.7 to -3.4%). After application of the algorithm, differences between corrected and whole blood concentrations lay between -19.9 and 13.9% with a mean difference of -2.1% (95% CI, -4.5 to 0.3%). The same algorithm was applied to a separate compound, paraxanthine, which was determined in 103 samples (Hct range, 0.17-0.47), yielding similar results. In conclusion, a K(+)-based algorithm allows correction for the Hct bias in the quantitative analysis of caffeine and its metabolite paraxanthine.
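The two-step strategy described above (predict Hct from the K(+) concentration of the spot, then rescale the concentration) can be sketched as follows. All coefficients below are hypothetical placeholders, not the study's fitted values, and the linear bias model is an assumption for illustration.

```python
# Hedged sketch of a K+-based hematocrit (Hct) correction for DBS assays:
# (1) fit a linear K+ -> Hct model on a reference subset, (2) rescale the
# measured DBS concentration by an assumed linear Hct-bias model.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum(x * y for x, y in zip(xs, ys)) - n * mx * my) / \
            (sum(x * x for x in xs) - n * mx * mx)
    return slope, my - slope * mx

def hct_corrected_concentration(c_dbs, k_conc, k_to_hct, bias_per_hct,
                                hct_ref=0.35):
    """Correct a DBS concentration for the Hct effect.

    k_to_hct:     (slope, intercept) mapping K+ concentration to predicted Hct.
    bias_per_hct: assumed fractional assay bias per unit Hct deviation
                  from the reference Hct (hypothetical model).
    """
    slope, intercept = k_to_hct
    hct_pred = slope * k_conc + intercept
    return c_dbs / (1.0 + bias_per_hct * (hct_pred - hct_ref))

# Example with placeholder coefficients: a sample whose predicted Hct
# equals the reference value is left unchanged.
model = fit_line([1.0, 2.0, 3.0], [0.15, 0.20, 0.25])  # toy K+ -> Hct data
c = hct_corrected_concentration(10.0, 5.0, (0.05, 0.10), 1.0)
```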
Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat
2017-05-01
Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (read "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We further observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.
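The Expectation Maximization loop that such quantification methods share can be illustrated with a minimal sketch. This toy version estimates only abundances from a fixed read-transcript compatibility matrix; Mix2 additionally parameterizes the fragment-start distribution as a mixture and updates those parameters in the same M-step.

```python
import numpy as np

def em_abundance(compat, n_iter=200):
    """Toy EM for transcript abundance. compat[r, t] is the likelihood
    of read r under transcript t (positional bias ignored here; a
    bias-aware method would fold a fragment-start model into this
    likelihood and re-estimate it each M-step)."""
    n_reads, n_tx = compat.shape
    theta = np.full(n_tx, 1.0 / n_tx)        # start from uniform abundances
    for _ in range(n_iter):
        # E-step: posterior assignment of each read to each transcript
        weighted = compat * theta
        gamma = weighted / weighted.sum(axis=1, keepdims=True)
        # M-step: abundances proportional to expected read counts
        theta = gamma.sum(axis=0) / n_reads
    return theta

# Two transcripts; 3 reads unique to t0, 1 unique to t1, 2 ambiguous
compat = np.array([[1.0, 0.0]] * 3 + [[0.0, 1.0]] + [[1.0, 1.0]] * 2)
theta = em_abundance(compat)   # converges to [0.75, 0.25]
```

The ambiguous reads are split in proportion to the current abundance estimates, which is exactly the fixed point the E and M steps converge to.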
Assessment of bias correction under transient climate change
NASA Astrophysics Data System (ADS)
Van Schaeybroeck, Bert; Vannitsem, Stéphane
2015-04-01
Calibration of climate simulations is necessary since large systematic discrepancies are generally found between the model climate and the observed climate. Recent studies have cast doubt upon the common assumption that the bias is stationary when the climate changes. This led to the development of new methods, mostly based on linear sensitivity of the biases as a function of time or forcing (Kharin et al. 2012). However, recent studies uncovered more fundamental problems using both low-order systems (Vannitsem 2011) and climate models, showing that the biases may display complicated non-linear variations under climate change. This last analysis focused on biases derived from the equilibrium climate sensitivity, thereby ignoring the effect of the transient climate sensitivity. Based on linear response theory, a general method of bias correction is therefore proposed that can be applied to any climate forcing scenario. The validity of the method is addressed using twin experiments with LOVECLIM, a climate model of intermediate complexity (Goosse et al. 2010). We evaluate to what extent the bias change is sensitive to the structure (frequency) of the applied forcing (here greenhouse gases) and whether the linear response theory is valid for global and/or local variables. To answer these questions, we perform large-ensemble simulations using different 300-year scenarios of forced carbon-dioxide concentrations. Reality and simulations are assumed to differ by a model error emulated as a parametric error in the wind drag or in the radiative scheme. References [1] H. Goosse et al., 2010: Description of the Earth system model of intermediate complexity LOVECLIM version 1.2, Geosci. Model Dev., 3, 603-633. [2] S. Vannitsem, 2011: Bias correction and post-processing under climate change, Nonlin. Processes Geophys., 18, 911-924. [3] V.V. Kharin, G. J. Boer, W. J. Merryfield, J. F. Scinocca, and W.-S.
Lee, 2012: Statistical adjustment of decadal predictions in a changing climate, Geophys. Res. Lett., 39, L19705.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, M. P.; Toto, T.; Troyan, D.
2015-01-01
The Midlatitude Continental Convective Clouds Experiment (MC3E) took place during the spring of 2011 centered in north-central Oklahoma, USA. The main goal of this field campaign was to capture the dynamical and microphysical characteristics of precipitating convective systems in the US Central Plains. A major component of the campaign was a six-site radiosonde array designed to capture the large-scale variability of the atmospheric state with the intent of deriving model forcing data sets. Over the course of the 46-day MC3E campaign, a total of 1362 radiosondes were launched from the enhanced sonde network. This manuscript provides details on the instrumentation used as part of the sounding array, the data processing activities including quality checks and humidity bias corrections, and an analysis of the impacts of bias correction and algorithm assumptions on the determination of convective levels and indices. It is found that corrections for known radiosonde humidity biases and assumptions regarding the characteristics of the surface convective parcel result in significant differences in the derived values of convective levels and indices in many soundings. In addition, the impact of including the humidity corrections and quality controls on the thermodynamic profiles that are used in the derivation of a large-scale model forcing data set is investigated. The results show a significant impact on the derived large-scale vertical velocity field, illustrating the importance of addressing these humidity biases.
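As a minimal illustration of the kind of humidity correction involved, the sketch below applies a multiplicative relative-humidity scale factor (a hypothetical dry-bias correction, not the campaign's actual correction) to a single sounding level and recomputes the water-vapor mixing ratio with a standard Magnus-type formula.

```python
import math

def saturation_vapor_pressure_hpa(t_c):
    """Magnus-type approximation (Bolton 1980); temperature in deg C."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def corrected_mixing_ratio(rh_pct, t_c, p_hpa, rh_scale=1.05):
    """Apply a multiplicative RH bias correction (rh_scale is a
    hypothetical factor compensating a sensor dry bias) and return the
    water-vapor mixing ratio in g/kg."""
    rh = min(rh_pct * rh_scale, 100.0)            # cap at saturation
    e = (rh / 100.0) * saturation_vapor_pressure_hpa(t_c)
    w = 0.622 * e / (p_hpa - e)                   # kg/kg
    return 1000.0 * w

w_raw = corrected_mixing_ratio(60.0, 25.0, 1000.0, rh_scale=1.0)
w_cor = corrected_mixing_ratio(60.0, 25.0, 1000.0, rh_scale=1.05)
```

Even a few percent of extra low-level moisture shifts the parcel buoyancy, which is why the derived convective indices and forcing fields respond so strongly to the correction.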
A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.
Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping
2017-03-01
Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods, which require bias-corrected MRI, we present a high-order, L0-regularized variational model for joint bias correction and brain extraction. The model is composed of a data fitting term, a piecewise constant regularization and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN on a number of metrics. With its high accuracy and efficiency, the proposed method can facilitate automatic processing of large-scale brain studies.
Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan
2016-07-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
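The bias of the arcsine transformation under overdispersion can be reproduced with a small Monte-Carlo sketch, using a beta-binomial model in which the intracluster correlation rho controls the overdispersion. This is illustrative only; the paper's analytic bias correction is not reproduced here.

```python
import math
import random

def arcsine_bias(p, n, rho, n_sim=20000, seed=0):
    """Monte-Carlo estimate of the bias of the arcsine-square-root
    transform under a beta-binomial model with intracluster
    correlation rho (rho -> 0 recovers the plain binomial)."""
    rng = random.Random(seed)
    # Beta(a, b) with mean p and ICC rho = 1 / (a + b + 1)
    s = 1.0 / rho - 1.0
    a, b = p * s, (1.0 - p) * s
    target = math.asin(math.sqrt(p))
    total = 0.0
    for _ in range(n_sim):
        p_i = rng.betavariate(a, b)
        x = sum(rng.random() < p_i for _ in range(n))   # Binomial(n, p_i)
        total += math.asin(math.sqrt(x / n))
    return total / n_sim - target

bias_small = arcsine_bias(0.2, 50, rho=0.01)
bias_large = arcsine_bias(0.2, 50, rho=0.10)
```

The transform is concave at p = 0.2, so the bias is negative, and it grows in magnitude roughly linearly with rho, consistent with the simulations described in the abstract. Note that the bias does not shrink with the number of groups combined, which is why coverage deteriorates as K grows.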
A Variational Approach to Simultaneous Image Segmentation and Bias Correction.
Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong
2015-08-01
This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.
Albanese, B.; Angermeier, P.L.; Gowan, C.
2003-01-01
Mark-recapture studies generate biased, or distance-weighted, movement data because short distances are sampled more frequently than long distances. Using models and field data, we determined how study design affects distance weighting and the movement distributions of stream fishes. We first modeled distance weighting as a function of recapture section length in an unbranching stream. The addition of an unsampled tributary to one of these models substantially increased distance weighting by decreasing the percentage of upstream distances that were sampled. Similarly, the presence of multiple tributaries in the field study resulted in severe bias. However, increasing recapture section length strongly affected distance weighting in both the model and the field study, producing a zone where the number of fish moving could be estimated with little bias. Subsampled data from the field study indicated that longer median (three of three species) and maximum distances (two of three species) can be detected by increasing the length of the recapture section. The effect was extreme for bluehead chub Nocomis leptocephalus, a highly mobile species, which exhibited a longer median distance (133 m versus 60 m), a longer maximum distance (1,144 m versus 708 m), and a distance distribution that differed in shape when the full (4,123-m recapture section) and subsampled (1,978-m recapture section) data sets were compared. Correction factors that adjust the observed number of movements to undersampled distances upwards and those to oversampled distances downwards could not mitigate the distance weighting imposed by the shorter recapture section. Future studies should identify the spatial scale over which movements can be accurately measured before data are collected. Increasing recapture section length a priori is far superior to using post hoc correction factors to reduce the influence of distance weighting on observed distributions. 
Implementing these strategies will be especially important in stream networks where fish can follow multiple pathways out of the recapture section.
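A simplified model of this distance weighting, assuming an unbranched section with uniformly distributed start points, shows how a post hoc correction factor depends on recapture section length. This is a sketch under those assumptions, not the authors' exact models.

```python
def detection_probability(d, section_length):
    """Probability that a movement of distance d ends inside the sampled
    section, for start points uniformly distributed along an unbranched
    recapture section of the given length: longer movements are sampled
    proportionally less often, which is the distance weighting."""
    if d >= section_length:
        return 0.0
    return (section_length - d) / section_length

def corrected_count(observed, d, section_length):
    """Weight observed movements of distance d upward by the inverse
    detection probability (a post hoc correction factor)."""
    p = detection_probability(d, section_length)
    if p == 0.0:
        raise ValueError("distance not sampled at this section length")
    return observed / p

# Doubling the recapture section sharply reduces the required correction
# (section lengths borrowed from the bluehead chub comparison):
short = corrected_count(10, d=700, section_length=1978)
long_ = corrected_count(10, d=700, section_length=4123)
```

The longer section needs a much smaller inflation of the observed count, which mirrors the paper's conclusion that lengthening the recapture section a priori beats post hoc correction.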
Richardson, Michael L; Petscavage, Jonelle M
2011-11-01
The sensitivity and specificity of magnetic resonance imaging (MRI) for diagnosis of meniscal tears has been studied extensively, with tears usually verified by surgery. However, surgically unverified cases are often not considered in these studies, leading to verification bias, which can falsely increase the sensitivity and decrease the specificity estimates. Our study suggests that such bias may be very common in the meniscal MRI literature, and illustrates techniques to detect and correct for such bias. PubMed was searched for articles estimating sensitivity and specificity of MRI for meniscal tears. These were assessed for verification bias, deemed potentially present if a study included any patients whose MRI findings were not surgically verified. Retrospective global sensitivity analysis (GSA) was performed when possible. Thirty-nine of the 314 studies retrieved from PubMed specifically dealt with meniscal tears. All 39 included unverified patients, and hence, potential verification bias. Only seven articles included sufficient information to perform GSA. Of these, one showed definite verification bias, two showed no bias, and four others showed bias within certain ranges of disease prevalence. Only 9 of 39 acknowledged the possibility of verification bias. Verification bias is underrecognized and potentially common in published estimates of the sensitivity and specificity of MRI for the diagnosis of meniscal tears. When possible, it should be avoided by proper study design. If unavoidable, it should be acknowledged. Investigators should tabulate unverified as well as verified data. Finally, verification bias should be estimated; if present, corrected estimates of sensitivity and specificity should be used. Our online web-based calculator makes this process relatively easy. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
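One standard correction of this type, under the assumption that verification depends only on the test result, extrapolates the disease prevalence seen in each verified test-result stratum to the unverified patients. The sketch below is a Begg-Greenes-style calculation with hypothetical counts, not the article's web calculator.

```python
def begg_greenes(tp, fn, fp, tn, n_pos_unverified, n_neg_unverified):
    """Correct sensitivity/specificity for verification bias, assuming
    verification (surgery) depends only on the MRI result. Verified
    counts (tp, fn, fp, tn) are extrapolated to unverified patients via
    the disease rate observed within each test-result stratum."""
    p_d_pos = tp / (tp + fp)     # P(tear | MRI positive), verified patients
    p_d_neg = fn / (fn + tn)     # P(tear | MRI negative), verified patients
    n_pos = tp + fp + n_pos_unverified
    n_neg = fn + tn + n_neg_unverified
    d_pos, d_neg = p_d_pos * n_pos, p_d_neg * n_neg   # expected diseased
    sens = d_pos / (d_pos + d_neg)
    spec = (n_neg - d_neg) / ((n_pos - d_pos) + (n_neg - d_neg))
    return sens, spec

# Hypothetical cohort in which MRI-negatives are rarely sent to surgery:
sens, spec = begg_greenes(tp=90, fn=5, fp=10, tn=45,
                          n_pos_unverified=20, n_neg_unverified=150)
```

With these numbers the naive verified-only estimates are 90/95 for sensitivity and 45/55 for specificity; the corrected values move in exactly the directions the abstract describes (sensitivity down, specificity up).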
Changes in the TRMM Version-5 and Version-6 Precipitation Radar Products Due to Orbit Boost
NASA Technical Reports Server (NTRS)
Liao, Liang; Meneghini, Robert
2010-01-01
The performance of the version-5 and version-6 Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) products before and after the satellite orbit boost is assessed through a series of comparisons with Weather Surveillance Radar (WSR)-88D ground-based radar in Melbourne, Florida. Analysis of the comparisons of radar reflectivity near the storm top from the ground radar and both versions of the PR indicates that the PR bias relative to the WSR radar at Melbourne is on the order of 1dB for both pre- and post-boost periods, indicating that the PR products maintain accurate calibration after the orbit boost. Comparisons with the WSR-88D near-surface reflectivity factors indicate that both versions of the PR products accurately correct for attenuation in stratiform rain. However, in convective rain, both versions exhibit negative biases in the near-surface radar reflectivity with version-6 products having larger negative biases than version-5. Rain rate comparisons between the ground and space radars show similar characteristics
Spatial scaling of net primary productivity using subpixel landcover information
NASA Astrophysics Data System (ADS)
Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.
2008-10-01
Gridding the land surface into coarse homogeneous pixels may cause important biases in ecosystem model estimates of carbon budget components at local, regional and global scales. These biases result from overlooking subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases in regional ecological modeling, especially when the modeling is made on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions, where the land surface is considered homogeneous within each pixel. The algorithm operates in such a way that NPP estimates obtained from calculations made at coarse spatial resolutions are multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Its application to estimates of a coupled carbon-hydrology model (BEPS-TerrainLab) made at a 1-km resolution over the Baohe River Basin, a watershed located in the southwestern part of the Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as of its spatial variability.
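One plausible form of such a multiplicative subpixel correction is sketched below, with hypothetical per-cover NPP look-up values: the coarse estimate, implicitly based on the dominant cover type, is rescaled by the cover-fraction-weighted NPP. This is an illustration of the idea, not the paper's exact correction functions.

```python
def corrected_coarse_npp(npp_coarse, cover_fractions, npp_by_cover):
    """Rescale a coarse-pixel NPP estimate by the ratio between the
    cover-fraction-weighted NPP and the NPP of the dominant cover type
    that the coarse run implicitly assumed homogeneous."""
    dominant = max(cover_fractions, key=cover_fractions.get)
    weighted = sum(f * npp_by_cover[c] for c, f in cover_fractions.items())
    return npp_coarse * weighted / npp_by_cover[dominant]

# Hypothetical subpixel composition and per-cover NPP values (gC/m2/yr):
fractions = {"forest": 0.6, "cropland": 0.3, "water": 0.1}
npp_lut = {"forest": 700.0, "cropland": 450.0, "water": 0.0}
npp_corr = corrected_coarse_npp(600.0, fractions, npp_lut)
```

Treating the whole pixel as forest overstates productivity here; the weighted correction pulls the estimate down toward the true mixed-cover mean.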
Neuropsychological and Cognitive Correlates of Recovery in Anorexia Nervosa.
Harper, Jessica A; Brodrick, Brooks; Van Enkevort, Erin; McAdams, Carrie J
2017-11-01
To identify clinical or cognitive measures either predictive of illness trajectory or altered with sustained weight recovery in adult women with anorexia nervosa. Participants were recruited from prior studies of women with anorexia nervosa (AN-C) and in weight-recovery following anorexia nervosa (AN-WR). Participants completed a neuropsychological battery at baseline and clinical assessments at both baseline and follow-up. Groups based on clinical outcome (continued eating disorder, AN-CC; newly in recovery, AN-CR; sustained weight-recovery, AN-WR) were compared by using one-way ANOVAs with Bonferroni-corrected post hoc comparisons. Women with continued eating disorder had poorer neuropsychological function and self-competence at baseline than AN-CR. AN-CR showed changes in depression and externalizing bias, a measure of self-related attributions. AN-WR differed from both AN-CC and AN-CR at baseline in externalizing bias, but only from AN-CC at outcome. Neuropsychological function when recently ill may be a prognostic factor, while externalizing bias may provide a clinical target for recovery. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.
NASA Astrophysics Data System (ADS)
Liu, Y.; Zhang, Y.; Wood, A.; Lee, H. S.; Wu, L.; Schaake, J. C.
2016-12-01
Seasonal precipitation forecasts are a primary driver for seasonal streamflow prediction, which is critical for a range of water resources applications, such as reservoir operations and drought management. However, it is well known that seasonal precipitation forecasts from climate models are often biased and also too coarse in spatial resolution for hydrologic applications. Therefore, post-processing procedures such as downscaling and bias correction are often needed. In this presentation, we discuss results from a recent study that applies a two-step methodology to downscale and correct the ensemble mean precipitation forecasts from the Climate Forecast System (CFS). First, CFS forecasts are downscaled and bias corrected using monthly reforecast analogs: we identify past precipitation forecasts that are similar to the current forecast, and then use the finer-scale observational analysis fields from the corresponding dates to represent the post-processed ensemble forecasts. Second, we construct the posterior distribution of forecast precipitation from the post-processed ensemble by integrating climate indices: a correlation analysis is performed to identify dominant climate indices for the study region, which are then used to weight the analysis analogs selected in the first step using a Bayesian approach. The methodology is applied to the California Nevada River Forecast Center (CNRFC) and the Middle Atlantic River Forecast Center (MARFC) regions for 1982-2015, using the North American Land Data Assimilation System (NLDAS-2) precipitation as the analysis. The results from cross validation show that the post-processed CFS precipitation forecasts are considerably more skillful than the raw CFS forecasts, even with the analog approach only. Integrating climate indices can further improve the skill if the number of ensemble members considered is large enough; however, the improvement is generally limited to the first couple of months when compared against climatology.
Impacts of various factors such as ensemble size, lead time, and choice of climate indices will also be discussed.
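The first, analog-based step of the two-step method can be sketched as follows. This is a toy one-dimensional version; the real method matches spatial forecast fields and adds the Bayesian climate-index weighting in the second step.

```python
import numpy as np

def analog_ensemble(current_fcst, past_fcsts, past_obs, k=5):
    """Select the k past forecasts closest to the current coarse
    forecast and return the matching finer-scale observed (analysis)
    values as the post-processed ensemble."""
    dist = np.abs(past_fcsts - current_fcst)
    idx = np.argsort(dist)[:k]
    return past_obs[idx]

# Toy 1-D example: forecasts run ~2 mm/day wetter than observations
past_fcsts = np.array([1.0, 3.0, 5.0, 7.0, 9.0, 11.0])
past_obs = past_fcsts - 2.0
ensemble = analog_ensemble(5.5, past_fcsts, past_obs, k=3)
```

Because the ensemble is built from observed fields on analog dates, it inherits both the observational spatial resolution and an implicit bias correction, which is why the post-processed forecasts beat the raw CFS in cross validation.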
Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.
Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A
2017-01-01
Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and one 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Should Studies of Diabetes Treatment Stratification Correct for Baseline HbA1c?
Jones, Angus G.; Lonergan, Mike; Henley, William E.; Pearson, Ewan R.; Hattersley, Andrew T.; Shields, Beverley M.
2016-01-01
Aims: Baseline HbA1c is a major predictor of response to glucose-lowering therapy and therefore a potential confounder in studies aiming to identify other predictors. However, baseline adjustment may introduce error if the association between baseline HbA1c and response is substantially due to measurement error and regression to the mean. We aimed to determine whether studies of predictors of response should adjust for baseline HbA1c. Methods: We assessed the relationship between baseline HbA1c and glycaemic response in 257 participants treated with GLP-1R agonists and assessed whether it reflected measurement error and regression to the mean using duplicate 'pre-baseline' HbA1c measurements not included in the response variable. In this cohort and an additional 2659 participants treated with sulfonylureas, we assessed the relationship between covariates associated with baseline HbA1c and treatment response with and without baseline adjustment, and with a bias correction using pre-baseline HbA1c to adjust for the effects of error in baseline HbA1c. Results: Baseline HbA1c was a major predictor of response (R² = 0.19, β = -0.44, p < 0.001). The association between pre-baseline HbA1c and response was similar, suggesting the greater response at higher baseline HbA1c values is not mainly due to measurement error and subsequent regression to the mean. In unadjusted analysis in both cohorts, factors associated with baseline HbA1c were associated with response; however, these associations were weak or absent after adjustment for baseline HbA1c. Bias correction did not substantially alter associations. Conclusions: Adjustment for the baseline HbA1c measurement is a simple and effective way to reduce bias in studies of predictors of response to glucose-lowering therapy. PMID:27050911
Use of regional climate model output for hydrologic simulations
Hay, L.E.; Clark, M.P.; Wilby, R.L.; Gutowski, W.J.; Leavesley, G.H.; Pan, Z.; Arritt, R.W.; Takle, E.S.
2002-01-01
Daily precipitation and maximum and minimum temperature time series from a regional climate model (RegCM2), configured using the continental United States as a domain and run at an approximately 52-km spatial resolution, were used as input to a distributed hydrologic model for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; east fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily datasets of precipitation and maximum and minimum temperature were developed from measured data for each basin. These datasets included precipitation and temperature data for all stations (hereafter, All-Sta) located within the area of the RegCM2 output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and All-Sta data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and All-Sta-based simulations of runoff show little skill on a daily basis [Nash-Sutcliffe (NS) values range from 0.05 to 0.37 for RegCM2 and -0.08 to 0.65 for All-Sta]. When the precipitation and temperature biases are corrected in the RegCM2 output and All-Sta data (Bias-RegCM2 and Bias-All, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins (NS values range from 0.41 to 0.66 for RegCM2 and 0.60 to 0.76 for All-Sta). In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09) whereas Bias-All simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that measured data at the coarse resolution of the RegCM2 output can be made appropriate for basin-scale modeling through bias correction (essentially a magnitude correction).
However, RegCM2 output, even when bias corrected, does not contain the day-to-day variability present in the All-Sta dataset that is necessary for basin-scale modeling. Future work is warranted to identify the causes for systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
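The Nash-Sutcliffe (NS) skill score used throughout these comparisons, and the effect of a simple magnitude (bias) correction on it, can be computed as follows; the flow values are made up for illustration.

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of residual variance
    to the variance of the observations. NSE = 1 is a perfect fit;
    NSE <= 0 means the model is no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical daily runoff (mm) with a constant +5 magnitude bias:
obs = [10.0, 12.0, 8.0, 20.0, 15.0]
sim_biased = [o + 5.0 for o in obs]
sim_corrected = [s - 5.0 for s in sim_biased]   # magnitude correction

nse_biased = nash_sutcliffe(obs, sim_biased)
nse_corrected = nash_sutcliffe(obs, sim_corrected)
```

A pure magnitude bias alone can drive NSE below zero even when the day-to-day variability is perfect; conversely, as the paper notes, a magnitude correction cannot restore day-to-day variability that the model never produced.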
Iterative Magnetometer Calibration
NASA Technical Reports Server (NTRS)
Sedlak, Joseph
2006-01-01
This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.
Janmaat, Cynthia J; van Diepen, Merel; Krediet, Raymond T; Hemmelder, Marc H; Dekker, Friedo W
2017-01-01
Purpose Current clinical guidelines recommend to initiate dialysis in the presence of symptoms or signs attributable to kidney failure, often with a glomerular filtration rate (GFR) of 5–10 mL/min/1.73 m2. Little evidence exists about the optimal kidney function to start dialysis. Thus far, most observational studies have been limited by lead-time bias. Only a few studies have accounted for lead-time bias, and showed contradictory results. We examined the effect of GFR at dialysis initiation on survival in chronic kidney disease patients, and the role of lead-time bias therein. We used both kidney function based on 24-hour urine collection (measured GFR [mGFR]) and estimated GFR (eGFR). Materials and methods A total of 1,143 patients with eGFR data at dialysis initiation and 852 patients with mGFR data were included from the NECOSAD cohort. Cox regression was used to adjust for potential confounders. To examine the effect of lead-time bias, survival was counted from the time of dialysis initiation or from a common starting point (GFR 20 mL/min/1.73 m2), using linear interpolation models. Results Without lead-time correction, no difference between early and late starters was present based on eGFR (hazard ratio [HR] 1.03, 95% confidence interval [CI] 0.81–1.3). However, after lead-time correction, early initiation showed a survival disadvantage (HR between 1.1 [95% CI 0.82–1.48] and 1.33 [95% CI 1.05–1.68]). Based on mGFR, the potential survival benefit for early starters without lead-time correction (HR 0.8, 95% CI 0.62–1.03) completely disappeared after lead-time correction (HR between 0.94 [95% CI 0.65–1.34] and 1.21 [95% CI 0.95–1.56]). Dialysis start time differed about a year between early and late initiation. Conclusion Lead-time bias is not only a methodological problem but also has clinical impact when assessing the optimal kidney function to start dialysis. Therefore, lead-time bias is extremely important to correct for. 
Taking account of lead-time bias, this controlled study showed that early dialysis initiation (eGFR >7.9, mGFR >6.6 mL/min/1.73 m2) was not associated with an improvement in survival. Based on kidney function, this study suggests that in some patients, dialysis could be started even later than an eGFR <5.7 and mGFR <4.3 mL/min/1.73 m2. PMID:28442934
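The lead-time correction described above, counting survival from a common GFR starting point rather than from dialysis initiation, can be sketched with linear interpolation of each patient's GFR trajectory. Times and GFR values below are hypothetical.

```python
def time_at_common_gfr(gfr_times, gfr_values, common_gfr=20.0):
    """Linearly interpolate the time (years) at which a declining GFR
    trajectory crossed the common starting point (e.g. 20 mL/min/1.73 m2),
    so survival can be counted from that moment for every patient."""
    points = list(zip(gfr_times, gfr_values))
    for (t0, g0), (t1, g1) in zip(points, points[1:]):
        if g0 >= common_gfr >= g1:
            frac = (g0 - common_gfr) / (g0 - g1)
            return t0 + frac * (t1 - t0)
    raise ValueError("GFR never crosses the common starting point")

# Early starter: begins dialysis at GFR 10 at t = 2.0 years of follow-up
t_cross = time_at_common_gfr([0.0, 1.0, 2.0], [30.0, 22.0, 10.0])
lead_time = 2.0 - t_cross   # extra survival credited before dialysis start
```

An early starter accrues this lead time as apparent extra survival if follow-up is counted from dialysis initiation; adding it back to the late starters (or, as here, starting everyone's clock at the common GFR) is what removes the bias.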
A model-based correction for outcome reporting bias in meta-analysis.
Copas, John; Dwan, Kerry; Kirkham, Jamie; Williamson, Paula
2014-04-01
It is often suspected (or known) that outcomes published in medical trials are selectively reported. A systematic review for a particular outcome of interest can only include studies where that outcome was reported, and so may omit, for example, a study that has considered several outcome measures but only reports those giving significant results. Using the methodology of the Outcome Reporting Bias (ORB) in Trials study (Kirkham and others, 2010. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. British Medical Journal 340, c365), we suggest a likelihood-based model for estimating the effect of ORB on confidence intervals and p-values in meta-analysis. Correcting for bias moves estimated treatment effects toward the null and hence yields more cautious assessments of significance. The bias can be very substantial, sometimes sufficient to completely overturn previous claims of significance. We re-analyze two contrasting examples, and derive a simple fixed effects approximation that can be used to give an initial estimate of the effect of ORB in practice.
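For orientation, the summary that the ORB adjustment modifies is ordinary inverse-variance fixed-effect pooling. The sketch below shows only that baseline with invented effect sizes; the likelihood-based ORB correction itself, which shifts the pooled estimate toward the null and widens the interval, is developed in the paper.

```python
import math

def fixed_effect(estimates, ses):
    """Inverse-variance fixed-effect pooled estimate and its standard
    error; an ORB-style correction adjusts this kind of summary for
    selectively unreported outcomes."""
    w = [1.0 / se ** 2 for se in ses]
    pooled = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    return pooled, math.sqrt(1.0 / sum(w))

est, se = fixed_effect([0.3, 0.5, 0.1], [0.1, 0.2, 0.15])
ci = (est - 1.96 * se, est + 1.96 * se)  # naive interval, before any
                                         # outcome-reporting-bias correction
```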
Nelson, Jon P
2014-01-01
Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting the number of estimates per study, author-restricted samples, and author-specific variables. Publication bias is addressed using funnel graphs, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, roughly 50% less elastic than the values commonly used in alcohol tax policy simulations. Copyright © 2013 Elsevier B.V. All rights reserved.
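Egger's intercept model, one of the publication-bias diagnostics named above, can be sketched in a few lines: regress the standardized effect (estimate/SE) on precision (1/SE) and inspect the intercept, which should be near zero in the absence of small-study asymmetry. The numbers below are invented for illustration, not the beer-elasticity data.

```python
def egger_intercept(effects, ses):
    """Intercept of the regression of (effect/SE) on (1/SE); values far
    from zero suggest funnel-plot asymmetry, i.e. possible publication
    bias. Plain least squares, no external dependencies."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx

# Effects constructed so that effect = 2*SE - 0.2 exactly, i.e. pure
# small-study asymmetry with intercept 2:
b0 = egger_intercept([1.8, 0.8, 0.3], [1.0, 0.5, 0.25])  # -> 2.0
```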
Electrostatic focal spot correction for x-ray tubes operating in strong magnetic fields.
Lillaney, Prasheel; Shin, Mihye; Hinshaw, Waldo; Fahrig, Rebecca
2014-11-01
A close proximity hybrid x-ray/magnetic resonance (XMR) imaging system offers several critical advantages over current XMR system installations that have large separation distances (∼5 m) between the imaging fields of view. The two imaging systems can be placed in close proximity to each other if an x-ray tube can be designed to be immune to the magnetic fringe fields outside of the MR bore. One of the major obstacles to robust x-ray tube design is correcting for the effects of the MR fringe field on the x-ray tube focal spot. Any fringe field component orthogonal to the x-ray tube electric field leads to electron drift altering the path of the electron trajectories. The method proposed in this study to correct for the electron drift utilizes an external electric field in the direction of the drift. The electric field is created using two electrodes that are positioned adjacent to the cathode. These electrodes are biased with positive and negative potential differences relative to the cathode. The design of the focusing cup assembly is constrained primarily by the strength of the MR fringe field and high voltage standoff distances between the anode, cathode, and the bias electrodes. From these constraints, a focusing cup design suitable for the close proximity XMR system geometry is derived, and a finite element model of this focusing cup geometry is simulated to demonstrate efficacy. A Monte Carlo simulation is performed to determine any effects of the modified focusing cup design on the output x-ray energy spectrum. An orthogonal fringe field magnitude of 65 mT can be compensated for using bias voltages of +15 and -20 kV. These bias voltages are not sufficient to completely correct for larger orthogonal field magnitudes. Using active shielding coils in combination with the bias electrodes provides complete correction at an orthogonal field magnitude of 88.1 mT. 
Introducing small fields (<10 mT) parallel to the x-ray tube electric field in addition to the orthogonal field does not affect the electrostatic correction technique. However, rotation of the x-ray tube by 30° toward the MR bore increases the parallel magnetic field magnitude (∼72 mT). The presence of this larger parallel field along with the orthogonal field leads to incomplete correction. Monte Carlo simulations demonstrate that the mean energy of the x-ray spectrum is not noticeably affected by the electrostatic correction, but the output flux is reduced by 7.5%. The maximum orthogonal magnetic field magnitude that can be compensated for using the proposed design is 65 mT. Larger orthogonal field magnitudes cannot be completely compensated for because a pure electrostatic approach is limited by the dielectric strength of the vacuum inside the x-ray tube insert. The electrostatic approach also suffers from limitations when there are strong magnetic fields in both the orthogonal and parallel directions because the electrons prefer to stay aligned with the parallel magnetic field. These challenging field conditions can be addressed by using a hybrid correction approach that utilizes both active shielding coils and biasing electrodes.
Electrostatic focal spot correction for x-ray tubes operating in strong magnetic fields
Lillaney, Prasheel; Shin, Mihye; Hinshaw, Waldo; Fahrig, Rebecca
2014-01-01
Purpose: A close proximity hybrid x-ray/magnetic resonance (XMR) imaging system offers several critical advantages over current XMR system installations that have large separation distances (∼5 m) between the imaging fields of view. The two imaging systems can be placed in close proximity to each other if an x-ray tube can be designed to be immune to the magnetic fringe fields outside of the MR bore. One of the major obstacles to robust x-ray tube design is correcting for the effects of the MR fringe field on the x-ray tube focal spot. Any fringe field component orthogonal to the x-ray tube electric field leads to electron drift altering the path of the electron trajectories. Methods: The method proposed in this study to correct for the electron drift utilizes an external electric field in the direction of the drift. The electric field is created using two electrodes that are positioned adjacent to the cathode. These electrodes are biased with positive and negative potential differences relative to the cathode. The design of the focusing cup assembly is constrained primarily by the strength of the MR fringe field and high voltage standoff distances between the anode, cathode, and the bias electrodes. From these constraints, a focusing cup design suitable for the close proximity XMR system geometry is derived, and a finite element model of this focusing cup geometry is simulated to demonstrate efficacy. A Monte Carlo simulation is performed to determine any effects of the modified focusing cup design on the output x-ray energy spectrum. Results: An orthogonal fringe field magnitude of 65 mT can be compensated for using bias voltages of +15 and −20 kV. These bias voltages are not sufficient to completely correct for larger orthogonal field magnitudes. Using active shielding coils in combination with the bias electrodes provides complete correction at an orthogonal field magnitude of 88.1 mT. 
Introducing small fields (<10 mT) parallel to the x-ray tube electric field in addition to the orthogonal field does not affect the electrostatic correction technique. However, rotation of the x-ray tube by 30° toward the MR bore increases the parallel magnetic field magnitude (∼72 mT). The presence of this larger parallel field along with the orthogonal field leads to incomplete correction. Monte Carlo simulations demonstrate that the mean energy of the x-ray spectrum is not noticeably affected by the electrostatic correction, but the output flux is reduced by 7.5%. Conclusions: The maximum orthogonal magnetic field magnitude that can be compensated for using the proposed design is 65 mT. Larger orthogonal field magnitudes cannot be completely compensated for because a pure electrostatic approach is limited by the dielectric strength of the vacuum inside the x-ray tube insert. The electrostatic approach also suffers from limitations when there are strong magnetic fields in both the orthogonal and parallel directions because the electrons prefer to stay aligned with the parallel magnetic field. These challenging field conditions can be addressed by using a hybrid correction approach that utilizes both active shielding coils and biasing electrodes. PMID:25370658
Sucunza, Federico; Danilewicz, Daniel; Cremer, Marta; Andriolo, Artur; Zerbini, Alexandre N
2018-01-01
Estimation of visibility bias is critical to accurately compute abundance of wild populations. The franciscana, Pontoporia blainvillei, is considered the most threatened small cetacean in the southwestern Atlantic Ocean. Aerial surveys are considered the most effective method to estimate abundance of this species, but many existing estimates have been considered unreliable because they lack proper estimation of correction factors for visibility bias. In this study, helicopter surveys were conducted to determine surfacing-diving intervals of franciscanas and to estimate availability for aerial platforms. Fifteen hours were flown and 101 groups of 1 to 7 franciscanas were monitored, resulting in a sample of 248 surface-dive cycles. The mean surfacing interval and diving interval times were 16.10 seconds (SE = 9.74) and 39.77 seconds (SE = 29.06), respectively. Availability was estimated at 0.39 (SE = 0.01), a value 16-46% greater than estimates computed from diving parameters obtained from boats or from land. Generalized mixed-effects models were used to investigate the influence of biological and environmental predictors on the proportion of time franciscana groups are visually available to be seen from an aerial platform. These models revealed that group size was the main factor influencing the proportion at surface. The use of negatively biased estimates of availability results in overestimation of abundance, leads to overly optimistic assessments of extinction probabilities and to potentially ineffective management actions. This study demonstrates that estimates of availability must be computed from suitable platforms to ensure proper conservation decisions are implemented to protect threatened species such as the franciscana.
Danilewicz, Daniel; Cremer, Marta; Andriolo, Artur; Zerbini, Alexandre N.
2018-01-01
Estimation of visibility bias is critical to accurately compute abundance of wild populations. The franciscana, Pontoporia blainvillei, is considered the most threatened small cetacean in the southwestern Atlantic Ocean. Aerial surveys are considered the most effective method to estimate abundance of this species, but many existing estimates have been considered unreliable because they lack proper estimation of correction factors for visibility bias. In this study, helicopter surveys were conducted to determine surfacing-diving intervals of franciscanas and to estimate availability for aerial platforms. Fifteen hours were flown and 101 groups of 1 to 7 franciscanas were monitored, resulting in a sample of 248 surface-dive cycles. The mean surfacing interval and diving interval times were 16.10 seconds (SE = 9.74) and 39.77 seconds (SE = 29.06), respectively. Availability was estimated at 0.39 (SE = 0.01), a value 16–46% greater than estimates computed from diving parameters obtained from boats or from land. Generalized mixed-effects models were used to investigate the influence of biological and environmental predictors on the proportion of time franciscana groups are visually available to be seen from an aerial platform. These models revealed that group size was the main factor influencing the proportion at surface. The use of negatively biased estimates of availability results in overestimation of abundance, leads to overly optimistic assessments of extinction probabilities and to potentially ineffective management actions. This study demonstrates that estimates of availability must be computed from suitable platforms to ensure proper conservation decisions are implemented to protect threatened species such as the franciscana. PMID:29534086
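The availability correction in the two franciscana abstracts above reduces, in its simplest form, to two ratios. The naive availability estimator is the mean surface interval over the full surface-dive cycle; the study's estimate (0.39) is larger because it also accounts for the time a group remains detectable within the observer's viewing window. Raw abundance is then divided by availability, so an underestimated availability inflates abundance, as the authors warn. A minimal sketch, with a hypothetical raw count:

```python
# Naive availability: fraction of the surface-dive cycle spent at the
# surface. The study's estimator (0.39) additionally accounts for the
# time a group stays visible in the aerial observer's window.
def availability(mean_surface_s, mean_dive_s):
    return mean_surface_s / (mean_surface_s + mean_dive_s)

def corrected_abundance(raw_estimate, avail):
    # dividing by a too-small availability overestimates abundance
    return raw_estimate / avail

a_naive = availability(16.10, 39.77)   # ~0.29 from the surfacing data alone
n = corrected_abundance(1000, 0.39)    # hypothetical raw count of 1000
```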
Biases in cost measurement for economic evaluation studies in health care.
Jacobs, P; Baladi, J F
1996-01-01
This paper addresses the issue of biases in the cost measures used in economic evaluation studies. The basic measure of hospital cost used by most investigators is unit cost. Focusing on this measure, we identified a set of criteria that the basic measure must fulfil in order to approximate the marginal cost (MC) of a service for the relevant product at a representative site. We then identified four distinct biases--a scale bias, a case mix bias, a methods bias and a site selection bias--each of which reflects the divergence of the unit cost measure from the desired MC measure. Measures are proposed for several of these biases, and it is suggested how they can be corrected.
NASA Astrophysics Data System (ADS)
Wang, K.; Luo, Y.; Yang, Y.
2016-12-01
We collect two months of ambient noise data recorded by 35 broadband seismic stations in a 9×11 km area near Karamay, China, and compute cross-correlations of the noise data between all station pairs. Array beamforming analysis of the ambient noise data shows that the noise sources are unevenly distributed, with the most energetic ambient noise arriving from azimuths of 40°-70°. As a consequence of these strong directional noise sources, surface wave waveforms of the cross-correlations at 1-5 Hz show a clear azimuthal dependence, and direct dispersion measurements from the cross-correlations are strongly biased by the dominant noise energy. Because of this bias, the dispersion measurements do not accurately reflect the interstation velocities of surface waves propagating directly from one station to the other; that is, the cross-correlation functions do not accurately retrieve the Empirical Green's Functions. To correct the bias caused by the unevenly distributed noise sources, we adopt an iterative inversion procedure based on plane-wave modeling, comprising three steps: (1) surface wave tomography, (2) estimation of ambient noise energy and (3) phase velocity correction. First, we use synthesized data to test the efficiency and stability of the iterative procedure for both homogeneous and heterogeneous media. The tests show that: (1) the phase velocity bias caused by directional noise sources is significant, reaching 2% and 10% for homogeneous and heterogeneous media, respectively; (2) the bias can be corrected by the iterative inversion procedure, with convergence depending on the starting phase velocity map and the complexity of the media.
By applying the iterative approach to the real data in Karamay, we further show that phase velocity maps converge after ten iterations and the phase velocity map based on corrected interstation dispersion measurements are more consistent with results from geology surveys than those based on uncorrected ones. As ambient noise in high frequency band (>1Hz) is mostly related to human activities or climate events, both of which have strong directivity, the iterative approach demonstrated here helps improve the accuracy and resolution of ANT in imaging shallow earth structures.
Pérez, Adriana; Gabriel, Kelley; Nehme, Eileen K; Mandell, Dorothy J; Hoelscher, Deanna M
2015-07-27
Evidence regarding bias, precision, and accuracy in adolescent self-reported height and weight across demographic subpopulations is lacking. The bias, precision, and accuracy of adolescent self-reported height and weight across subpopulations were examined using a large, diverse and representative sample of adolescents. A second objective was to develop correction equations for self-reported height and weight to provide more accurate estimates of body mass index (BMI) and weight status. A total of 24,221 students from 8th and 11th grade in Texas participated in the School Physical Activity and Nutrition (SPAN) surveillance system in years 2000-2002 and 2004-2005. To assess bias, the differences between the self-reported and objective measures of height and weight were estimated. To assess precision and accuracy, Lin's concordance correlation coefficient was used. BMI was estimated from self-reported and objective measures. The prevalence of students' weight status was estimated using self-reported and objective measures; absolute (bias) and relative error (relative bias) were assessed subsequently. Correction equations for sex and race/ethnicity subpopulations were developed to estimate objective measures of height, weight and BMI from self-reported measures using weighted linear regression. Sensitivity, specificity and positive predictive values of weight status classification using self-reported measures and correction equations were assessed by sex and grade. Students in 8th and 11th grade overestimated their height by 0.68 cm (White girls) to 2.02 cm (African-American boys), and underestimated their weight by 0.4 kg (Hispanic girls) to 0.98 kg (African-American girls). The differences in self-reported versus objectively-measured height and weight resulted in underestimation of BMI ranging from -0.23 kg/m2 (White boys) to -0.7 kg/m2 (African-American girls).
The sensitivity of self-reported measures to classify weight status as obese was 70.8% and 81.9% for 8th- and 11th-graders, respectively. These estimates increased when using the correction equations to 77.4% and 84.4% for 8th- and 11th-graders, respectively. When direct measurement is not practical, self-reported measurements provide a reliable proxy measure across grade, sex and race/ethnicity subpopulations of adolescents. Correction equations increase the sensitivity of self-report measures to identify prevalence of overall overweight/obesity status.
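A correction equation of the kind described above reduces, in its simplest form, to a per-subgroup linear map from self-reported to measured values, after which BMI is recomputed. The coefficients below are invented placeholders, not the SPAN estimates, which were fitted separately by sex and race/ethnicity with weighted regression.

```python
# Hypothetical correction equations (coefficients are illustrative only;
# the study fitted sex- and race/ethnicity-specific weighted linear
# regressions of measured values on self-reported values).
def corrected_bmi(self_height_cm, self_weight_kg,
                  h_coef=(2.0, 0.985), w_coef=(-0.5, 1.02)):
    height_cm = h_coef[0] + h_coef[1] * self_height_cm  # measured ~ a + b*self
    weight_kg = w_coef[0] + w_coef[1] * self_weight_kg
    return weight_kg / (height_cm / 100.0) ** 2  # BMI in kg/m2

bmi = corrected_bmi(170.0, 60.0)  # adjusts the typically over-reported
                                  # height and under-reported weight
```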
Parametric study of statistical bias in laser Doppler velocimetry
NASA Technical Reports Server (NTRS)
Gould, Richard D.; Stevenson, Warren H.; Thompson, H. Doyle
1989-01-01
Analytical studies have often assumed that LDV velocity bias depends on turbulence intensity in conjunction with one or more characteristic time scales, such as the time between validated signals, the time between data samples, and the integral turbulence time-scale. These parameters are presently varied independently, in an effort to quantify the biasing effect. Neither of the post facto correction methods employed is entirely accurate. The mean velocity bias error is found to be nearly independent of data validation rate.
Eliminating Bias In Acousto-Optical Spectrum Analysis
NASA Technical Reports Server (NTRS)
Ansari, Homayoon; Lesh, James R.
1992-01-01
Scheme for digital processing of video signals in acousto-optical spectrum analyzer provides real-time correction for signal-dependent spectral bias. Spectrum analyzer described in "Two-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18092); related apparatus described in "Three-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18122). Essence of correction is to average over digitized outputs of pixels in each CCD row and to subtract this average from the digitized output of each pixel in the row. Signal processed electro-optically with reference-function signals to form two-dimensional spectral image in CCD camera.
Some comments on Anderson and Pospahala's correction of bias in line transect sampling
Anderson, D.R.; Burnham, K.P.; Chain, B.R.
1980-01-01
ANDERSON and POSPAHALA (1970) investigated the estimation of wildlife population size using the belt or line transect sampling method and devised a correction for bias, thus leading to an estimator with interesting characteristics. This work was given a uniform mathematical framework in BURNHAM and ANDERSON (1976). In this paper we show that the ANDERSON-POSPAHALA estimator is optimal in the sense of being the (unique) best linear unbiased estimator within the class of estimators which are linear combinations of cell frequencies, provided certain assumptions are met.
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of the bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
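The attenuation mechanism behind this bias is easy to demonstrate by simulation: measurement error in the predictor shrinks the OLS slope toward zero by the reliability factor, and dividing by that factor, which is the essence of the EIV correction, recovers the true effect. The self-contained sketch below uses simulated data, not the authors' KXRF analysis.

```python
import random

# True model: y = 2*x + noise, but x is observed with error of variance 1.
random.seed(1)
n, beta = 50_000, 2.0
x_true = [random.gauss(0, 1) for _ in range(n)]
x_obs = [xi + random.gauss(0, 1) for xi in x_true]   # measurement error
y = [beta * xi + random.gauss(0, 0.5) for xi in x_true]

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / sum((a - mx) ** 2 for a in xs)

naive = ols_slope(x_obs, y)       # attenuated toward beta * 0.5 = 1.0
reliability = 1.0 / (1.0 + 1.0)   # var(true) / (var(true) + var(error))
corrected = naive / reliability   # close to the true beta = 2.0
```

In practice the reliability must itself be estimated, e.g. from the instrument's reported uncertainties as the paper proposes, which is why EIV estimates are unbiased but less precise than OLS.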
Correcting C IV-based virial black hole masses
NASA Astrophysics Data System (ADS)
Coatman, Liam; Hewett, Paul C.; Banerji, Manda; Richards, Gordon T.; Hennawi, Joseph F.; Prochaska, J. Xavier
2017-02-01
The C IV λλ1548,1550 broad emission line is visible in optical spectra to redshifts exceeding z ˜ 5. C IV has long been known to exhibit significant displacements to the blue, and these `blueshifts' almost certainly signal the presence of strong outflows. As a consequence, single-epoch virial black hole (BH) mass estimates derived from C IV velocity widths are known to be systematically biased compared to masses from the hydrogen Balmer lines. Using a large sample of 230 high-luminosity (LBol = 10^45.5-10^48 erg s^-1), redshift 1.5 < z < 4.0 quasars with both C IV and Balmer line spectra, we have quantified the bias in C IV BH masses as a function of the C IV blueshift. C IV BH masses are shown to be a factor of 5 larger than the corresponding Balmer-line masses at C IV blueshifts of 3000 km s^-1, and are overestimated by almost an order of magnitude at the most extreme blueshifts, ≳5000 km s^-1. Using the monotonically increasing relationship between the C IV blueshift and the mass ratio BH(C IV)/BH(Hα), we derive an empirical correction to all C IV BH masses. The scatter between the corrected C IV masses and the Balmer masses is 0.24 dex at low C IV blueshifts (˜0 km s^-1) and just 0.10 dex at high blueshifts (˜3000 km s^-1), compared to 0.40 dex before the correction. The correction depends only on the C IV line properties - i.e. full width at half-maximum and blueshift - and can therefore be applied to all quasars where C IV emission line properties have been measured, enabling the derivation of unbiased virial BH-mass estimates for the majority of high-luminosity, high-redshift, spectroscopically confirmed quasars in the literature.
Mirzazadeh, Ali; Nedjat, Saharnaz; Navadeh, Soodabeh; Haghdoost, Aliakbar; Mansournia, Mohammad-Ali; McFarland, Willi; Mohammad, Kazem
2014-01-01
In a national, facility-based survey of female sex workers in 14 cities of Iran (N = 872), HIV prevalence was measured at 4.5 % (95 % CI, 2.4-8.3) overall and at 11.2 % (95 % CI, 3.4-18.9) for FSW with a history of injection drug use. Using methods to correct for biases in reporting sensitive information, the estimate of unprotected sex in last act was 35.8 %, ever injecting drugs was 37.6 %, sexually transmitted disease symptoms was 82.1 %, and not testing for HIV in the last year was 64.0 %. The amount of bias correction ranged from <1 to >30 %, in parallel with the level of stigma associated with each behavior. Considering the current upward trajectory of HIV infection in the Middle East and North Africa region, as well as the ongoing high level of risky behaviors and considerable underreporting of many such behaviors in surveys, bias corrections may be needed, especially in the context of Iran, to obtain more accurate information to guide prevention and care responses to stop the growing HIV epidemic in this vulnerable group of women.
Restoration of MRI Data for Field Nonuniformities using High Order Neighborhood Statistics
Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert
2007-01-01
MRI at high magnetic fields (>3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the “bias field”. These lead to nonuniformity of image intensity, greatly complicating further analysis such as registration and segmentation. Existing methods for bias field correction are effective for 1.5 T or 3.0 T MRI, but are not completely satisfactory for higher field data. This paper develops an effective bias field correction for high field MRI based on the assumption that the nonuniformity is smoothly varying in space. The nonuniformity is quantified and unmixed using high order neighborhood statistics of intensity cooccurrences, computed within spherical windows of limited size over the entire image. The restoration is iterative and makes use of a novel stable stopping criterion that depends on the scaled entropy of the cooccurrence statistics (the Shannon entropy normalized to the effective dynamic range of the image), which is a non-monotonic function of the iterations. The algorithm restores whole head data, is robust to the intense nonuniformities present in high field acquisitions, and is robust to variations in anatomy. It significantly improves bias field correction in comparison to N3 on phantom 1.5 T head data and high field 4 T human head data. PMID:18193095
Evaluating NMME Seasonal Forecast Skill for use in NASA SERVIR Hub Regions
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Roberts, Franklin R.
2013-01-01
The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The coupled forecasts have numerous potential applications, both national and international in scope. The NASA/USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in driving applications models in hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. A prerequisite for seasonal forecast use in application modeling (e.g. hydrology, agriculture) is bias correction and skill assessment. Efforts to address systematic biases and multi-model combination in support of NASA SERVIR impact modeling requirements will be highlighted. Specifically, quantile-quantile mapping for bias correction has been implemented for all archived NMME hindcasts. Both deterministic and probabilistic skill estimates for raw, bias-corrected, and multi-model ensemble forecasts as a function of forecast lead will be presented for temperature and precipitation. Complementing this statistical assessment will be case studies of significant events, for example, the ability of the NMME forecast suite to anticipate the 2010/2011 drought in the Horn of Africa and its relationship to evolving SST patterns.
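The quantile-quantile mapping step mentioned above can be sketched as an empirical lookup: each forecast value is replaced by the observed climatological value at the same empirical quantile. This is a minimal nearest-rank version with toy climatologies; operational implementations interpolate between quantiles and handle values outside the hindcast range.

```python
import bisect

def quantile_map(value, model_climatology, obs_climatology):
    """Map `value` from the model distribution onto the observed one.
    Both climatologies must be sorted; nearest-rank, no interpolation."""
    # empirical quantile of `value` within the model climatology
    q = (bisect.bisect_left(model_climatology, value)
         / max(len(model_climatology) - 1, 1))
    # observed value at (approximately) the same quantile
    idx = min(round(q * (len(obs_climatology) - 1)), len(obs_climatology) - 1)
    return obs_climatology[idx]

model = sorted([0.0, 1.0, 2.0, 3.0, 4.0])  # hindcast values, e.g. mm/day
obs = sorted([0.0, 2.0, 4.0, 6.0, 8.0])    # observed values
corrected = quantile_map(3.0, model, obs)  # 75th percentile -> 6.0
```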
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
NASA Astrophysics Data System (ADS)
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran
2018-05-01
Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of the magnitude and temporal variance. Correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
First Impressions of CARTOSAT-1
NASA Technical Reports Server (NTRS)
Lutes, James
2007-01-01
CARTOSAT-1 RPCs need special handling. Absolute accuracy of uncontrolled scenes is poor (biases > 300 m), and there is a noticeable cross-track scale error (+/- 3-4 m across a stereo pair). Most errors are either biases or linear in line/sample, which makes them easier to correct with ground control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czarnecki, D; Voigts-Rhetz, P von; Zink, K
2016-06-15
Purpose: The impact of removing the flattening filter on absolute dosimetry based on IAEA's TRS-398 and AAPM's TG-51 was investigated in this study using Monte Carlo simulations. Methods: The EGSnrc software package was used for all Monte Carlo simulations performed in this work. Five different ionization chambers and nine linear accelerator heads were modeled according to technical drawings. To generate a flattening filter free radiation field, the flattening filter was replaced by a 2 mm thick aluminum layer. Dose calculations in a water phantom were performed to calculate the beam quality correction factor k_Q as a function of the beam quality specifiers %dd(10)_x, TPR_20,10 and the mean photon and electron energies at the point of measurement, in photon fields with (WFF) and without (FFF) flattening filter. Results: The beam quality correction factor as a function of %dd(10)_x differs systematically between FFF and WFF beams for all investigated ionization chambers. The largest difference of 1.8% was observed for the largest investigated Farmer-type ionization chamber, with a sensitive volume of 0.69 cm³. For ionization chambers with a smaller nominal sensitive volume (0.015-0.3 cm³), the deviation between WFF and FFF beams was less than 0.4% for %dd(10)_x > 62%. The specifier TPR_20,10 showed good agreement between WFF and FFF beams (< 0.3%) only for low energies. Conclusion: The results confirm that %dd(10)_x is a suitable beam quality specifier for FFF beams with an acceptable bias. The deviation depends on the volume of the ionization chamber; using %dd(10)_x to predict k_Q for a large-volume chamber in an FFF photon field may lead to unacceptable errors according to the results of this study. This bias may be caused by the volume effect due to the inhomogeneous photon fields of FFF linear accelerators.
Error Detection and Correction in Spelling.
ERIC Educational Resources Information Center
Lydiatt, Steve
1984-01-01
Teachers can discover students' means of dealing with spelling as a problem through investigations of their error detection and correction skills. Approaches for measuring sensitivity and bias are described, as are means of developing appropriate instructional activities. (CL)
Miller, Chad S
2013-01-01
Nearly half of medical errors can be attributed to an error of clinical reasoning or decision making. It is estimated that the correct diagnosis is missed or delayed in between 5% and 14% of acute hospital admissions. Through understanding why and how physicians make these errors, it is hoped that strategies can be developed to decrease their number. In the present case, a patient presented with dyspnea, gastrointestinal symptoms and weight loss; the diagnosis was initially missed because the treating physicians took mental shortcuts and relied on heuristics. Heuristics have an inherent bias that can lead to faulty reasoning or conclusions, especially in complex or difficult cases. Affective bias, the overinvolvement of emotion in clinical decision making, limited the information available for diagnosis because of hesitancy to acquire a full history and perform a complete physical examination in this patient. Zebra retreat, another type of bias, occurs when a rare diagnosis figures prominently on the differential diagnosis but the physician retreats from it for various reasons; zebra retreat also factored into the delayed diagnosis. Through the description of these clinical reasoning errors in an actual case, it is hoped that future errors can be prevented or that additional research in this area will be inspired.
Assessment of Biases in MODIS Surface Reflectance Due to Lambertian Approximation
NASA Technical Reports Server (NTRS)
Wang, Yujie; Lyapustin, Alexei I.; Privette, Jeffrey L.; Cook, Robert B.; SanthanaVannan, Suresh K.; Vermote, Eric F.; Schaaf, Crystal
2010-01-01
Using MODIS data and the AERONET-based Surface Reflectance Validation Network (ASRVN), this work studies errors of MODIS atmospheric correction caused by the Lambertian approximation. On one hand, this approximation greatly simplifies the radiative transfer model, reduces the size of the look-up tables, and makes the operational algorithm faster. On the other hand, uncompensated atmospheric scattering caused by the Lambertian model systematically biases the results. For example, for a typical bowl-shaped bidirectional reflectance distribution function (BRDF), the derived reflectance is underestimated at high solar or view zenith angles, where the BRDF is high, and is overestimated at low zenith angles, where the BRDF is low. The magnitude of the biases grows with the amount of scattering in the atmosphere, i.e., at shorter wavelengths and at higher aerosol concentrations. The slope of the regression of Lambertian surface reflectance vs. ASRVN bidirectional reflectance factor (BRF) is about 0.85 in the red and 0.6 in the green bands. This error propagates into the MODIS BRDF/albedo algorithm, slightly reducing the magnitude of overall reflectance and the anisotropy of the BRDF. This results in a small negative bias of spectral surface albedo. An assessment for the GSFC (Greenbelt, USA) validation site shows albedo reductions of 0.004 in the near-infrared, 0.005 in the red, and 0.008 in the green MODIS bands.
Cognitive Reflection, Decision Biases, and Response Times
Alós-Ferrer, Carlos; Garagnani, Michele; Hügelschäfer, Sabine
2016-01-01
We present novel evidence on response times and personality traits in standard questions from the decision-making literature where responses are relatively slow (medians around half a minute or above). To this end, we measured response times in a number of incentivized, framed items (decisions from description) including the Cognitive Reflection Test, two additional questions following the same logic, and a number of classic questions used to study decision biases in probability judgments (base-rate neglect, the conjunction fallacy, and the ratio bias). All questions create a conflict between an intuitive process and more deliberative thinking. For each item, we then created a non-conflict version by either making the intuitive impulse correct (resulting in an alignment question), shutting it down (creating a neutral question), or making it dominant (creating a heuristic question). For CRT questions, the differences in response times are as predicted by dual-process theories, with alignment and heuristic variants leading to faster responses and neutral questions to slower responses than the original, conflict questions. For decision biases (where responses are slower), evidence is mixed. To explore the possible influence of personality factors on both choices and response times, we used standard personality scales including the Rational-Experiential Inventory and the Big Five, and used them as controls in regression analysis. PMID:27713710
Evaluation of the Klobuchar model in Taiwan
NASA Astrophysics Data System (ADS)
Li, Jinghua; Wan, Qingtao; Ma, Guanyi; Zhang, Jie; Wang, Xiaolan; Fan, Jiangtao
2017-09-01
Ionospheric delay is the main error source in Global Navigation Satellite Systems (GNSS). Ionospheric models are one way to correct this delay: single-frequency GNSS users correct the ionospheric delay using correction parameters broadcast by the satellites. The Klobuchar model is widely used in the Global Positioning System (GPS) and COMPASS because it is simple and convenient for real-time calculation. The model was established on observations mainly from Europe and the USA, and it does not describe the equatorial anomaly region. Southern China is located near the north crest of the equatorial anomaly, where the ionosphere has complex spatial and temporal variation. Assessing the validity of the Klobuchar model in this area is therefore important for improving it. Eleven years (2003-2014) of data from one GPS receiver located at Taoyuan, Taiwan (121°E, 25°N) are used to assess the validity of the Klobuchar model in Taiwan. Total electron content (TEC) calculated from the dual-frequency GPS observations is used as the reference, and TEC based on the Klobuchar model is compared against it. The residual, defined as the difference between the TEC from the Klobuchar model and the reference, reflects the absolute error of the model; the RMS correction percentage measures the model's performance relative to the observations. The residuals' long-term variation, the RMS correction percentage, and their changes with latitude are analyzed to assess the model. In some months the RMS correction did not reach the goal of 50% proposed by Klobuchar, especially in the winters of low solar activity years and at nighttime. The RMS correction depended neither on the 11-year solar cycle nor on latitude. Unlike the RMS correction, the residuals changed with solar activity, similar to the variation of TEC.
The residuals were large in the daytime, during the equinox seasons, and in high solar activity years; they were small at night, during the solstice seasons, and in low activity years. During 1300-1500 BJT in high solar activity years, the mean bias was negative, implying the model underestimated TEC on average. The maximum mean bias was 33 TECU in April 2014, and the maximum underestimation reached 97 TECU in October 2011. During 0000-0200 BJT, the residuals had a small mean bias, a small variation range, and a small standard deviation, suggesting the model describes the nighttime ionospheric TEC better than the daytime TEC. Besides varying with solar activity, the residuals also vary with latitude. The mean bias reached its maximum at 20-22°N, corresponding to the north crest of the equatorial anomaly. At this latitude, the maximum mean bias was 47 TECU below the observation in high activity years, and 12 TECU below in low activity years. The minimum variation range appeared at 30-32°N in both high and low activity years, but the minimum mean bias was at different latitudes: 30-32°N in high activity years and 24-26°N in low activity years. For an ideal model, the residuals should have a small mean bias and a small variation range. Further study is needed to characterize the distribution of the residuals and to improve the model.
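For reference, the broadcast Klobuchar algorithm that the study evaluates can be sketched directly from the GPS interface specification (IS-GPS-200). The coefficient values in the example below are illustrative broadcast values, not values from this study.

```python
import math

def klobuchar_delay(alpha, beta, lat_sc, lon_sc, elev_sc, az_rad, gps_sec):
    """Single-frequency ionospheric delay (seconds) per the broadcast
    Klobuchar model (IS-GPS-200). lat/lon/elevation are in semicircles,
    azimuth in radians, gps_sec is GPS time of week in seconds."""
    psi = 0.0137 / (elev_sc + 0.11) - 0.022                      # earth-centred angle
    phi_i = max(-0.416, min(0.416, lat_sc + psi * math.cos(az_rad)))
    lam_i = lon_sc + psi * math.sin(az_rad) / math.cos(phi_i * math.pi)
    phi_m = phi_i + 0.064 * math.cos((lam_i - 1.617) * math.pi)  # geomagnetic latitude
    t = (4.32e4 * lam_i + gps_sec) % 86400.0                     # local time at pierce point
    amp = max(0.0, sum(a * phi_m ** n for n, a in enumerate(alpha)))
    per = max(72000.0, sum(b * phi_m ** n for n, b in enumerate(beta)))
    x = 2.0 * math.pi * (t - 50400.0) / per                      # phase of the cosine
    slant = 1.0 + 16.0 * (0.53 - elev_sc) ** 3                   # obliquity factor
    if abs(x) < 1.57:
        return slant * (5e-9 + amp * (1.0 - x ** 2 / 2.0 + x ** 4 / 24.0))
    return slant * 5e-9                                          # night-time floor: 5 ns

# illustrative broadcast coefficients (not from this study)
ALPHA = (3.82e-9, 1.49e-8, -1.79e-7, 0.0)
BETA = (1.43e5, 0.0, -3.28e5, 1.13e5)
d_day = klobuchar_delay(ALPHA, BETA, 0.2, 0.6, 0.5, 0.0, 24480.0)    # local ~14:00
d_night = klobuchar_delay(ALPHA, BETA, 0.2, 0.6, 0.5, 0.0, 71280.0)  # local ~03:00
```

The fixed 5 ns night-time value is one reason the model struggles at night and near the equatorial anomaly, which is consistent with the residual behaviour reported above.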
An algebraic algorithm for nonuniformity correction in focal-plane arrays.
Ratliff, Bradley M; Hayat, Majeed M; Hardie, Russell C
2002-09-01
A scene-based algorithm is developed to compensate for bias nonuniformity in focal-plane arrays. Nonuniformity can be extremely problematic, especially for mid- to far-infrared imaging systems. The technique is based on use of estimates of interframe subpixel shifts in an image sequence, in conjunction with a linear-interpolation model for the motion, to extract information on the bias nonuniformity algebraically. The performance of the proposed algorithm is analyzed by using real infrared and simulated data. One advantage of this technique is its simplicity; it requires relatively few frames to generate an effective correction matrix, thereby permitting the execution of frequent on-the-fly nonuniformity correction as drift occurs. Additionally, the performance is shown to exhibit considerable robustness with respect to lack of the common types of temporal and spatial irradiance diversity that are typically required by statistical scene-based nonuniformity correction techniques.
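The algebraic idea, that frames related by a known shift over-determine the per-pixel bias, can be illustrated with a toy version that assumes an exact one-pixel horizontal shift (the paper's algorithm estimates subpixel shifts and uses a linear-interpolation motion model; the variable names here are illustrative):

```python
import numpy as np

def estimate_bias(frame1, frame2):
    """Recover per-pixel additive bias from two frames related by an exact
    one-pixel horizontal scene shift; column 0 serves as the bias reference."""
    # frame1[i, j+1] - frame2[i, j] = bias[i, j+1] - bias[i, j]
    diff = frame1[:, 1:] - frame2[:, :-1]
    bias = np.zeros_like(frame1)
    bias[:, 1:] = np.cumsum(diff, axis=1)   # integrate differences along each row
    return bias

# synthetic check: build a scene and a bias pattern, form the two shifted
# frames, and recover the bias algebraically
rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 100.0, (8, 9))     # true irradiance, one extra column
bias = rng.normal(0.0, 5.0, (8, 8))
bias[:, 0] = 0.0                            # reference column (bias known there)
frame1 = scene[:, :-1] + bias               # detector sees scene + fixed-pattern bias
frame2 = scene[:, 1:] + bias                # same detector after a 1-pixel pan
corrected = frame1 - estimate_bias(frame1, frame2)
```

In this noiseless toy case two frames suffice; the shift cancels the scene term pixel-by-pixel, leaving only bias differences that a cumulative sum integrates back to the bias map (up to a per-row reference offset).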
Correcting for particle counting bias error in turbulent flow
NASA Technical Reports Server (NTRS)
Edwards, R. V.; Baratuci, W.
1985-01-01
Even with an ideal seeding device that generates particles which exactly follow the flow, measurements are still subject to a major source of error: particle counting bias, wherein the probability of measuring a velocity is a function of the velocity itself. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by a particular flow measurement system. To check various correction mechanisms in an ideal way, and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation of laser anemometer measurements in a turbulent flow was constructed. That simulator and the results of its use are discussed.
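A simulation of this kind is easy to reproduce in miniature. The sketch below draws velocities, applies a counting bias in which the detection probability is proportional to velocity, and then applies one classic correction, inverse-velocity weighting in the spirit of McLaughlin and Tiederman; this is only one of the many schemes such a simulator can compare, and the numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean, true_std = 10.0, 3.0
u = rng.normal(true_mean, true_std, 200_000)    # velocities "offered" to the probe
u = u[u > 0.0]                                  # keep the flow one-directional

# counting bias: faster particles cross the probe volume more often, so the
# probability that a velocity is measured is proportional to its magnitude
measured = u[rng.uniform(0.0, u.max(), u.size) < u]

naive_mean = measured.mean()                    # biased high, roughly by std**2 / mean
weights = 1.0 / measured                        # inverse-velocity (residence-time) weights
corrected_mean = np.sum(weights * measured) / np.sum(weights)
```

With these parameters the arithmetic mean of the biased sample overshoots by roughly std²/mean ≈ 0.9, while the weighted mean recovers the true mean, illustrating why the uncorrected error can be large in high-turbulence-intensity flows.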
NASA Technical Reports Server (NTRS)
Hayne, G. S.; Hancock, D. W., III
1990-01-01
Range estimates from a radar altimeter have biases which are a function of the significant wave height (SWH) and the satellite attitude angle (AA). Based on results of prelaunch Geosat modeling and simulation, a correction for SWH and AA was already applied to the sea-surface height estimates from Geosat's production data processing. By fitting a detailed model radar return waveform to Geosat waveform sampler data, it is possible to provide independent estimates of the height bias, the SWH, and the AA. The waveform fitting has been carried out for 10-sec averages of Geosat waveform sampler data over a wide range of SWH and AA values. The results confirm that Geosat sea-surface-height correction is good to well within the original dm-level specification, but that an additional height correction can be made at the level of several cm.
New Radiosonde Temperature Bias Adjustments for Potential NWP Applications Based on GPS RO Data
NASA Astrophysics Data System (ADS)
Sun, B.; Reale, A.; Ballish, B.; Seidel, D. J.
2014-12-01
Conventional radiosonde observations (RAOBs), along with satellite and other in situ data, are assimilated in numerical weather prediction (NWP) models to generate a forecast. Radiosonde temperature observations, however, have solar and thermal radiation induced biases (typically a warm daytime bias from sunlight heating the sensor and a cold bias at night as the sensor emits longwave radiation). Radiation corrections made at stations, based on algorithms provided by radiosonde manufacturers or national meteorological agencies, may not be adequate, so biases remain. To adjust for these biases, NWP centers may make additional adjustments to radiosonde data. However, the radiation correction (RADCOR) schemes used in the NOAA NCEP data assimilation and forecasting system are outdated and do not cover several widely used contemporary radiosonde types. This study focuses on work whose objective is to improve these corrections and test their impacts on NWP forecasting and analysis. GPS Radio Occultation (RO) dry temperature (Tdry) is considered highly accurate in the upper troposphere and lower stratosphere, where atmospheric water vapor is negligible. This study uses GPS RO Tdry from the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) as the reference to quantify the radiation-induced RAOB temperature errors, by analyzing approximately three years of collocated RAOB and COSMIC GPS RO data compiled by the NOAA Products Validation System (NPROVS). The new radiation adjustments are developed for different solar angle categories and for all common sonde types flown in the WMO global operational upper air network. Results for global data and several commonly used sondes are presented in the context of NCEP Global Forecast System observation-minus-background analysis, indicating projected impacts in reducing forecast error. Dedicated NWP impact studies to quantify the impact of the new RADCOR schemes on the NCEP analyses and forecasts are under consideration.
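The basic construction of such adjustments, binning collocated RAOB-minus-RO temperature differences by solar-elevation category and using the per-category mean difference as the correction, can be sketched on synthetic data. The bin edges and bias values below are illustrative, not NPROVS values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
solar_elev = rng.uniform(-30.0, 75.0, n)                  # degrees, illustrative range
# synthetic RAOB-minus-RO differences: warm bias by day, cold bias at night
true_bias = np.where(solar_elev > 0.0, 0.2 + 0.01 * solar_elev, -0.3)
diff = true_bias + rng.normal(0.0, 0.5, n)                # plus collocation noise

# adjustment table: mean RAOB-minus-RO difference per solar-elevation category
edges = np.array([-90.0, -5.0, 0.0, 10.0, 30.0, 60.0, 90.0])
cat = np.digitize(solar_elev, edges) - 1
table = np.array([diff[cat == k].mean() for k in range(len(edges) - 1)])

adjusted = diff - table[cat]                              # apply the adjustment
```

By construction the per-category mean of the adjusted differences is zero; operationally the table would be stratified further by sonde type and pressure level, as the study describes.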
Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette
2018-03-01
In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is the multiple-biomarker trial, which aims to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually, which imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further, though small, improvements in the bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
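The study applies Firth's penalty in a Cox model; the same bias-reducing idea is easiest to see in logistic regression, where the Jeffreys-prior penalty modifies the score with hat-matrix leverages and keeps estimates finite even under complete separation. The following is a minimal sketch of that logistic-regression variant, not the paper's Cox implementation:

```python
import numpy as np

def firth_logistic(X, y, n_iter=100, tol=1e-10):
    """Logistic regression with Firth's bias-reducing (Jeffreys-prior) penalty.
    Modified score: U*(b) = X'(y - p + h*(0.5 - p)), h = leverages of W^0.5 X."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                          # logistic weights
        XtWX = X.T @ (W[:, None] * X)
        # leverages: diag of W^0.5 X (X'WX)^-1 X' W^0.5
        h = np.einsum('ij,jk,ik->i', X, np.linalg.inv(XtWX), X) * W
        step = np.linalg.solve(XtWX, X.T @ (y - p + h * (0.5 - p)))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# complete separation: the ordinary MLE diverges, Firth's estimate stays finite
X = np.column_stack([np.ones(6), [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
beta = firth_logistic(X, y)
```

The penalty shrinks the slope toward zero by an O(1/n) amount, which is exactly the small-sample bias reduction exploited in the trial analyses above.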
Mamtani, Manju; Jawahirani, Anil; Das, Kishor; Rughwani, Vinky; Kulkarni, Hemant
2006-08-01
It is being increasingly recognized that a majority of the countries in the thalassemia belt need a cost-effective screening program as the first step towards control of thalassemia. Although the naked eye single tube red cell osmotic fragility test (NESTROFT) has been considered a very effective screening tool for beta-thalassemia trait, assessment of its diagnostic performance has been affected by reference-test bias and verification bias. Here, we set out to provide estimates of the sensitivity and specificity of NESTROFT corrected for these potential biases. We conducted a cross-sectional diagnostic test evaluation study using data from 1563 subjects from Central India with a high prevalence of beta-thalassemia. We used latent class modelling, after ensuring its validity, to account for the reference-test bias, and global sensitivity analysis to control the verification bias. We also compared the results of latent class modelling with those of five discriminant indexes. We observed that across a range of cut-offs for the mean corpuscular volume (MCV) and the hemoglobin A2 (HbA2) concentration, the average sensitivity and specificity of NESTROFT obtained from latent class modelling were 99.8% and 83.7%, respectively. These estimates were comparable to those characterizing the diagnostic performance of HbA2, which is considered by many as the reference test to detect beta-thalassemia. After correction for verification bias, these estimates were 93.4% and 97.2%, respectively. Combined with the inexpensive and quick disposition of NESTROFT, these results strongly support its candidature as a screening tool, especially in resource-poor, high-prevalence settings.
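The paper corrects verification bias with latent class modelling and global sensitivity analysis; a simpler, closed-form illustration of the same problem is the Begg-Greenes adjustment, which assumes verification depends only on the screening-test result. The counts below are constructed for illustration and are not the study's data.

```python
def begg_greenes(n_pos, n_neg, ver_pos, dis_ver_pos, ver_neg, dis_ver_neg):
    """Verification-bias-corrected sensitivity and specificity (Begg & Greenes).
    Assumes disease verification depends only on the screening-test result."""
    d_pos = n_pos * dis_ver_pos / ver_pos   # estimated diseased among all test-positives
    d_neg = n_neg * dis_ver_neg / ver_neg   # estimated diseased among all test-negatives
    sens = d_pos / (d_pos + d_neg)
    spec = (n_neg - d_neg) / ((n_neg - d_neg) + (n_pos - d_pos))
    return sens, spec

# illustrative cohort: 270 screen-positives (all verified, 90 diseased) and
# 730 screen-negatives (only 73 verified, 1 diseased)
sens, spec = begg_greenes(270, 730, 270, 90, 73, 1)
naive_sens = 90 / (90 + 1)                  # computed on verified subjects only
```

Because screen-negatives are rarely verified, the naive sensitivity (about 0.99 here) is inflated; rescaling the verified counts to the full screened cohort restores the underlying values.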
SU-E-T-525: Ionization Chamber Perturbation in Flattening Filter Free Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czarnecki, D; Voigts-Rhetz, P von; Zink, K
2015-06-15
Purpose: Changing the characteristics of a photon beam by mechanically removing the flattening filter may impact the dose response of ionization chambers. Thus, perturbation factors of cylindrical ionization chambers in conventional and flattening filter free photon beams were calculated by Monte Carlo simulations. Methods: The EGSnrc/BEAMnrc code system was used for all Monte Carlo calculations. BEAMnrc models of nine different linear accelerators with and without flattening filter were used to create realistic photon sources. Monte Carlo based calculations to determine the fluence perturbations due to the presence of the chamber's components, the different material of the sensitive volume (air instead of water), as well as the volume effect were performed with the user code egs-chamber. Results: Stem, central electrode, wall, density, and volume perturbation factors for linear accelerators with and without flattening filter were calculated as a function of the beam quality specifier TPR_20,10. No bias between the perturbation factors as a function of TPR_20,10 for flattening filter free beams and conventional linear accelerators could be observed for the perturbations caused by the components of the ionization chamber and the sensitive volume. Conclusion: The results indicate that the well-known small bias between the beam quality correction factor as a function of TPR_20,10 for flattening filter free and conventional linear accelerators is not caused by the geometry of the detector but rather by the material of the sensitive volume. This suggests that the bias for flattening filter free photon fields is caused only by the different material of the sensitive volume (air instead of water).
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
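The upward bias of leading sample eigenvalues, and the de-biasing idea behind S-POET, can be demonstrated in miniature: for a spiked covariance with unit noise variance and c = p/n, the top sample eigenvalue concentrates near lam*(1 + c/(lam - 1)), which can be inverted to recover lam. This sketch is a simplified illustration of that correction, not the paper's full S-POET estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, spike = 200, 400, 10.0
c = p / n                                   # dimensionality ratio, here 2.0

# average the top sample eigenvalue over replications to tame sampling noise
tops = []
for _ in range(50):
    X = rng.standard_normal((n, p))
    X[:, 0] *= np.sqrt(spike)               # population covariance diag(10, 1, ..., 1)
    tops.append(np.linalg.eigvalsh(X.T @ X / n)[-1])
lam_hat = float(np.mean(tops))              # biased upward, near spike*(1 + c/(spike-1))

# de-bias by inverting lam_hat = lam*(1 + c/(lam - 1)), a quadratic in lam
b = 1.0 + lam_hat - c
lam_corr = (b + np.sqrt(b * b - 4.0 * lam_hat)) / 2.0
```

With these parameters the raw top eigenvalue sits near 12.2 rather than 10, and the inversion removes most of that bias, which is the first-order effect the paper's covariance estimator corrects.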