Fourcade, Yoan; Engler, Jan O.; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline on how to account for it. Here we compared the performance of five methods of bias correction on three datasets of species occurrence: one “virtual” dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling bias corresponding to potential types of empirical bias. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, simple systematic sampling of records consistently ranked among the best performers across the range of conditions tested, whereas the other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline for accounting for sampling bias. Nevertheless, systematic sampling appears to be the most efficient method of correcting sampling bias and can be recommended in most cases. PMID:24818607
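Since systematic sampling of records is the correction this abstract singles out, a minimal sketch of one common variant, grid-based spatial thinning that keeps one record per cell, may make it concrete. All names and the cell size are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of systematic (grid-based) thinning of occurrence records.
# cell_size is in the same units as the coordinates; values are illustrative.
import numpy as np

def systematic_thin(lon, lat, cell_size=0.5, seed=0):
    """Keep one randomly chosen record per grid cell."""
    rng = np.random.default_rng(seed)
    # Integer cell indices for each record
    cells = np.stack([np.floor(lon / cell_size),
                      np.floor(lat / cell_size)], axis=1).astype(int)
    keep = []
    for cell in np.unique(cells, axis=0):
        idx = np.flatnonzero((cells == cell).all(axis=1))
        keep.append(rng.choice(idx))       # retain a single record in this cell
    return np.sort(np.array(keep))

# Example: 1000 records thinned to at most one per 0.5-degree cell
lon = np.random.default_rng(1).uniform(-5, 5, 1000)
lat = np.random.default_rng(2).uniform(40, 50, 1000)
kept = systematic_thin(lon, lat)
print(len(kept), "records kept")
```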
Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren
2014-10-20
This report is the first to investigate cell number as a potential source of inter-treatment bias in gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias in cell number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised.
How does bias correction of RCM precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Vaze, J.; Evans, J. P.
2014-09-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the difference between the tested methods is small in the modelling experiments here (and as reported in the literature), mainly because of the substantial corrections required and inconsistent errors over time (non-stationarity). The errors remaining in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome the RCM's limitations in simulating precipitation sequences, which affect runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
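Quantile mapping, the method that performs best here and recurs throughout this section, can be sketched in a few lines. The following is a generic empirical version, assuming 1-D daily arrays over a common historical period; it is an illustration, not the exact implementation assessed in the paper:

```python
# Minimal empirical quantile-mapping sketch for daily precipitation.
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Map model values onto observed quantiles via empirical CDFs."""
    q = np.linspace(0.0, 1.0, 101)          # evaluation quantiles
    mod_q = np.quantile(model_hist, q)      # model inverse CDF knots
    obs_q = np.quantile(obs_hist, q)        # observed inverse CDF knots
    # For each future value, find its model quantile, then read off the
    # corresponding observed value (linear interpolation between knots).
    # Note: ties from many dry days would need special handling in practice.
    probs = np.interp(model_future, mod_q, q)
    return np.interp(probs, q, obs_q)

corrected = quantile_map(np.random.gamma(0.8, 4, 3650),
                         np.random.gamma(1.0, 5, 3650),
                         np.random.gamma(0.8, 4, 3650))
```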
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with lower biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
How does bias correction of regional climate model precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Wang, B.; Vaze, J.; Evans, J. P.
2015-02-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the methods are small in the modelling experiments here (and as reported in the literature), mainly due to the substantial corrections required and inconsistent errors over time (non-stationarity). The errors in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome the RCM's limitations in simulating precipitation sequences, which affect runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
NASA Astrophysics Data System (ADS)
Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.
2018-01-01
Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data was used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed for evaluation of the different methods of bias correction. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall compared with using monthly factors. Methods such as local intensity scaling, modified power transformation and distribution mapping, which adjust wet-day frequencies, performed better than the other methods, which did not consider adjustment of wet-day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of observed data with NSE values above 0.81 over most parts of India. Hydrological simulations forced with the bias corrected rainfall (distribution mapping and modified power transformation methods that used the proposed daily correction factors) were similar to those simulated with the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and, consequently, on the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors is preferable for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
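The core sliding-window idea, deriving a correction factor per calendar day from a window of surrounding days rather than from one monthly pool, can be sketched briefly. The array layout (years, 365) and the window width are illustrative assumptions:

```python
# Hedged sketch of sliding-window daily correction factors: each calendar
# day's factor is derived from all values within +/-15 days of it, across
# all years. Windows wrap around the year boundary.
import numpy as np

def daily_scaling_factors(obs, mod, half_window=15):
    """obs, mod: arrays of shape (years, 365). Returns 365 factors."""
    n_days = obs.shape[1]
    factors = np.empty(n_days)
    for d in range(n_days):
        win = np.arange(d - half_window, d + half_window + 1) % n_days
        f_obs = obs[:, win].mean()
        f_mod = mod[:, win].mean()
        factors[d] = f_obs / f_mod if f_mod > 0 else 1.0
    return factors  # multiply model rainfall on day d by factors[d]
```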
NASA Astrophysics Data System (ADS)
Passow, Christian; Donner, Reik
2017-04-01
Quantile mapping (QM) is an established concept that allows correction of systematic biases in multiple quantiles of the distribution of a climatic observable. It shows remarkable results in correcting biases in historical simulations against observational data and outperforms simpler correction methods that relate only to the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend-preserving versions of QM were introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM-based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend-preserving approach to bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles at the same resolution as the given data and to include trends or other dependencies. We demonstrate the performance of the new method of linear regression quantile mapping (RQM) in correcting biases of temperature and precipitation products from historical runs (1959-2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: Cannon, A. J., Sobie, S. R., Murdock, T. Q.: Bias Correction of GCM Precipitation by Quantile Mapping: How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6938-6959, 2015. Cannon, A. J.: Multivariate Bias Correction of Climate Model Outputs: Matching Marginal Distributions and Intervariable Dependence Structure. Journal of Climate, 29, 7045-7064, 2016.
RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.
Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang
2017-01-03
The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias across the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
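The core RELIC idea, regressing one channel's log intensities on the other's over paired control probes and using the fit to re-scale a channel, can be illustrated loosely. The real implementation is the ENmix Bioconductor package; everything below (synthetic intensities, function names) is illustrative only:

```python
# Loose sketch of the RELIC idea: fit a regression between the log
# intensities of paired internal control probes in the two dye channels,
# then use it to put one channel on the scale of the other.
import numpy as np

rng = np.random.default_rng(0)
ctrl_grn = rng.uniform(500, 5000, 85)                    # paired control probes
ctrl_red = 1.3 * ctrl_grn * rng.lognormal(0, 0.05, 85)   # dye-biased channel

# Regression on logarithms of the paired control intensities
b, a = np.polyfit(np.log(ctrl_red), np.log(ctrl_grn), 1)

def correct_red(red_intensities):
    """Map red-channel intensities onto the green-channel scale."""
    return np.exp(a + b * np.log(np.asarray(red_intensities)))

print(correct_red([1000.0, 2000.0]))
```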
A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield
Ringard, Justine; Seyler, Frederique; Linguet, Laurent
2017-01-01
Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. Such statistical adjustment cannot correct SPP pixels for which there are no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that uses not the daily gauge data of the pixel to be corrected but the daily gauge data from surrounding pixels, which requires a spatial analysis. The first step defines hydroclimatic areas using a spatial classification that groups precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE (rRMSE) and relative bias (rBIAS) statistical errors. The results show that varying the scale of analysis reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics within each hydroclimatic area, and the resulting bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723
Towards process-informed bias correction of climate change simulations
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Shepherd, Theodore G.; Widmann, Martin; Zappa, Giuseppe; Walton, Daniel; Gutiérrez, José M.; Hagemann, Stefan; Richter, Ingo; Soares, Pedro M. M.; Hall, Alex; Mearns, Linda O.
2017-11-01
Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.
NASA Astrophysics Data System (ADS)
Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.
2018-05-01
Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
Comparing State SAT Scores: Problems, Biases, and Corrections.
ERIC Educational Resources Information Center
Gohmann, Stephen F.
1988-01-01
One method to correct for selection bias in comparing Scholastic Aptitude Test (SAT) scores among states is presented, which is a modification of J. J. Heckman's Selection Bias Correction (1976, 1979). Empirical results suggest that sample selection bias is present in SAT score regressions. (SLD)
Correction factors for self-selection when evaluating screening programmes.
Spix, Claudia; Berthold, Frank; Hero, Barbara; Michaelis, Jörg; Schilling, Freimut H
2016-03-01
In screening programmes there is recognized bias introduced through participant self-selection (the healthy screenee bias). Methods used to evaluate screening programmes include intention-to-screen, per-protocol, and the "post hoc" approach, in which, after introducing screening for everyone, the only evaluation option is participants versus non-participants. All methods are prone to bias through self-selection. We present an overview of approaches to correct for this bias. We considered four methods to quantify and correct for self-selection bias. Simple calculations revealed that these corrections are actually all identical and can be converted into each other. Based on this, correction factors for further situations and measures were derived. The application of these correction factors requires a number of assumptions. Using the German Neuroblastoma Screening Study as an example, no relevant reduction in mortality or stage 4 incidence due to screening was observed. The largest bias (in favour of screening) was observed when comparing participants with non-participants. Correcting for bias is particularly necessary when using the post hoc evaluation approach; however, in this situation not all required data are available. External data or further assumptions may be required for estimation.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2018-02-01
Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
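The two-step structure described here, rotate to independent components, correct each component as a univariate series, rotate back, can be sketched with scikit-learn's FastICA. The univariate correction below is simple mean/variance scaling for brevity; any 1-D method such as quantile mapping could be substituted. Shapes and names are illustrative assumptions:

```python
# Illustrative sketch of ICA-based spatial bias correction for a
# (time, grid_cells) array, followed by back-transformation.
import numpy as np
from sklearn.decomposition import FastICA

def ica_bias_correct(model, obs, n_components=10):
    ica = FastICA(n_components=n_components, random_state=0)
    s_obs = ica.fit_transform(obs)            # independent signals of obs
    s_mod = ica.transform(model)              # model data in the same basis
    # Univariate correction per signal (mean/variance scaling here)
    s_mod = (s_mod - s_mod.mean(0)) / (s_mod.std(0) + 1e-12)
    s_mod = s_mod * s_obs.std(0) + s_obs.mean(0)
    return ica.inverse_transform(s_mod)       # back to grid space

rng = np.random.default_rng(0)
obs = rng.normal(size=(1000, 50))             # (time, grid cells)
model = 1.5 * rng.normal(size=(1000, 50)) + 2.0
corrected = ica_bias_correct(model, obs)      # grid-scale correction follows
```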
Hypothesis Testing Using Factor Score Regression: A Comparison of Four Methods
ERIC Educational Resources Information Center
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2016-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and…
A brain MRI bias field correction method created in the Gaussian multi-scale space
NASA Astrophysics Data System (ADS)
Chen, Mingsheng; Qin, Mingxin
2017-07-01
A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In the multi-scale Gaussian space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by a weighted sum of the image details in each layer of the space. Next, the bias-field-corrected MR image is retrieved after a gamma (γ) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
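The pipeline described, per-scale Gaussian details recombined to suppress the smooth bias field, then a gamma adjustment, can be sketched roughly. The sigmas, weights and gamma value below are illustrative, not the paper's:

```python
# Rough sketch of the multi-scale idea for a 2-D float image: details at
# several Gaussian scales are recombined to suppress the smooth bias
# field, followed by a gamma correction.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_correct(img, sigmas=(2, 4, 8, 16), gamma=0.8):
    details = [img - gaussian_filter(img, s) for s in sigmas]  # per-scale detail
    weights = np.ones(len(sigmas)) / len(sigmas)               # equal weights
    flat = sum(w * d for w, d in zip(weights, details))        # bias-suppressed
    flat -= flat.min()                                         # shift to >= 0
    flat /= flat.max() + 1e-12                                 # normalise to [0, 1]
    return flat ** gamma                                       # gamma correction
```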
Good, Nicholas; Mölter, Anna; Peel, Jennifer L; Volckens, John
2017-07-01
The AE51 micro-Aethalometer (microAeth) is a popular and useful tool for assessing personal exposure to particulate black carbon (BC). However, few users of the AE51 are aware that its measurements are biased low (by up to 70%) due to the accumulation of BC on the filter substrate over time; previous studies of personal black carbon exposure are likely to have suffered from this bias. Although methods to correct for bias in micro-Aethalometer measurements of particulate black carbon have been proposed, these methods have not been verified in the context of personal exposure assessment. Here, five Aethalometer loading correction equations based on published methods were evaluated. Laboratory-generated aerosols of varying black carbon content (ammonium sulfate, Aquadag and NIST diesel particulate matter) were used to assess the performance of these methods. Filters from a personal exposure assessment study were also analyzed to determine how the correction methods performed for real-world samples. Standard correction equations produced correction factors with root mean square errors of 0.10 to 0.13 and mean bias within ±0.10. An optimized correction equation is also presented, along with sampling recommendations for minimizing bias when assessing personal exposure to BC using the AE51 micro-Aethalometer.
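Several published loading corrections of the kind evaluated here take a linear form in the filter attenuation (ATN), e.g. Virkkula et al. (2007): corrected BC = (1 + k·ATN) · raw BC, with k an empirically fitted, aerosol-dependent constant. The sketch below implements that generic form; the k value and data are purely illustrative, and this is not necessarily the optimized equation the authors propose:

```python
# Hedged sketch of a linear Aethalometer loading correction.
import numpy as np

def correct_loading(bc_raw, atn, k=0.005):
    """Compensate for BC accumulating on the AE51 filter spot."""
    return (1.0 + k * np.asarray(atn)) * np.asarray(bc_raw)

# Example: raw BC in ng/m3 with increasing filter attenuation
bc_corrected = correct_loading([1200, 1150, 1100], [10, 40, 80])
print(bc_corrected)
```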
Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee
2013-01-01
Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Next, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536
Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method
NASA Astrophysics Data System (ADS)
Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil
2014-05-01
Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR), led by UN ESCWA, CORDEX RCM projections for the Middle East and North Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of the newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates showing pronounced dry and wet seasons. Results show that the regional climate models simulate too low temperatures and often have a displaced rainfall band compared to WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the analysis of the influence of the dry-frequency bias (i.e., climate models simulating too few rain days) on the bias-corrected projections and on the modification of the climate change signal by the DBS method.
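A distribution-based scaling step for precipitation is often built from fitted gamma distributions plus a wet-day frequency adjustment, the dry-frequency issue this abstract highlights. The sketch below shows that generic pattern under stated simplifications; it is not SMHI's DBS implementation:

```python
# Simplified sketch of distribution-based scaling for precipitation: a
# cutoff first removes the model's excess drizzle days so wet-day
# frequencies match, then wet-day amounts are mapped between fitted
# gamma distributions. Threshold and fitting choices are illustrative.
import numpy as np
from scipy.stats import gamma

def dbs_precip(model_hist, obs, model_out, wet_thresh=0.1):
    obs_wet = obs[obs > wet_thresh]
    # Model cutoff chosen so both series have equal wet-day frequency
    cutoff = np.quantile(model_hist, 1.0 - obs_wet.size / obs.size)
    mod_wet = model_hist[model_hist > cutoff]
    a_m, loc_m, sc_m = gamma.fit(mod_wet, floc=0)
    a_o, loc_o, sc_o = gamma.fit(obs_wet, floc=0)
    # Map wet-day amounts through model CDF -> observed inverse CDF
    p = np.clip(gamma.cdf(model_out, a_m, loc_m, sc_m), 1e-6, 0.999)
    return np.where(model_out > cutoff,
                    gamma.ppf(p, a_o, loc_o, sc_o), 0.0)
```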
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
NASA Astrophysics Data System (ADS)
Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.
2015-06-01
Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and are expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, and their use for streamflow simulations results in large biases from observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, the PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have more significant influence than temperature correction methods and the performances of streamflow simulations are consistent with those of corrected precipitation; i.e., the PT and QM methods performed equally best in correcting the flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
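Two of the compared methods are simple enough to sketch directly: linear scaling (LS) matches the observed mean with one factor, while power transformation (PT) also adjusts variability via P' = a·P^b. The parameter search below (matching the coefficient of variation, then the mean) is one common way to fit PT; details vary between studies, and the bracketing interval is an assumption:

```python
# Minimal sketches of linear scaling (LS) and power transformation (PT)
# for daily precipitation over a common period.
import numpy as np
from scipy.optimize import brentq

def linear_scaling(model, obs):
    # LS: one multiplicative factor matching the observed mean
    return model * (obs.mean() / model.mean())

def power_transform(model, obs):
    # PT: P' = a * P**b, with b matching the observed coefficient of
    # variation and a then matching the mean. Assumes the root of f is
    # bracketed by (0.1, 5.0), which holds for typical rainfall series.
    cv_obs = obs.std() / obs.mean()
    f = lambda b: (model**b).std() / (model**b).mean() - cv_obs
    b = brentq(f, 0.1, 5.0)
    a = obs.mean() / (model**b).mean()
    return a * model**b
```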
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
NASA Astrophysics Data System (ADS)
Moghim, S.; Hsu, K.; Bras, R. L.
2013-12-01
General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results that affect their use. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations during a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF, Li et al. 2010). The proposed method is tested with the Community Climate System Model (CCSM3) outputs using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled temperature and the observed one. The results indicate that EDCDFANN has potential to remove the biases of the model outputs.
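The ANN surrogate step can be sketched conceptually with scikit-learn: train a small feed-forward network on GCM variables against observed temperature over a historical period, then apply it to later periods. The synthetic features below stand in for the abstract's inputs (air/skin temperature, humidity, radiation); the EDCDF extreme-value step is omitted here:

```python
# Conceptual sketch of the ANN bias-correction surrogate (not the
# authors' exact configuration); data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(2000, 5))     # GCM variables, historical period
y_obs = X_hist @ rng.normal(size=5) + rng.normal(scale=0.3, size=2000)

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
ann.fit(X_hist, y_obs)                  # train against observations

X_fut = rng.normal(size=(500, 5))       # GCM variables, future period
t_corrected = ann.predict(X_fut)        # bias-corrected temperature estimate
```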
NASA Astrophysics Data System (ADS)
Prasanna, V.
2018-01-01
This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulation results with respect to observation is discussed in detail. Non-linear statistical bias correction is a suitable method for climate change data because it is simple and does not add artificial uncertainty to the impact assessment of climate change scenarios for climate change application studies (agricultural production changes) in the future. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the changing magnitude in future scenarios, varying from one model to the other. Two types of bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved bias correction method, a cumulative distribution function (CDF; Weibull distribution function)-based quantile-mapping algorithm. This study shows that the percentile-based quantile mapping method gives results similar to the CDF (Weibull)-based quantile mapping method, and the two methods are comparable. The bias correction is applied to temperature and precipitation variables for present climate and future projected data to make use of it in a simple statistical model to understand the future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project the changes in agricultural yield under the RCP4.5 and RCP8.5 scenarios. The results revealed a better convergence of model projections in the bias corrected data compared to the uncorrected data. The study can be extended to localized regional domains aimed at understanding future changes in agricultural productivity with an agro-economic or simple statistical model. The statistical model indicated that the total food grain yield is going to increase over the Indian region in the future: the increase is approximately 50 kg/ha for the RCP4.5 scenario from 2001 until the end of 2100, and approximately 90 kg/ha for the RCP8.5 scenario over the same period. There are many studies using bias correction techniques, but this study applies the bias correction technique to future climate scenario data from CMIP5 models and applies it to crop statistics to find future crop yield changes over the Indian region.
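The CDF (Weibull)-based flavour of quantile mapping described here can be sketched by fitting Weibull distributions to observed and simulated wet-day rainfall and transferring values through the fitted CDFs. Thresholds and fitting choices below are illustrative assumptions:

```python
# Hedged sketch of Weibull-CDF-based quantile mapping for rainfall.
import numpy as np
from scipy.stats import weibull_min

def weibull_qm(model_hist, obs, model_future, wet=0.1):
    # Fit Weibull distributions to wet-day amounts (location fixed at 0)
    c_m, loc_m, sc_m = weibull_min.fit(model_hist[model_hist > wet], floc=0)
    c_o, loc_o, sc_o = weibull_min.fit(obs[obs > wet], floc=0)
    # Transfer future model values: model CDF -> observed inverse CDF
    p = np.clip(weibull_min.cdf(model_future, c_m, loc_m, sc_m), 1e-6, 0.999)
    out = weibull_min.ppf(p, c_o, loc_o, sc_o)
    return np.where(model_future > wet, out, 0.0)
```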
Streamflow Bias Correction for Climate Change Impact Studies: Harmless Correction or Wrecking Ball?
NASA Astrophysics Data System (ADS)
Nijssen, B.; Chegwidden, O.
2017-12-01
Projections of the hydrologic impacts of climate change rely on a modeling chain that includes estimates of future greenhouse gas emissions, global climate models, and hydrologic models. The resulting streamflow time series are used in turn as input to impact studies. While these flows can sometimes be used directly in these impact studies, many applications require additional post-processing to remove model errors. Water resources models and regulation studies are a prime example of this type of application. These models rely on specific flows and reservoir levels to trigger reservoir releases and diversions and do not function well if the unregulated streamflow inputs are significantly biased in time and/or amount. This post-processing step is typically referred to as bias-correction, even though this step corrects not just the mean but the entire distribution of flows. Various quantile-mapping approaches have been developed that adjust the modeled flows to match a reference distribution for some historic period. Simulations of future flows are then post-processed using this same mapping to remove hydrologic model errors. These streamflow bias-correction methods have received far less scrutiny than the downscaling and bias-correction methods that are used for climate model output, mostly because they are less widely used. However, some of these methods introduce large artifacts in the resulting flow series, in some cases severely distorting the climate change signal that is present in future flows. In this presentation, we discuss our experience with streamflow bias-correction methods as part of a climate change impact study in the Columbia River basin in the Pacific Northwest region of the United States. To support this discussion, we present a novel way to assess whether a streamflow bias-correction method is merely a harmless correction or is more akin to taking a wrecking ball to the climate change signal.
Vos, Janet R; Hsu, Li; Brohet, Richard M; Mourits, Marian J E; de Vries, Jakob; Malone, Kathleen E; Oosterwijk, Jan C; de Bock, Geertruida H
2015-08-10
Recommendations for treating patients who carry a BRCA1/2 gene are mainly based on cumulative lifetime risks (CLTRs) of breast cancer determined from retrospective cohorts. These risks vary widely (27% to 88%), and it is important to understand why. We analyzed the effects of methods of risk estimation and bias correction and of population factors on CLTRs in this retrospective clinical cohort of BRCA1/2 carriers. The following methods to estimate the breast cancer risk of BRCA1/2 carriers were identified from the literature: Kaplan-Meier, frailty, and modified segregation analyses with bias correction consisting of including or excluding index patients combined with including or excluding first-degree relatives (FDRs) or different conditional likelihoods. These were applied to clinical data of BRCA1/2 families derived from our family cancer clinic, for whom a simulation was also performed to evaluate the methods. CLTRs and 95% CIs were estimated and compared with the reference CLTRs. CLTRs ranged from 35% to 83% for BRCA1 and 41% to 86% for BRCA2 carriers at age 70 years (width of 95% CIs: 10% to 35% and 13% to 46%, respectively). Relative bias varied from -38% to +16%. Bias correction with inclusion of index patients and untested FDRs gave the smallest bias: +2% (SD, 2%) in BRCA1 and +0.9% (SD, 3.6%) in BRCA2. Much of the variation in breast cancer CLTRs in retrospective clinical BRCA1/2 cohorts is due to the bias-correction method, whereas a smaller part is due to population differences. Kaplan-Meier analyses with bias correction that includes index patients and a proportion of untested FDRs provide suitable CLTRs for carriers counseled in the clinic.
Explanation of Two Anomalous Results in Statistical Mediation Analysis
ERIC Educational Resources Information Center
Fritz, Matthew S.; Taylor, Aaron B.; MacKinnon, David P.
2012-01-01
Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special…
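The bias-corrected bootstrap tests discussed in this (truncated) entry can be illustrated for a simple mediated effect a·b. The sketch below uses SciPy's bootstrap with its bias-corrected-and-accelerated ('BCa') interval on synthetic data; the unadjusted b-path regression is a deliberate simplification, and all names are illustrative:

```python
# Illustrative BCa bootstrap interval for an indirect (mediated) effect.
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)          # mediator
y = 0.3 * m + rng.normal(size=n)          # outcome

def indirect_effect(xs, ms, ys):
    a = np.polyfit(xs, ms, 1)[0]          # x -> m slope
    b = np.polyfit(ms, ys, 1)[0]          # m -> y slope (not adjusted for x)
    return a * b

res = bootstrap((x, m, y), indirect_effect, paired=True, vectorized=False,
                n_resamples=2000, method='BCa', random_state=0)
print(res.confidence_interval)            # BCa interval for a*b
```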
Detecting and removing multiplicative spatial bias in high-throughput screening technologies.
Caraus, Iurie; Mazoure, Bogdan; Nadon, Robert; Makarenkov, Vladimir
2017-10-15
Considerable attention has been paid recently to improve data quality in high-throughput screening (HTS) and high-content screening (HCS) technologies widely used in drug development and chemical toxicity research. However, several environmentally- and procedurally-induced spatial biases in experimental HTS and HCS screens decrease measurement accuracy, leading to increased numbers of false positives and false negatives in hit selection. Although effective bias correction methods and software have been developed over the past decades, almost all of these tools have been designed to reduce the effect of additive bias only. Here, we address the case of multiplicative spatial bias. We introduce three new statistical methods meant to reduce multiplicative spatial bias in screening technologies. We assess the performance of the methods with synthetic and real data affected by multiplicative spatial bias, including comparisons with current bias correction methods. We also describe a wider data correction protocol that integrates methods for removing both assay and plate-specific spatial biases, which can be either additive or multiplicative. The methods for removing multiplicative spatial bias and the data correction protocol are effective in detecting and cleaning experimental data generated by screening technologies. As our protocol is of a general nature, it can be used by researchers analyzing current or next-generation high-throughput screens. The AssayCorrector program, implemented in R, is available on CRAN.
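A toy version of the problem setting, removing multiplicative row/column effects from a plate of measurements, can be written as a multiplicative analogue of median polish. The authors' actual estimators live in the AssayCorrector R package; this is an illustration only:

```python
# Toy sketch: remove multiplicative row/column bias from an HTS plate
# by iterative median scaling.
import numpy as np

def remove_mult_bias(plate, n_iter=10):
    p = plate.astype(float).copy()
    for _ in range(n_iter):
        p /= np.median(p, axis=1, keepdims=True)   # row effects
        p /= np.median(p, axis=0, keepdims=True)   # column effects
    return p * np.median(plate)                    # restore overall scale

# Example: a 16x24 (384-well) plate with multiplicative row/column bias
rng = np.random.default_rng(0)
plate = (np.outer(rng.uniform(0.5, 2, 16), rng.uniform(0.5, 2, 24))
         * rng.lognormal(0, 0.1, (16, 24)))
clean = remove_mult_bias(plate)
```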
Quantile Mapping Bias correction for daily precipitation over Vietnam in a regional climate model
NASA Astrophysics Data System (ADS)
Trinh, L. T.; Matsumoto, J.; Ngo-Duc, T.
2017-12-01
In the past decades, regional climate models (RCMs) have developed significantly, allowing climate simulations to be conducted at higher resolution. However, RCMs often contain biases when compared with observations. Therefore, statistical correction methods are commonly employed to reduce or minimize the model biases. In this study, outputs of the Regional Climate Model (RegCM) version 4.3 driven by the CNRM-CM5 global products were evaluated with and without the Quantile Mapping (QM) bias correction method. The model domain covered the area from 90°E to 145°E and from 15°S to 40°N with a horizontal resolution of 25 km. The QM bias correction was implemented using the Vietnam Gridded precipitation dataset (VnGP) and the outputs of the RegCM historical run in the period 1986-1995, and then validated for the period 1996-2005. In terms of spatial correlation and intensity distributions, the QM method showed a significant improvement in rainfall compared with the uncorrected simulations. The improvements in both time and space were recognized in all seasons and all climatic sub-regions of Vietnam. Moreover, not only the rainfall amount but also extreme indices such as R10mm, R20mm, R50mm, CDD, CWD, R95pTOT and R99pTOT were much better reproduced after the correction. The results suggest that the QM correction method should be used in practice for projections of future precipitation over Vietnam.
Correction of stream quality trends for the effects of laboratory measurement bias
Alexander, Richard B.; Smith, Richard A.; Schwarz, Gregory E.
1993-01-01
We present a statistical model relating measurements of water quality to associated errors in laboratory methods. Estimation of the model allows us to correct trends in water quality for long-term and short-term variations in laboratory measurement errors. An illustration of the bias correction method for a large national set of stream water quality and quality assurance data shows that reductions in the bias of estimates of water quality trend slopes are achieved at the expense of increases in the variance of these estimates. Slight improvements occur in the precision of estimates of trend in bias by using correlative information on bias and water quality to estimate random variations in measurement bias. The results of this investigation stress the need for reliable, long-term quality assurance data and efficient statistical methods to assess the effects of measurement errors on the detection of water quality trends.
Bias correction for magnetic resonance images via joint entropy regularization.
Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang
2014-01-01
Due to imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms are proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruption and also tested on real MR data. The test results show that the proposed methods can effectively remove the bias field and deliver performance comparable to state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas
2018-03-01
The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here CDF-t has been applied over the period 1950-2099, combining Historical runs and climate change scenarios for six variables: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed, which are critical variables for agricultural purposes. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders, including simulated yields from a crop model of maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs that uses the WATCH Forcing Data as the reference dataset. The impact of the WFD, WFDEI, and also EWEMBI reference datasets has been examined in detail. It is shown that CDF-t is very effective at removing the biases and reducing the high inter-GCM scattering. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact in terms of simulated maize yields. Projections of future yields over West Africa are quite different, depending on the bias-correction method used. However, all these projections show a similar relative decreasing trend over the 21st century.
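The CDF-t method differs from plain quantile mapping in that it first estimates how the observed CDF itself changes between the calibration and projection periods. A hedged sketch, assuming simple empirical CDFs and ignoring the refinements of the operational implementation:

```python
import numpy as np

def cdf_t(obs_hist, gcm_hist, gcm_fut, n=2000):
    """Sketch of the CDF-transform (CDF-t, Michelangeli et al., 2009).
    The future 'observed' CDF is estimated as
        F_of(x) = F_oh( F_gh^{-1}( F_gf(x) ) )
    and future GCM values are then quantile-mapped onto it."""
    def ecdf(sample):
        s = np.sort(sample)
        return lambda x: np.searchsorted(s, x, side="right") / s.size
    F_oh, F_gf = ecdf(obs_hist), ecdf(gcm_fut)
    inv_gh = lambda p: np.quantile(gcm_hist, np.clip(p, 0.0, 1.0))
    lo = min(obs_hist.min(), gcm_fut.min())
    hi = max(obs_hist.max(), gcm_fut.max())
    grid = np.linspace(lo, hi, n)
    F_of = F_oh(inv_gh(F_gf(grid)))           # estimated future observed CDF
    idx = np.clip(np.searchsorted(F_of, F_gf(gcm_fut)), 0, n - 1)
    return grid[idx]                          # bias-corrected future values
```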
NASA Astrophysics Data System (ADS)
Iizumi, Toshichika; Takikawa, Hiroki; Hirabayashi, Yukiko; Hanasaki, Naota; Nishimori, Motoki
2017-08-01
The use of different bias-correction methods and global retrospective meteorological forcing data sets as the reference climatology in the bias correction of general circulation model (GCM) daily data is a known source of uncertainty in projected climate extremes and their impacts. Despite their importance, limited attention has been given to these uncertainty sources. We compare 27 projected temperature and precipitation indices over 22 regions of the world (including the global land area) in the near (2021-2060) and distant future (2061-2100), calculated using four Representative Concentration Pathways (RCPs), five GCMs, two bias-correction methods, and three reference forcing data sets. To widen the variety of forcing data sets, we developed a new forcing data set, S14FD, and incorporated it into this study. The results show that S14FD is more accurate than other forcing data sets in representing the observed temperature and precipitation extremes in recent decades (1961-2000 and 1979-2008). The use of different bias-correction methods and forcing data sets contributes more to the total uncertainty in the projected precipitation index values in both the near and distant future than the use of different GCMs and RCPs. However, GCM appears to be the most dominant uncertainty source for projected temperature index values in the near future, and RCP is the most dominant source in the distant future. Our findings encourage climate risk assessments, especially those related to precipitation extremes, to employ multiple bias-correction methods and forcing data sets in addition to using different GCMs and RCPs.
Fletcher, E.; Carmichael, O.; DeCarli, C.
2013-01-01
We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843
Effect of Malmquist bias on correlation studies with IRAS data base
NASA Technical Reports Server (NTRS)
Verter, Frances
1993-01-01
The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. Linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B), as well as ratios of these quantities. The linear correlations are corrected for Malmquist bias using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method introduced here, in which each galaxy observation is weighted by its sampling volume. The results of correlations and regressions among the sample change significantly in the anticipated sense: the corrected correlation confidences are lower and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias removes the nonlinear rise in luminosity that had caused some authors to hypothesize additional components of FIR emission.
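The inverse-sampling-volume weighting described above is straightforward to reproduce. A minimal sketch, assuming the maximum sampling volume Vmax of each galaxy has already been computed from the survey flux limit:

```python
import numpy as np

def vmax_weighted_corr(x, y, vmax):
    """Linear correlation with each galaxy weighted by the inverse of its
    sampling volume (1/Vmax), removing the preference of a flux-limited
    sample for intrinsically luminous objects."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(vmax, float)
    w /= w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (y - my) ** 2))
```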
HESS Opinions "Should we apply bias correction to global and regional climate model data?"
NASA Astrophysics Data System (ADS)
Ehret, U.; Zehe, E.; Wulfmeyer, V.; Warrach-Sagi, K.; Liebert, J.
2012-04-01
Despite considerable progress in recent years, output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem bias correction (BC), i.e. the correction of model output towards observations in a post-processing step for its subsequent application in climate change impact studies, has become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Bias correction largely neglects feedback mechanisms, and it is unclear whether bias correction methods are time-invariant under climate change conditions. Applying bias correction increases the agreement of Climate Model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this masks rather than reduces uncertainty, which may lead to avoidable premature judgements by end users and decision makers. We present a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction, and propose ways to cope with biased output of Circulation Models in the short term and to reduce the bias in the long term. The most promising strategy for improved future Global and Regional Circulation Model simulations is the increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating openly the entire uncertainty range associated with climate change predictions and hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological communities and end users of climate change impact studies.
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in the measurement of a risk factor will introduce a downward bias in the estimated association with a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies, with emphasis on the selection of individuals for a repeated measurement, the assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is continuous. We also describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model, assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. We also supply programs for estimating the number of individuals needed in the reliability study and for choosing its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
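The standard method-of-moments correction divides the attenuated slope by a reliability ratio estimated from the repeated measurements. A minimal sketch under the classical measurement error model (variable names are illustrative, not the authors' software):

```python
import numpy as np

def dilution_corrected_slope(x_main, y_main, x_rep1, x_rep2):
    """Method-of-moments correction of a simple regression slope for
    regression dilution, using duplicate risk-factor measurements from
    a reliability study (classical measurement error model assumed)."""
    b_obs = np.polyfit(x_main, y_main, 1)[0]            # attenuated slope
    error_var = np.var(x_rep1 - x_rep2, ddof=1) / 2.0   # within-person error
    total_var = np.var(np.concatenate([x_rep1, x_rep2]), ddof=1)
    reliability = max(1.0 - error_var / total_var, 1e-6)
    return b_obs / reliability
```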
Quezada, Amado D; García-Guerra, Armando; Escobar, Leticia
2016-06-01
The aim was to assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and was validated in a large population-based survey sample of Mexican preschool children. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.
Bias correction of satellite-based rainfall data
NASA Astrophysics Data System (ADS)
Bhattacharya, Biswa; Solomatine, Dimitri
2015-04-01
Limitations in hydro-meteorological data availability in many catchments restrict the possibility of reliable hydrological analyses, especially for near-real-time predictions. However, the variety of satellite-based and meteorological model products for rainfall provides new opportunities. Often, the accuracy of these rainfall products, when compared to rain gauge measurements, is not impressive. The systematic differences of these rainfall products from gauge observations can be partially compensated by adopting a bias (error) correction. Many such methods correct the satellite-based rainfall data by comparing their mean value to the mean value of rain gauge data. Refined approaches may first find a suitable time scale at which different data products are better comparable and then employ a bias correction at that time scale. More elegant methods use quantile-to-quantile bias correction, which, however, assumes that the available (often limited) sample size is sufficient for comparing probabilities of different rainfall products. Analysis of rainfall data and understanding of the process of its generation reveal that the bias in different rainfall data varies in space and time. The time aspect is sometimes taken into account by considering seasonality. In this research we have adopted a bias correction approach that takes into account the variation of rainfall in space and time. A clustering-based approach is employed in which every new data point (e.g. from the Tropical Rainfall Measuring Mission (TRMM)) is first assigned to a specific cluster of that data product; then, by identifying the corresponding cluster of gauge data, the bias correction specific to that cluster is applied. The presented approach considers the space-time variation of rainfall, and as a result the corrected data are more realistic.
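A hedged sketch of such a cluster-wise correction, using k-means as a stand-in for whatever clustering the authors employ; the choice of features (e.g., location, season, intensity) is an assumption for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_cluster_bias(sat, gauge, features, n_clusters=5):
    """Cluster-wise multiplicative bias factors: points are grouped by
    the chosen features and a gauge/satellite ratio is fit per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    lab = km.labels_
    factors = np.array([gauge[lab == k].sum() / max(sat[lab == k].sum(), 1e-9)
                        for k in range(n_clusters)])
    return km, factors

def apply_cluster_bias(km, factors, sat_new, features_new):
    """New satellite values are corrected with their cluster's factor."""
    return sat_new * factors[km.predict(features_new)]
```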
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Defrance, Dimitri; Sultan, Benjamin; Janicot, Serge; Vrac, Mathieu
2017-04-01
Different CMIP exercises show that simulations of current and future temperature and precipitation are complex, with a high degree of uncertainty. For example, the African monsoon system is not correctly simulated, and most of the CMIP5 models underestimate the precipitation. Global Climate Models (GCMs) therefore show significant systematic biases that require correction before their output can be used in impact studies. Several bias correction methods have been developed over the years, relying on increasingly complex statistical techniques. The aim of this work is to show the interest of the CDF-t (Cumulative Distribution Function transform; Michelangeli et al., 2009) method for reducing the biases of 29 CMIP5 GCMs over Africa and to assess the impact of bias-corrected data on crop yield predictions by the end of the 21st century. In this work, we apply CDF-t to daily data covering the period 1950 to 2099 (Historical and RCP8.5) and correct the climate variables (temperature, precipitation, solar radiation, wind) using the new daily database from the EU project WATer and global CHange (WATCH), available from 1979 to 2013, as reference data. The performance of the method is assessed in several cases. First, data are corrected based on different calibration periods and compared, on the one hand, with observations to estimate the sensitivity of the method to the calibration period and, on the other hand, with another bias correction method used in the ISIMIP project. We find that, whatever the calibration period used, CDF-t corrects the mean state of variables well and preserves their trends, as well as daily rainfall occurrence and intensity distributions. However, some differences appear when compared to the outputs obtained with the method used in ISIMIP, showing that the quality of the correction is strongly related to the reference data. Second, we validate the bias correction method with agronomic simulations (the SARRA-H model; Kouressy et al., 2008) by comparison with FAO crop yield estimates over West Africa. Impact simulations show that the crop model is sensitive to input data. They also show a decrease in crop yields by the end of this century. Michelangeli, P. A., Vrac, M., & Loukos, H. (2009). Probabilistic downscaling approaches: Application to wind cumulative distribution functions. Geophysical Research Letters, 36(11). Kouressy, M., Dingkuhn, M., Vaksmann, M., & Heinemann, A. B. (2008). Adaptation to diverse semi-arid environments of sorghum genotypes having different plant type and sensitivity to photoperiod. Agricultural and Forest Meteorology, http://dx.doi.org/10.1016/j.agrformet.2007.09.009
The Impact of Assimilation of GPM Clear Sky Radiance on HWRF Hurricane Track and Intensity Forecasts
NASA Astrophysics Data System (ADS)
Yu, C. L.; Pu, Z.
2016-12-01
The impact of GPM Microwave Imager (GMI) clear-sky radiances on hurricane forecasting is examined by ingesting GMI level 1C recalibrated brightness temperatures into the NCEP Gridpoint Statistical Interpolation (GSI)-based ensemble-variational hybrid data assimilation system for the operational Hurricane Weather Research and Forecasting (HWRF) system. The GMI clear-sky radiances are compared with Community Radiative Transfer Model (CRTM) simulated radiances to assess the quality of the radiance observations. The quality check indicates the presence of bias in various channels. A static bias correction scheme is used to correct the observational bias in the GMI clear-sky radiances, in which the bias correction coefficients for the GMI data are estimated by applying a regression method to a sufficiently large sample of data representative of the observational bias in the regions of concern. Forecast results with and without assimilation of GMI radiances are compared using hurricane cases from recent hurricane seasons (e.g., Hurricane Joaquin in 2015). Diagnosis of the data assimilation results shows that the bias correction coefficients obtained from the regression method can correct the inherent biases in the GMI radiance data, significantly reducing observational residuals. The removal of biases also allows more data to pass GSI quality control and hence to be assimilated into the model. Forecast results for Hurricane Joaquin demonstrate that the quality of the analysis from the data assimilation is sensitive to the bias correction, with positive impacts on the hurricane track forecast when systematic biases are removed from the radiance data.
NASA Astrophysics Data System (ADS)
Wang, Wenhui; Cao, Changyong; Ignatov, Alex; Li, Zhenglong; Wang, Likun; Zhang, Bin; Blonski, Slawomir; Li, Jun
2017-09-01
The Suomi NPP VIIRS thermal emissive bands (TEB) have been performing very well since data became available on January 20, 2012. The longwave infrared bands at 11 and 12 μm (M15 and M16) are primarily used for sea surface temperature (SST) retrievals. A long-standing anomaly has been observed during the quarterly warm-up-cool-down (WUCD) events: during such events the daytime SST product becomes anomalous, with a warm bias appearing as a spike on the order of 0.2 K in the SST time series. A previous study (Cao et al., 2017) suggested that the VIIRS TEB calibration anomaly during WUCD is due to a flawed theoretical assumption in the calibration equation and proposed an Ltrace method to address the issue. This paper complements that study and presents the operational implementation and validation of the Ltrace method for M15 and M16. The Ltrace method applies bias correction during WUCD only. It requires a simple code change and a one-time calibration parameter look-up-table update. The method was evaluated using collocated CrIS observations and the SST algorithm. Our results indicate that the method can effectively reduce the WUCD calibration anomaly in M15, with a residual bias of 0.02 K after the correction. It works less effectively for M16, with a residual bias of 0.04 K. The Ltrace method may over-correct WUCD calibration biases, especially for M16. However, the residual WUCD biases are small in both bands. Evaluation results using the SST algorithm show that the method can effectively remove the SST anomaly during WUCD events.
Bias correction of surface downwelling longwave and shortwave radiation for the EWEMBI dataset
NASA Astrophysics Data System (ADS)
Lange, Stefan
2018-05-01
Many meteorological forcing datasets include bias-corrected surface downwelling longwave and shortwave radiation (rlds and rsds). Methods used for such bias corrections range from multi-year monthly mean value scaling to quantile mapping at the daily timescale. An additional downscaling is necessary if the data to be corrected have a higher spatial resolution than the observational data used to determine the biases. This was the case when EartH2Observe (E2OBS; Calton et al., 2016) rlds and rsds were bias-corrected using more coarsely resolved Surface Radiation Budget (SRB; Stackhouse Jr. et al., 2011) data for the production of the meteorological forcing dataset EWEMBI (Lange, 2016). This article systematically compares various parametric quantile mapping methods designed specifically for this purpose, including those used for the production of EWEMBI rlds and rsds. The methods vary in the timescale at which they operate, in their way of accounting for physical upper radiation limits, and in their approach to bridging the spatial resolution gap between E2OBS and SRB. It is shown how temporal and spatial variability deflation related to bilinear interpolation and other deterministic downscaling approaches can be overcome by downscaling the target statistics of quantile mapping from the SRB to the E2OBS grid such that the sub-SRB-grid-scale spatial variability present in the original E2OBS data is retained. Cross validations at the daily and monthly timescales reveal that it is worthwhile to take empirical estimates of physical upper limits into account when adjusting either radiation component and that, overall, bias correction at the daily timescale is more effective than bias correction at the monthly timescale if sampling errors are taken into account.
A Dynamical Downscaling Approach with GCM Bias Corrections and Spectral Nudging
NASA Astrophysics Data System (ADS)
Xu, Z.; Yang, Z.
2013-12-01
To reduce the biases in regional climate downscaling simulations, a dynamical downscaling approach with GCM bias corrections and spectral nudging is developed and assessed over North America. Regional climate simulations are performed with the Weather Research and Forecasting (WRF) model embedded in the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). To reduce the GCM biases, the GCM climatological means and the variances of interannual variations are adjusted based on the National Centers for Environmental Prediction-NCAR global reanalysis products (NNRP) before using them to drive WRF, as in our previous method. In this study, we further introduce spectral nudging to reduce the RCM-based biases. Two sets of WRF experiments are performed, with and without spectral nudging. All WRF experiments are identical except that the initial and lateral boundary conditions are derived from the NNRP, the original GCM output, and the bias-corrected GCM output, respectively. The GCM-driven RCM simulations with bias corrections and spectral nudging (IDDng) are compared with those without spectral nudging (IDD) and with North American Regional Reanalysis (NARR) data to assess the additional reduction in RCM biases relative to the IDD approach. The results show that spectral nudging introduces the effect of GCM bias correction into the RCM domain, thereby minimizing the climate drift resulting from the RCM biases. The GCM bias corrections and spectral nudging significantly improve the downscaled mean climate and extreme temperature simulations. Our results suggest that both GCM bias correction and spectral nudging are necessary to reduce the error of the downscaled climate; either alone does not guarantee a better downscaling simulation. The new dynamical downscaling method can be applied to regional projections of future climate or downscaling of GCM sensitivity simulations.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-01
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method proves to be the better method for quadrature correction, improving the Angle Random Walk (ARW) value from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455
Ward, Zachary J.; Long, Michael W.; Resch, Stephen C.; Gortmaker, Steven L.; Cradock, Angie L.; Giles, Catherine; Hsiao, Amber; Wang, Y. Claire
2016-01-01
Background: State-level estimates from the Centers for Disease Control and Prevention (CDC) underestimate the obesity epidemic because they use self-reported height and weight. We describe a novel bias-correction method and produce corrected state-level estimates of obesity and severe obesity. Methods: Using non-parametric statistical matching, we adjusted self-reported data from the Behavioral Risk Factor Surveillance System (BRFSS) 2013 (n = 386,795) using measured data from the National Health and Nutrition Examination Survey (NHANES) (n = 16,924). We validated our national estimates against NHANES and estimated bias-corrected state-specific prevalence of obesity (BMI≥30) and severe obesity (BMI≥35). We compared these results with previous adjustment methods. Results: Compared to NHANES, self-reported BRFSS data underestimated the national prevalence of obesity by 16% (28.67% vs 34.01%) and severe obesity by 23% (11.03% vs 14.26%). Our method was not significantly different from NHANES for obesity or severe obesity, while previous methods underestimated both. Only four states had a corrected obesity prevalence below 30%, with four exceeding 40%; in contrast, most states were below 30% in CDC maps. Conclusions: Twelve million adults with obesity (including 6.7 million with severe obesity) were misclassified by CDC state-level estimates. Previous bias-correction methods also resulted in underestimates. Accurate state-level estimates are necessary to plan for resources to address the obesity epidemic. PMID:26954566
The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.
Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin
2015-11-01
We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of true intensity and bias field and a simple thresholding is used in the second stage for segmentation. Different from most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with state-of-the-art approaches and well-known brain software tools, our model is fast, accurate, and robust with respect to initialization.
NASA Astrophysics Data System (ADS)
Hakala, Kirsti; Addor, Nans; Seibert, Jan
2017-04-01
Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at 0.11° (about 12.5 km) resolution from the EURO-CORDEX project, the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations downscaled from global climate models (GCMs) of the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as the objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell optimization (GAP) method, which evolves parameter sets by selecting and recombining high-performing sets. Once HBV is calibrated, we perform a quantitative comparison of the influence of biases inherited from climate model simulations and biases stemming from the hydrological model. The evaluation is conducted over two time periods: (i) 1980-2009, to characterize the simulation realism under the current climate, and (ii) 2070-2099, to identify the magnitude of the projected change of streamflow under the climate scenarios RCP4.5 and RCP8.5. We utilize two techniques for correcting biases in the climate model output: quantile mapping and a new method, frequency bias correction (FBC). The FBC method matches the frequencies between observed and GCM-RCM data; in this way, it can correct across all time scales, a known limitation of quantile mapping. A novel approach for the evaluation of the climate simulations and bias correction methods was then applied. Streamflow can be thought of as the "great integrator" of uncertainties: the ability, or lack thereof, to correctly simulate streamflow is a way to assess the realism of the bias-corrected climate simulations. Long-term monthly means as well as high- and low-flow metrics are used to evaluate the realism of the simulations under the current climate and to gauge the impacts of climate change on streamflow. Preliminary results show that under the present climate, calibration of the hydrological model contributes a much smaller band of uncertainty in the modeling chain than the bias correction of the GCM-RCMs. Therefore, for future time periods, we expect the bias correction of climate model data to have a greater influence on projected changes in streamflow than the calibration of the hydrological model.
A novel method for correcting scanline-observational bias of discontinuity orientation
Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong
2016-01-01
Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249
Evaluation of Bias Correction Method for Satellite-Based Rainfall Data
Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter
2016-01-01
With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high-resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily to match in-situ observations for the period 2003-2010. The study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for application of bias correction, and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. The accuracy of cumulative rainfall depth is assessed by the Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. The reliability of the interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to produce bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). The results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach. PMID:27314363
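The sequential-window bias factor reduces to the ratio of accumulated gauge to accumulated satellite rainfall in each non-overlapping block of days. A minimal sketch for the 7-day SW scheme (array names are illustrative):

```python
import numpy as np

def sw_bias_factors(gauge, sat, window=7, eps=1e-9):
    """Multiplicative bias factors over non-overlapping (sequential)
    windows: accumulated gauge over accumulated satellite rainfall."""
    gauge, sat = np.asarray(gauge, float), np.asarray(sat, float)
    n = len(gauge) // window * window
    g = gauge[:n].reshape(-1, window).sum(axis=1)
    s = sat[:n].reshape(-1, window).sum(axis=1)
    return np.repeat(g / np.maximum(s, eps), window)  # daily factor series

# usage: corrected = sat[:len(f)] * f, with f = sw_bias_factors(gauge, sat)
```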
On the Performance of T2∗ Correction Methods for Quantification of Hepatic Fat Content
Reeder, Scott B.; Bice, Emily K.; Yu, Huanzhou; Hernando, Diego; Pineda, Angel R.
2014-01-01
Nonalcoholic fatty liver disease is the most prevalent chronic liver disease in Western societies. MRI can quantify liver fat, the hallmark feature of nonalcoholic fatty liver disease, so long as multiple confounding factors including T2∗ decay are addressed. Recently developed MRI methods that correct for T2∗ to improve the accuracy of fat quantification either assume a common T2∗ (single-T2∗) for better stability and noise performance or independently estimate the T2∗ for water and fat (dual-T2∗) for reduced bias, but with a noise performance penalty. In this study, the tradeoff between bias and variance for different T2∗ correction methods is analyzed using the Cramér-Rao bound analysis for biased estimators and is validated using Monte Carlo experiments. A noise performance metric for estimation of fat fraction is proposed. Cramér-Rao bound analysis for biased estimators was used to compute the metric at different echo combinations. Optimization was performed for six echoes and typical T2∗ values. This analysis showed that all methods have better noise performance with very short first echo times and echo spacing of ∼π/2 for single-T2∗ correction, and ∼2π/3 for dual-T2∗ correction. Interestingly, when an echo spacing and first echo shift of ∼π/2 are used, methods without T2∗ correction have less than 5% bias in the estimates of fat fraction. PMID:21661045
He, Hua; McDermott, Michael P.
2012-01-01
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
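A hedged sketch of the stratification idea for one test-result arm: the verification probability is modeled from covariates, verified subjects are grouped into propensity strata, and the disease rate in each stratum is weighted by the stratum's share of all subjects. This illustrates the general approach, not the authors' exact estimator; all inputs are assumed to be NumPy arrays, and `disease` is only read where `verified` is 1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def corrected_disease_rate(covars, verified, disease, n_strata=5):
    """P(D=1 | T=t) for one test-result arm, reconstructed by stratifying
    verified subjects on the propensity score for verification."""
    lr = LogisticRegression(max_iter=1000).fit(covars, verified)
    ps = lr.predict_proba(covars)[:, 1]
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, ps) - 1, 0, n_strata - 1)
    rate = 0.0
    for k in range(n_strata):
        in_k = strata == k
        ver_k = in_k & (verified == 1)
        if ver_k.any():
            rate += in_k.mean() * disease[ver_k].mean()
    return rate

# Sensitivity and specificity then follow from Bayes' rule, e.g.
# Se = p1*pT / (p1*pT + p0*(1-pT)), with p1 = P(D=1|T=1), p0 = P(D=1|T=0),
# and pT = P(T=1) estimated from the full sample.
```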
Model-Based Control of Observer Bias for the Analysis of Presence-Only Data in Ecology
Warton, David I.; Renner, Ian W.; Ramp, Daniel
2013-01-01
Presence-only data, where information is available concerning species presence but not species absence, are subject to bias due to observers being more likely to visit and record sightings at some locations than others (hereafter “observer bias”). In this paper, we describe and evaluate a model-based approach to accounting for observer bias directly – by modelling presence locations as a function of known observer bias variables (such as accessibility variables) in addition to environmental variables, then conditioning on a common level of bias to make predictions of species occurrence free of such observer bias. We implement this idea using point process models with a LASSO penalty, a new presence-only method related to maximum entropy modelling, that implicitly addresses the “pseudo-absence problem” of where to locate pseudo-absences (and how many). The proposed method of bias-correction is evaluated using systematically collected presence/absence data for 62 plant species endemic to the Blue Mountains near Sydney, Australia. It is shown that modelling and controlling for observer bias significantly improves the accuracy of predictions made using presence-only data, and usually improves predictions as compared to pseudo-absence or “inventory” methods of bias correction based on absences from non-target species. Future research will consider the potential for improving the proposed bias-correction approach by estimating the observer bias simultaneously across multiple species. PMID:24260167
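The conditioning idea can be illustrated with an ordinary penalized Poisson regression standing in for the paper's penalized point process model (the two are closely related): accessibility covariates enter the fit but are fixed at a common level at prediction time. All names below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

def debiased_intensity(X_env, X_access, counts, X_env_pred):
    """Fit occurrence intensity on environmental plus accessibility
    covariates, then predict with accessibility fixed at a common level.
    A penalized Poisson regression (L2 here, LASSO in the paper) stands
    in for the penalized point process model."""
    model = PoissonRegressor(alpha=0.1)
    model.fit(np.column_stack([X_env, X_access]), counts)
    access_fixed = np.tile(X_access.mean(axis=0), (len(X_env_pred), 1))
    return model.predict(np.column_stack([X_env_pred, access_fixed]))
```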
NASA Astrophysics Data System (ADS)
Hu, Taiyang; Lv, Rongchuan; Jin, Xu; Li, Hao; Chen, Wenxin
2018-01-01
The nonlinear bias analysis and correction of receiving channels in the Chinese FY-3C meteorological satellite Microwave Temperature Sounder (MWTS) is a key technology for the assimilation of satellite radiance data. The thermal-vacuum chamber calibration data acquired from the MWTS can be analyzed to evaluate the instrument performance, including radiometric temperature sensitivity, channel nonlinearity and calibration accuracy. In particular, the nonlinearity parameters due to imperfect square-law detectors are calculated from the calibration data and further used to correct the nonlinear bias contributions of the microwave receiving channels. Based upon the operational principles and thermal-vacuum chamber calibration procedures of the MWTS, this paper focuses on nonlinear bias analysis and correction methods for improving the calibration accuracy of this important instrument onboard the FY-3C meteorological satellite, from the perspective of theoretical and experimental studies. Furthermore, a series of original results are presented to demonstrate the feasibility and significance of the methods.
Extracting muon momentum scale corrections for hadron collider experiments
NASA Astrophysics Data System (ADS)
Bodek, A.; van Dyne, A.; Han, J. Y.; Sakumoto, W.; Strelnikov, A.
2012-10-01
We present a simple method for the extraction of corrections for bias in the measurement of the momentum of muons in hadron collider experiments. Such bias can originate from a variety of sources such as detector misalignment, software reconstruction bias, and uncertainties in the magnetic field. The two-step method uses the mean ⟨1/p_T^μ⟩ for muons from Z → μμ decays to determine the momentum scale corrections in bins of charge, η and ϕ. In the second step, the corrections are tuned using the average invariant mass ⟨M_μμ^Z⟩ of Z → μμ events in the same bins of charge, η and ϕ. The forward-backward asymmetry of Z/γ* → μμ pairs as a function of μ+μ− mass, and the ϕ distribution of Z bosons in the Collins-Soper frame, are used to ascertain that the corrections remove the bias in the momentum measurements for positively versus negatively charged muons. By taking the sum and difference of the momentum scale corrections for positive and negative muons, we isolate additive corrections to 1/p_T^μ that may originate from misalignments, and multiplicative corrections that may originate from mis-modeling of the magnetic field (∫B·dL). This method has recently been used in the CDF experiment at Fermilab and in the CMS experiment at the Large Hadron Collider at CERN.
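The sum/difference step amounts to splitting the per-bin corrections into charge-antisymmetric and charge-symmetric parts. A tiny numerical illustration with invented correction values:

```python
import numpy as np

# Hypothetical per-(eta, phi)-bin corrections to 1/pT extracted from
# Z -> mumu calibration, one array per muon charge (invented numbers).
c_plus = np.array([+1.2e-4, -0.8e-4])   # GeV^-1
c_minus = np.array([-0.9e-4, -0.7e-4])  # GeV^-1

# Misalignment-like effects shift 1/pT with opposite sign for the two
# charges (charge-antisymmetric), while a B-field scale error affects
# both charges alike (charge-symmetric):
additive = 0.5 * (c_plus - c_minus)        # alignment-type corrections
multiplicative = 0.5 * (c_plus + c_minus)  # B-field-type corrections
```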
Adjusting for partial verification or workup bias in meta-analyses of diagnostic accuracy studies.
de Groot, Joris A H; Dendukuri, Nandini; Janssen, Kristel J M; Reitsma, Johannes B; Brophy, James; Joseph, Lawrence; Bossuyt, Patrick M M; Moons, Karel G M
2012-04-15
A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2017-07-01
Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine-scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying
2011-01-01
Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step in the quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular tissue (dense tissue). A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy-C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and the generated bias field is then smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to the N3 and FCM alone corrected images and to images corrected with another method, coherent local intensity clustering (CLIC). The segmentation quality based on the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them. In the first reading session, the radiologist found (N3+FCM > N3 > FCM) ranking in 17 breasts, (N3+FCM > N3 = FCM) in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3+FCM > FCM) in 2 breasts. The results of the second reading session were similar. The performance in each pairwise Wilcoxon signed-rank test is significant, showing N3+FCM superior to both N3 and FCM, and N3 superior to FCM. The performance of the new N3+FCM algorithm was comparable to that of CLIC, showing equivalent quality in 57/60 breasts. Choosing an appropriate bias field correction method is a very important preprocessing step to allow an accurate segmentation of fibroglandular tissue based on breast MRI for quantitative measurement of breast density. The proposed N3+FCM algorithm and CLIC both yield satisfactory results.
Zhu, Qiaohao; Carriere, K C
2016-01-01
Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on the detection and correction of publication bias in meta-analysis focuses on funnel-plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations, in which publication bias may be induced by: (1) small effect size or (2) large p-value. We consider both fixed- and random-effects models and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
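Under the first scenario (publication only when the effect exceeds a cutoff), the fixed-effects version of such an estimator is a maximum likelihood fit of a normal distribution truncated from below. A minimal sketch, not the authors' exact formulation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def truncated_normal_mle(effects, cutoff):
    """ML fit of a normal truncated from below at `cutoff`, giving the
    underlying mean effect and the estimated share of unpublished studies."""
    effects = np.asarray(effects, float)

    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        # log-density of a normal truncated from below at `cutoff`
        return -(norm.logpdf(effects, mu, sigma)
                 - norm.logsf(cutoff, mu, sigma)).sum()

    res = minimize(nll, x0=[effects.mean(), np.log(effects.std())])
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    severity = norm.cdf(cutoff, mu_hat, sigma_hat)  # truncation proportion
    return mu_hat, sigma_hat, severity
```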
Measurement of the $B^-$ lifetime using a simulation free approach for trigger bias correction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaltonen, T.; /Helsinki Inst. of Phys.; Adelman, J.
2010-04-01
The collection of a large number of B hadron decays to hadronic final states at the CDF II detector is possible due to the presence of a trigger that selects events based on track impact parameters. However, the nature of the selection requirements of the trigger introduces a large bias in the observed proper decay time distribution. A lifetime measurement must correct for this bias, and the conventional approach has been to use a Monte Carlo simulation. The leading sources of systematic uncertainty in the conventional approach are due to differences between the data and the Monte Carlo simulation. In this paper they present an analytic method for bias correction without using simulation, thereby removing any uncertainty between data and simulation. This method is presented in the form of a measurement of the lifetime of the B^- using the mode B^- → D^0 π^-. The B^- lifetime is measured as τ(B^-) = 1.663 ± 0.023 ± 0.015 ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty in comparison to methods that use simulation to correct for the trigger bias.
NASA Astrophysics Data System (ADS)
Inoue, Makoto; Morino, Isamu; Uchino, Osamu; Nakatsuru, Takahiro; Yoshida, Yukio; Yokota, Tatsuya; Wunch, Debra; Wennberg, Paul O.; Roehl, Coleen M.; Griffith, David W. T.; Velazco, Voltaire A.; Deutscher, Nicholas M.; Warneke, Thorsten; Notholt, Justus; Robinson, John; Sherlock, Vanessa; Hase, Frank; Blumenstock, Thomas; Rettinger, Markus; Sussmann, Ralf; Kyrö, Esko; Kivi, Rigel; Shiomi, Kei; Kawakami, Shuji; De Mazière, Martine; Arnold, Sabrina G.; Feist, Dietrich G.; Barrow, Erica A.; Barney, James; Dubey, Manvendra; Schneider, Matthias; Iraci, Laura T.; Podolske, James R.; Hillyard, Patrick W.; Machida, Toshinobu; Sawa, Yousuke; Tsuboi, Kazuhiro; Matsueda, Hidekazu; Sweeney, Colm; Tans, Pieter P.; Andrews, Arlyn E.; Biraud, Sebastien C.; Fukuyama, Yukio; Pittman, Jasna V.; Kort, Eric A.; Tanaka, Tomoaki
2016-08-01
We describe a method for removing systematic biases of column-averaged dry air mole fractions of CO2 (XCO2) and CH4 (XCH4) derived from short-wavelength infrared (SWIR) spectra of the Greenhouse gases Observing SATellite (GOSAT). We conduct correlation analyses between the GOSAT biases and simultaneously retrieved auxiliary parameters. We use these correlations to bias correct the GOSAT data, removing these spurious correlations. Data from the Total Carbon Column Observing Network (TCCON) were used as reference values for this regression analysis. To evaluate the effectiveness of this correction method, the uncorrected/corrected GOSAT data were compared to independent XCO2 and XCH4 data derived from aircraft measurements taken for the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project, the National Oceanic and Atmospheric Administration (NOAA), the US Department of Energy (DOE), the National Institute for Environmental Studies (NIES), the Japan Meteorological Agency (JMA), the HIAPER Pole-to-Pole observations (HIPPO) program, and the GOSAT validation aircraft observation campaign over Japan. These comparisons demonstrate that the empirically derived bias correction improves the agreement between GOSAT XCO2/XCH4 and the aircraft data. Finally, we present spatial distributions and temporal variations of the derived GOSAT biases.
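The regression step lends itself to a compact sketch: at TCCON co-location points, regress the GOSAT-minus-reference differences on the simultaneously retrieved auxiliary parameters, then subtract the fitted bias everywhere. This is an illustrative reading of the approach, with hypothetical variable names:

```python
import numpy as np

def fit_bias_model(x_gosat, x_ref, aux):
    """Regress the GOSAT-minus-reference difference (at co-location points)
    on auxiliary retrieval parameters (n_samples x n_params)."""
    bias = x_gosat - x_ref
    A = np.column_stack([np.ones(len(aux)), aux])   # intercept + parameters
    coef, *_ = np.linalg.lstsq(A, bias, rcond=None)
    return coef

def apply_bias_correction(x_gosat, aux, coef):
    """Remove the fitted bias from any GOSAT retrieval."""
    A = np.column_stack([np.ones(len(aux)), aux])
    return x_gosat - A @ coef
```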
Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island
NASA Astrophysics Data System (ADS)
Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.
2018-04-01
Rainfall is an element of climate that strongly influences the agricultural sector. Rainfall pattern and distribution largely determine the sustainability of agricultural activities. Therefore, information on rainfall is very useful for the agriculture sector and for farmers in anticipating possible extreme events, which often cause failures of agricultural production. This research aims to identify the biases in seasonal rainfall forecast products from ECMWF (European Centre for Medium-Range Weather Forecasts) and to build a transfer function that corrects the distribution biases, yielding a new prediction model, using a quantile mapping approach. We apply this approach to the case of Bali Island and find that bias correction of the systematic biases of the model gives better results: the corrected prediction model outperforms the uncorrected forecast. We also find that, in general, the bias correction approach performs better during the rainy season than in the dry season.
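A minimal empirical quantile mapping sketch, assuming paired model and observed climatologies over a common training period (refinements such as wet-day frequency corrections are omitted, and the function name is ours): each forecast value is looked up at its quantile in the model climatology and replaced by the observed value at the same quantile.

```python
import numpy as np

def quantile_map(forecast, model_clim, obs_clim):
    """Empirical quantile mapping: map each forecast value to the observed
    value at the same climatological quantile."""
    model_sorted = np.sort(model_clim)
    obs_sorted = np.sort(obs_clim)
    # quantile of each forecast value within the model climatology
    q = np.searchsorted(model_sorted, forecast) / len(model_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_sorted, q)
```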
Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita
2009-12-01
Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean=-0.0044, standard error=0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared to both the MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. 2009 Wiley-Liss, Inc.
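One plausible rendering of the GPS idea, in the spirit of classic propensity-score covariate adjustment (the authors' exact formulation may differ, and all names here are hypothetical): model the probability of carrying the risk genotype from ancestry-informative markers plus non-genetic covariates, then include that score in the disease model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
import statsmodels.api as sm

def gps_adjusted_association(snp, disease, markers, nongenetic):
    """Estimate a SNP-disease log-odds ratio adjusted for a propensity
    score built from genetic markers and non-genetic covariates."""
    X = np.column_stack([markers, nongenetic])
    # propensity of carrying the variant given genetic + non-genetic factors
    gps = LogisticRegression(max_iter=1000).fit(X, snp).predict_proba(X)[:, 1]
    design = sm.add_constant(np.column_stack([snp, gps]))
    fit = sm.Logit(disease, design).fit(disp=0)
    return fit.params[1]   # coefficient of the SNP, GPS-adjusted
```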
Characterizing bias correction uncertainty in wheat yield predictions
NASA Astrophysics Data System (ADS)
Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam
2017-04-01
Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex Global and Regional Climate Models, which are used as input to crop impact models. Yield predictions from these top-down approaches can have high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too uncertain for their intended use in practical adaptation options. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps that are used in impact studies to adjust climate model simulations to be more realistic when compared to observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input into impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from Regional Climate Models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions, together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping), as input to a statistical crop model for wheat, a staple European food crop. The objective of our work is to compare the resulting simulation-driven hindcasted wheat yields to climate observation-driven wheat yield hindcasts from the UK and Germany, in order to determine the ranges of yield uncertainty that result from different climate model simulation inputs and bias correction methods. We simulate wheat yields using a General Linear Model that includes the effects of seasonal maximum temperatures and precipitation, since wheat is sensitive to heat stress during important developmental stages. We use the same statistical model to predict future wheat yields using the recently available bias-corrected simulations of EURO-CORDEX-Adjust. While statistical models are often criticized for their lack of complexity, an advantage is that we are able to isolate the effect of the choice of climate model, resolution or bias correction method on yield. Initial results using both past and future bias-corrected climate simulations with a process-based model will also be presented. Through these methods, we make recommendations for preparing climate model output for crop models.
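Two of the simpler corrections named above have near one-line forms. For example, variance scaling (commonly applied to temperature) matches the mean and standard deviation of the historical simulation to observations and applies the same transform to the future run; a minimal sketch with hypothetical names:

```python
import numpy as np

def variance_scale(sim_future, sim_hist, obs_hist):
    """Variance scaling: align mean and standard deviation of the
    simulation with observations over the calibration period, then
    apply the identical transform to the future series."""
    scale = obs_hist.std() / sim_hist.std()
    return obs_hist.mean() + scale * (sim_future - sim_hist.mean())
```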
Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir
2018-06-01
Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
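For the plain additive row/column model that these interaction-aware models generalize, a Tukey-style median polish is the usual minimal sketch (illustrative only; AssayCorrector's actual procedure first detects the bias type and model):

```python
import numpy as np

def median_polish(plate, n_iter=10):
    """Remove additive row/column bias from a multiwell-plate matrix by
    iteratively sweeping out row and column medians. Simple additive
    model only; row-column interaction terms are not modeled."""
    residual = plate.astype(float).copy()
    for _ in range(n_iter):
        residual -= np.median(residual, axis=1, keepdims=True)  # row effects
        residual -= np.median(residual, axis=0, keepdims=True)  # column effects
    return residual + np.median(plate)   # re-center on the plate median
```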
Lubow, Bruce C; Ransom, Jason I
2016-01-01
Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation between was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs.
A New Online Calibration Method Based on Lord's Bias-Correction.
He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei
2017-09-01
Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats person-parameter estimates θ̂ (obtained by maximum likelihood estimation [MLE]) as their true values θ; thus the deviation of the estimates from the true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂ which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and that MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.
NASA Astrophysics Data System (ADS)
Mehrotra, Rajeshwar; Sharma, Ashish
2012-12-01
The quality of the absolute estimates of general circulation models (GCMs) calls into question the direct use of GCM outputs for climate change impact assessment studies, particularly at regional scales. Statistical correction of GCM output is often necessary when significant systematic biases occur between the modeled output and observations. A common procedure is to correct the GCM output by removing the systematic biases in low-order moments relative to observations or to reanalysis data at daily, monthly, or seasonal timescales. In this paper, we present an extension of a recently published nested bias correction (NBC) technique to correct for the low- as well as higher-order moment biases in the GCM-derived variables across multiple selected timescales. The proposed recursive nested bias correction (RNBC) approach offers an improved basis for applying bias correction at multiple timescales over the original NBC procedure. The method ensures that the bias-corrected series exhibits improvements that are consistently spread over all of the timescales considered. Different variations of the approach, from the standard NBC to the more complex recursive alternatives, are tested to assess their impacts on a range of GCM-simulated atmospheric variables of interest in downscaling applications related to hydrology and water resources. Results of the study suggest that RNBC with three to five iterations is the most effective in removing distributional and persistence-related biases across the timescales considered.
Efficient bias correction for magnetic resonance image denoising.
Mukherjee, Partha Sarathi; Qiu, Peihua
2013-05-30
Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
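For context, the conventional even-moment correction that this paper refines is compact: for a Rician magnitude M with underlying intensity A and noise level σ, E[M²] = A² + 2σ², so the bias can be subtracted in the squared domain. A minimal sketch of that baseline (the authors' regression and Monte Carlo based formula is reported to be more effective):

```python
import numpy as np

def rician_bias_correct(denoised, sigma):
    """Even-moment Rician bias correction: subtract the 2*sigma^2 noise
    floor in the squared domain and clip negative values to zero."""
    return np.sqrt(np.maximum(denoised**2 - 2.0 * sigma**2, 0.0))
```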
Estimation of satellite position, clock and phase bias corrections
NASA Astrophysics Data System (ADS)
Henkel, Patrick; Psychas, Dimitrios; Günther, Christoph; Hugentobler, Urs
2018-05-01
Precise point positioning with integer ambiguity resolution requires precise knowledge of satellite position, clock and phase bias corrections. In this paper, a method for the estimation of these parameters with a global network of reference stations is presented. The method processes uncombined and undifferenced measurements of an arbitrary number of frequencies such that the obtained satellite position, clock and bias corrections can be used for any type of differenced and/or combined measurements. We perform a clustering of reference stations. The clustering enables a common satellite visibility within each cluster and an efficient fixing of the double difference ambiguities within each cluster. Additionally, the double difference ambiguities between the reference stations of different clusters are fixed. We use an integer decorrelation for ambiguity fixing in dense global networks. The performance of the proposed method is analysed with both simulated Galileo measurements on E1 and E5a and real GPS measurements of the IGS network. We defined 16 clusters and obtained satellite position, clock and phase bias corrections with a precision of better than 2 cm.
Adaptable gene-specific dye bias correction for two-channel DNA microarrays.
Margaritis, Thanasis; Lijnzaad, Philip; van Leenen, Dik; Bouwmeester, Diane; Kemmeren, Patrick; van Hooff, Sander R; Holstege, Frank C P
2009-01-01
DNA microarray technology is a powerful tool for monitoring gene expression or for finding the location of DNA-bound proteins. DNA microarrays can suffer from gene-specific dye bias (GSDB), causing some probes to be affected more by the dye than by the sample. This results in large measurement errors, which vary considerably for different probes and also across different hybridizations. GSDB is not corrected by conventional normalization and has been difficult to address systematically because of its variance. We show that GSDB is influenced by label incorporation efficiency, explaining the variation of GSDB across different hybridizations. A correction method (Gene- And Slide-Specific Correction, GASSCO) is presented, whereby sequence-specific corrections are modulated by the overall bias of individual hybridizations. GASSCO outperforms earlier methods and works well on a variety of publicly available datasets covering a range of platforms, organisms and applications, including ChIP on chip. A sequence-based model is also presented, which predicts which probes will suffer most from GSDB, useful for microarray probe design and correction of individual hybridizations. Software implementing the method is publicly available.
Correcting the SAT's Ethnic and Social-Class Bias: A Method for Reestimating SAT Scores.
ERIC Educational Resources Information Center
Freedle, Roy O.
2003-01-01
A corrective scoring method, the Revised-Scholastic Achievement Test (R-SAT), addresses nonrandom ethnic test bias patterns found in the SAT. The R-SAT has been shown to reduce the mean-score difference between African-American and white test-takers by one-third, increase verbal scores by as much as 200-300 points for individuals, and benefit…
Image-guided regularization level set evolution for MR image segmentation and bias field correction.
Wang, Lingfeng; Pan, Chunhong
2014-01-01
Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images affected by intensity inhomogeneity. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well handled. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracy as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
Estimation and correction of visibility bias in aerial surveys of wintering ducks
Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.
2008-01-01
Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1-100 decoys), and interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36-42%, and associated standard errors increased 38-55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
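The weight adjustment described above can be sketched as a Horvitz-Thompson style estimate: inflate each observed group count by the mean counting bias from the decoy experiment (observers counted 78% of decoys on average) and divide by the modeled probability of detecting the group. Function and variable names here are hypothetical, and `detect_model` is assumed to be a fitted sklearn-style logistic model:

```python
import numpy as np

def corrected_abundance(counts, X, detect_model, count_bias=0.78):
    """Visibility-bias-corrected abundance from observed group counts.

    `X` holds the detection covariates for each group (cover type,
    group size, ...); `count_bias` is the mean fraction of individuals
    counted within detected groups."""
    p_detect = detect_model.predict_proba(X)[:, 1]
    return np.sum((counts / count_bias) / p_detect)
```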
Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao
2016-06-01
An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and to avoid it falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate in this study. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
High-resolution near real-time drought monitoring in South Asia
NASA Astrophysics Data System (ADS)
Aadhar, Saran; Mishra, Vimal
2017-10-01
Droughts in South Asia affect food and water security and pose challenges for millions of people. For policy-making, planning, and management of water resources at sub-basin or administrative levels, high-resolution datasets of precipitation and air temperature are required in near-real time. We develop high-resolution (0.05°) bias-corrected precipitation and temperature data that can be used to monitor near real-time drought conditions over South Asia. Moreover, the dataset can be used to monitor climatic extremes (heat and cold waves, dry and wet anomalies) in South Asia. A distribution mapping method was applied to correct bias in precipitation and air temperature, and it performed well compared to the other bias correction method considered, based on linear scaling. Bias-corrected precipitation and temperature data were used to estimate the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI) to assess historical and current drought conditions in South Asia. We evaluated drought severity and extent against satellite-based Normalized Difference Vegetation Index (NDVI) anomalies and the satellite-driven Drought Severity Index (DSI) at 0.05°. The bias-corrected high-resolution data can effectively capture observed drought conditions as shown by the satellite-based drought estimates. The high-resolution near real-time dataset can provide valuable information for decision-making at district and sub-basin levels.
Fully correcting the meteor speed distribution for radar observing biases
NASA Astrophysics Data System (ADS)
Moorhead, Althea V.; Brown, Peter G.; Campbell-Brown, Margaret D.; Heynen, Denis; Cooke, William J.
2017-09-01
Meteor radars such as the Canadian Meteor Orbit Radar (CMOR) have the ability to detect millions of meteors, making it possible to study the meteoroid environment in great detail. However, meteor radars also suffer from a number of detection biases; these biases must be fully corrected for in order to derive an accurate description of the meteoroid population. We present a bias correction method for patrol radars that accounts for the full form of ionization efficiency and mass distribution. This is an improvement over previous methods such as that of Taylor (1995), which requires power-law distributions for ionization efficiency and a single mass index. We apply this method to the meteor speed distribution observed by CMOR and find a significant enhancement of slow meteors compared to earlier treatments. However, when the data set is severely restricted to include only meteors with very small uncertainties in speed, the fraction of slow meteors is substantially reduced, indicating that speed uncertainties must be carefully handled.
A method to preserve trends in quantile mapping bias correction of climate modeled temperature
NASA Astrophysics Data System (ADS)
Grillakis, Manolis G.; Koutroulis, Aristeidis G.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.
2017-09-01
Bias correction of climate variables is a standard practice in climate change impact (CCI) studies. Various methodologies have been developed within the framework of quantile mapping. However, it is well known that quantile mapping may significantly modify the long-term statistics due to the time dependency of the temperature bias. Here, a method to overcome this issue without compromising the day-to-day correction statistics is presented. The methodology separates the modeled temperature signal into a normalized and a residual component relative to the modeled reference-period climatology, in order to adjust the biases only for the former and preserve the signal of the latter. The results show that this method allows for the preservation of the originally modeled long-term signal in the mean, the standard deviation, and higher and lower percentiles of temperature. To illustrate the improvements, the methodology is tested on daily time series obtained from five EURO-CORDEX regional climate models (RCMs).
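A heavily simplified sketch of the split-and-preserve logic, reusing the hypothetical `quantile_map` sketch given earlier: isolate a slowly varying component of the modeled series (here a crude running mean stands in for the long-term signal), quantile-map only the remaining day-to-day component, and add the slow component back unchanged. This illustrates the idea, not the paper's exact decomposition:

```python
import numpy as np

def trend_preserving_correction(model_future, model_ref, obs_ref, window=365):
    """Bias-correct only the fast component of the modeled series so that
    the slowly varying (long-term) signal passes through unchanged."""
    kernel = np.ones(window) / window
    slow = np.convolve(model_future, kernel, mode="same")  # long-term signal
    # recenter the fast component on the reference-period climatology
    fast = model_future - slow + np.mean(model_ref)
    corrected = quantile_map(fast, model_ref, obs_ref)
    # restore the preserved long-term departure from the reference mean
    return corrected + slow - np.mean(model_ref)
```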
Sequence-specific bias correction for RNA-seq data using recurrent neural networks.
Zhang, Yao-Zhong; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru
2017-01-25
The recent success of deep learning techniques in machine learning and artificial intelligence has stimulated a great deal of interest among bioinformaticians, who now wish to bring the power of deep learning to bear on a host of bioinformatics problems. Deep learning is ideally suited for biological problems that require automatic or hierarchical feature representation of biological data when prior knowledge is limited. In this work, we address the sequence-specific bias correction problem for RNA-seq data using recurrent neural networks (RNNs) to model nucleotide sequences without pre-determining sequence structures. The sequence-specific bias of a read is then calculated based on the sequence probabilities estimated by the RNNs and used in the estimation of gene abundance. We explore the application of two popular RNN recurrent units for this task and demonstrate that RNN-based approaches provide a flexible way to model nucleotide sequences without knowledge of predetermined sequence structures. Our experiments show that training an RNN-based nucleotide sequence model is efficient and that RNN-based bias correction methods compare well with the state-of-the-art sequence-specific bias correction method on the commonly used MAQC-III data set. RNNs provide an alternative and flexible way to calculate sequence-specific bias without explicitly pre-determining sequence structures.
Analysis and correction of gradient nonlinearity bias in ADC measurements
Malyarenko, Dariya I.; Ross, Brian D.; Chenevert, Thomas L.
2013-01-01
Purpose Gradient nonlinearity of MRI systems leads to spatially-dependent b-values and consequently high non-uniformity errors (10–20%) in ADC measurements over clinically relevant fields-of-view. This work seeks a practical correction procedure that effectively reduces observed ADC bias for media of arbitrary anisotropy in the fewest measurements. Methods All-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. Correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Results Spatial dependence of the nonlinearity correction terms accounts for the bulk (75–95%) of ADC bias for FA = 0.3–0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. Conclusions The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic media is achieved with non-lab-based diffusion gradients. PMID:23794533
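In the simplest isotropic reading of this norm approximation, the correction reduces to rescaling the measured ADC by the ratio of nominal to effective b-value at each voxel. The following is a sketch under that simplifying assumption, with `L` (the 3x3 gradient-nonlinearity tensor at the voxel) and all names being illustrative:

```python
import numpy as np

def correct_adc(adc_measured, L, g_hat, b_nominal):
    """Rescale a measured ADC for gradient nonlinearity at one voxel.

    The effective b-value scales with the squared Euclidean norm of the
    perturbed diffusion gradient L @ g_hat (g_hat is the unit gradient
    direction), so the ADC fitted under the nominal b-value is corrected
    by the inverse factor."""
    b_effective = b_nominal * np.linalg.norm(L @ g_hat) ** 2
    return adc_measured * b_nominal / b_effective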
Potential of bias correction for downscaling passive microwave and soil moisture data
USDA-ARS?s Scientific Manuscript database
Passive microwave satellites such as SMOS (Soil Moisture and Ocean Salinity) or SMAP (Soil Moisture Active Passive) observe brightness temperature (TB) and retrieve soil moisture at a spatial resolution greater than most hydrological processes. Bias correction is proposed as a simple method to disag...
Ciceri, E; Recchia, S; Dossi, C; Yang, L; Sturgeon, R E
2008-01-15
The development and validation of a method for the determination of mercury in sediments using a sector field inductively coupled plasma mass spectrometer (SF-ICP-MS) for detection is described. The utilization of isotope dilution (ID) calibration is shown to solve analytical problems related to matrix composition. Mass bias is corrected using an internal mass bias correction technique, validated against the traditional standard bracketing method. The overall analytical protocol is validated against NRCC PACS-2 marine sediment CRM. The estimated limit of detection is 12ng/g. The proposed procedure was applied to the analysis of a real sediment core sampled to a depth of 160m in Lake Como, where Hg concentrations ranged from 66 to 750ng/g.
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
Correction of bias in belt transect studies of immotile objects
Anderson, D.R.; Pospahala, R.S.
1970-01-01
Unless a correction is made, population estimates derived from a sample of belt transects will be biased if a fraction of the individuals on the sample transects is not counted. An approach useful for correcting this bias when sampling immotile populations using transects of a fixed width is presented. The method assumes that a searcher's ability to find objects near the center of the transect is nearly perfect. The method utilizes a mathematical equation, estimated from the data, to represent the searcher's inability to find all objects at increasing distances from the center of the transect. An example of the analysis of data, formation of the equation, and application is presented using waterfowl nesting data collected in Colorado.
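The fitted "searcher inability" equation is commonly given a half-normal form g(x) = exp(-x²/2s²), which satisfies g(0) = 1 and thus matches the assumption of near-perfect detection at the transect center. A sketch under that choice of curve (the original paper's exact functional form may differ, and the names are ours):

```python
import numpy as np
from scipy import optimize

def fit_half_normal(distances):
    """Fit a half-normal detection function to the perpendicular
    distances of detected objects; returns the scale parameter s."""
    def nll(log_s):
        s = np.exp(log_s)
        norm = s * np.sqrt(np.pi / 2.0)   # integral of g over [0, inf)
        return -np.sum(-distances**2 / (2 * s**2) - np.log(norm))
    res = optimize.minimize_scalar(nll)
    return np.exp(res.x)

def corrected_density(n_detected, total_length, s):
    """Objects per unit area: the effective strip half-width is the
    integral of the detection function."""
    esw = s * np.sqrt(np.pi / 2.0)
    return n_detected / (2.0 * esw * total_length)
```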
Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15
ERIC Educational Resources Information Center
Zhang, Jinming
2005-01-01
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
NASA Astrophysics Data System (ADS)
Li, Jingwan; Sharma, Ashish; Evans, Jason; Johnson, Fiona
2018-01-01
Addressing systematic biases in regional climate model simulations of extreme rainfall is a necessary first step before assessing changes in future rainfall extremes. Commonly used bias correction methods are designed to match statistics of the overall simulated rainfall with observations. This assumes that change in the mix of different types of extreme rainfall events (i.e. convective and non-convective) in a warmer climate is of little relevance to the estimation of overall change, an assumption that is not supported by empirical or physical evidence. This study proposes an alternative approach that accounts for the potential change of alternate rainfall types, characterized here by synoptic weather patterns (SPs) classified using self-organizing maps. The objective of this study is to evaluate the added influence of SPs on the bias correction, which is achieved by comparing the corrected distribution of future extreme rainfall with that obtained using conventional quantile mapping. A comprehensive synthetic experiment is first defined to investigate the conditions under which the additional information of SPs makes a significant difference to the bias correction. Using over 600,000 synthetic cases, statistically significant differences are found to be present in 46% of cases. This is followed by a case study over the Sydney region using a high-resolution run of the Weather Research and Forecasting (WRF) regional climate model, which indicates a small change in the proportions of the SPs and a statistically significant change in the extreme rainfall over the region, although the differences between the changes obtained from the two bias correction methods are not statistically significant.
Boundary point corrections for variable radius plots - simulation results
Margaret Penner; Sam Otukol
2000-01-01
The boundary plot problem is encountered when a forest inventory plot includes two or more forest conditions. Depending on the correction method used, the resulting estimates can be biased. The various correction alternatives are reviewed. No correction, area correction, half sweep, and toss-back methods are evaluated using simulation on an actual data set. Based on...
Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng
2018-03-05
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogonous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei
2013-02-01
Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.
Marcos, Raül; Llasat, Ma Carmen; Quintana-Seguí, Pere; Turco, Marco
2018-01-01
In this paper, we have compared different bias correction methodologies to assess whether they could be advantageous for improving the performance of a seasonal prediction model for volume anomalies in the Boadella reservoir (northwestern Mediterranean). The bias correction adjustments have been applied to precipitation and temperature from the European Centre for Medium-Range Weather Forecasts System 4 (S4). We have used three bias correction strategies: two linear (mean bias correction, BC, and linear regression, LR) and one non-linear (Model Output Statistics analogs, MOS-analog). The results have been compared with climatology and persistence. The volume-anomaly model is a previously computed Multiple Linear Regression that ingests precipitation, temperature and inflow anomaly data to simulate monthly volume anomalies. The potential utility for end-users has been assessed using economic value curve areas. We have studied the S4 hindcast period 1981-2010 for each month of the year and up to seven months ahead, considering an ensemble of 15 members. We have shown that the MOS-analog and LR bias corrections can improve the original S4. The application to volume anomalies points towards the possibility of introducing bias correction methods as a tool to improve water resource seasonal forecasts in an end-user context of climate services. In particular, the MOS-analog approach generally gives better results than the other approaches in late autumn and early winter. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Namikawa, Toshiya
We present here a new method for delensing B modes of the cosmic microwave background (CMB) using a lensing potential reconstructed from the same realization of the CMB polarization (CMB internal delensing). B-mode delensing is required to improve sensitivity to primary B modes generated by, e.g., the inflationary gravitational waves, axionlike particles, modified gravity, primordial magnetic fields, and topological defects such as cosmic strings. However, CMB internal delensing suffers from substantial biases due to correlations between the observed CMB maps to be delensed and the map used for reconstructing the lensing potential. Since the bias depends on realizations, we construct a realization-dependent (RD) estimator for correcting these biases by deriving a general optimal estimator for higher-order correlations. The RD method is less sensitive to simulation uncertainties. Compared to the previous ℓ-splitting method, we find that the RD method corrects the biases without substantial degradation of the delensing efficiency.
High-Resolution Near Real-Time Drought Monitoring in South Asia
NASA Astrophysics Data System (ADS)
Aadhar, S.; Mishra, V.
2017-12-01
Droughts in South Asia affect food and water security and pose challenges for millions of people. For policy-making, planning and management of water resources at the sub-basin or administrative levels, high-resolution datasets of precipitation and air temperature are required in near-real time. Here we develop a high-resolution (0.05 degree) bias-corrected precipitation and temperature dataset that can be used to monitor near real-time drought conditions over South Asia. Moreover, the dataset can be used to monitor climatic extremes (heat waves, cold waves, dry and wet anomalies) in South Asia. A distribution mapping method was applied to correct bias in precipitation and air temperature (maximum and minimum), and it performed well compared to the other bias correction method considered, based on linear scaling. Bias-corrected precipitation and temperature data were used to estimate the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI) to assess historical and current drought conditions in South Asia. We evaluated drought severity and extent against satellite-based Normalized Difference Vegetation Index (NDVI) anomalies and the satellite-driven Drought Severity Index (DSI) at 0.05°. We find that the bias-corrected high-resolution data can effectively capture observed drought conditions as shown by the satellite-based drought estimates. The high-resolution near real-time dataset can provide valuable information for decision-making at district and sub-basin levels.
Improved Correction of Misclassification Bias With Bootstrap Imputation.
van Walraven, Carl
2018-07-01
Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias and requires only code sensitivity and specificity, but it may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. The prevalence of severe renal failure in 86 patient strata and its association with 43 covariates were determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and their association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
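The QBA step has a closed form worth keeping in view, together with its failure mode: the corrected prevalence can leave [0, 1] when the observed prevalence is low relative to the false-positive rate, which is exactly how invalid results arise. A minimal sketch using the study's code accuracy:

```python
def qba_correct_prevalence(p_observed, sensitivity, specificity):
    """Classic quantitative bias analysis for misclassified binary status:
    p_true = (p_obs + Sp - 1) / (Se + Sp - 1). Returns None when the
    corrected value leaves [0, 1], the 'invalid result' failure mode
    noted in the study."""
    p_true = (p_observed + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return p_true if 0.0 <= p_true <= 1.0 else None

# Example with the study's code accuracy (Se = 0.713, Sp = 0.962):
# an observed code prevalence of 10% implies (0.10 + 0.962 - 1) / 0.675,
# i.e. a corrected prevalence of about 9.2%.
```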
Using a bias aware EnKF to account for unresolved structure in an unsaturated zone model
NASA Astrophysics Data System (ADS)
Erdal, D.; Neuweiler, I.; Wollschläger, U.
2014-01-01
When predicting flow in the unsaturated zone, any method for modeling the flow has to define how, and to what level, the subsurface structure is resolved. In this paper, we use the Ensemble Kalman Filter to assimilate local soil water content observations from both a synthetic layered lysimeter and a real field experiment in layered soil into an unsaturated water flow model. We investigate the use of colored-noise bias corrections to account for unresolved subsurface layering in a homogeneous model and compare this approach with a fully resolved model. In both models, we use a simplified model parameterization in the Ensemble Kalman Filter. The results show that the use of bias corrections can increase the predictive capability of a simplified homogeneous flow model if the bias corrections are applied to the model states. If correct knowledge of the layering structure is available, the fully resolved model performs best. However, if no, or erroneous, layering is used in the model, the use of a homogeneous model with bias corrections can be the better choice for modeling the behavior of the system.
Lee, Wen-Chung
2014-02-05
The randomized controlled study is the gold-standard research method in biomedicine. In contrast, the validity of a (nonrandomized) observational study is often questioned because of unknown/unmeasured factors, which may have confounding and/or effect-modifying potential. In this paper, the author proposes a perturbation test to detect the bias of unmeasured factors and a perturbation adjustment to correct for such bias. The proposed method circumvents the problem of measuring unknowns by collecting the perturbations of unmeasured factors instead. Specifically, a perturbation is a variable that is readily available (or can be measured easily) and is potentially associated, though perhaps only very weakly, with unmeasured factors. The author conducted extensive computer simulations to provide a proof of concept. Computer simulations show that, as the number of perturbation variables increases from data mining, the power of the perturbation test increased progressively, up to nearly 100%. In addition, after the perturbation adjustment, the bias decreased progressively, down to nearly 0%. The data-mining perturbation analysis described here is recommended for use in detecting and correcting the bias of unmeasured factors in observational studies.
Zhang, Ying; Alonzo, Todd A
2016-11-01
In diagnostic medicine, the volume under the receiver operating characteristic (ROC) surface (VUS) is a commonly used index to quantify the ability of a continuous diagnostic test to discriminate between three disease states. In practice, verification of the true disease status may be performed only for a subset of subjects under study since the verification procedure is invasive, risky, or expensive. The selection for disease examination might depend on the results of the diagnostic test and other clinical characteristics of the patients, which in turn can cause bias in estimates of the VUS. This bias is referred to as verification bias. Existing verification bias correction in three-way ROC analysis focuses on ordinal tests. We propose verification bias-correction methods to construct the ROC surface and estimate the VUS for a continuous diagnostic test, based on inverse probability weighting. By applying U-statistics theory, we develop asymptotic properties for the estimator. A jackknife estimator of variance is also derived. Extensive simulation studies are performed to evaluate the performance of the new estimators in terms of bias correction and variance. The proposed methods are used to assess the ability of a biomarker to accurately identify stages of Alzheimer's disease. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
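A rough sketch of the inverse-probability-weighted idea for a continuous test and three ordered disease states: only verified subjects enter the U-statistic, each weighted by the inverse of its estimated verification probability. The inputs (a `pi` vector of verification probabilities, disease coded 1-3 for verified subjects) are assumptions for illustration, not the authors' estimator:

```python
import numpy as np

def ipw_vus(test, disease, verified, pi):
    """IPW estimate of the volume under the ROC surface, P(T1 < T2 < T3),
    for three ordered disease states coded 1 < 2 < 3.
    test:     continuous test results (all subjects)
    disease:  verified disease state (1-3); ignored where not verified
    verified: boolean indicator of disease verification
    pi:       estimated probability of selection for verification"""
    w = np.zeros_like(pi, dtype=float)
    w[verified] = 1.0 / pi[verified]
    idx = [np.flatnonzero(verified & (disease == k)) for k in (1, 2, 3)]
    num = den = 0.0
    for i in idx[0]:
        for j in idx[1]:
            for k in idx[2]:
                wt = w[i] * w[j] * w[k]
                num += wt * float(test[i] < test[j] < test[k])
                den += wt
    return num / den
```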
ERIC Educational Resources Information Center
Pfaffel, Andreas; Schober, Barbara; Spiel, Christiane
2016-01-01
A common methodological problem in the evaluation of the predictive validity of selection methods, e.g. in educational and employment selection, is that the correlation between predictor and criterion is biased. Thorndike's (1949) formulas are commonly used to correct for this biased correlation. An alternative approach is to view the selection…
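For context, Thorndike's (1949) Case II formula corrects a correlation computed in a range-restricted (selected) sample back to the unrestricted applicant population. A small sketch with hypothetical numbers:

```python
import math

def thorndike_case2(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike's (1949) Case II correction for direct range restriction
    on the predictor: estimates the predictor-criterion correlation in
    the unrestricted (applicant) population from the restricted
    (selected) sample."""
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / math.sqrt(
        1.0 - r_restricted**2 + r_restricted**2 * u**2)

# Hypothetical numbers: r = .30 among selected students whose predictor
# SD is half that of the full applicant pool.
print(round(thorndike_case2(0.30, 1.0, 0.5), 3))  # -> 0.532
```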
Process-conditioned bias correction for seasonal forecasting: a case-study with ENSO in Peru
NASA Astrophysics Data System (ADS)
Manzanas, R.; Gutiérrez, J. M.
2018-05-01
This work assesses the suitability of a first simple attempt for process-conditioned bias correction in the context of seasonal forecasting. To do this, we focus on the northwestern part of Peru and bias correct 1- and 4-month lead seasonal predictions of boreal winter (DJF) precipitation from the ECMWF System4 forecasting system for the period 1981-2010. In order to include information about the underlying large-scale circulation which may help to discriminate between precipitation affected by different processes, we introduce here an empirical quantile-quantile mapping method which runs conditioned on the state of the Southern Oscillation Index (SOI), which is accurately predicted by System4 and is known to affect the local climate. Beyond the reduction of model biases, our results show that the SOI-conditioned method yields better ROC skill scores and reliability than the raw model output over the entire region of study, whereas the standard unconditioned implementation provides no added value for any of these metrics. This suggests that conditioning the bias correction on simple but well-simulated large-scale processes relevant to the local climate may be a suitable approach for seasonal forecasting. Yet, further research on the suitability of the application of similar approaches to the one considered here for other regions, seasons and/or variables is needed.
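A schematic of the approach, assuming a simple two-phase SOI split (positive vs. negative) standing in for however the states are actually defined: an empirical quantile mapping is trained and applied separately within each phase.

```python
import numpy as np

def eqm(fcst, fcst_clim, obs_clim):
    """Empirical quantile mapping: map each forecast value to the
    observed-climatology quantile matching its rank in the forecast
    climatology."""
    ranks = np.searchsorted(np.sort(fcst_clim), fcst) / float(len(fcst_clim))
    return np.quantile(obs_clim, np.clip(ranks, 0.0, 1.0))

def soi_conditioned_eqm(fcst, soi_fcst, fcst_clim, soi_clim_fcst,
                        obs_clim, soi_clim_obs):
    """Run a separate quantile mapping for each SOI state (here simply
    positive vs. negative phase), so precipitation affected by different
    large-scale conditions is corrected against its own climatology.
    All inputs are 1-D numpy arrays."""
    out = np.empty(len(fcst), dtype=float)
    for phase in (True, False):  # True: SOI >= 0, False: SOI < 0
        m = (soi_fcst >= 0) == phase
        mc = (soi_clim_fcst >= 0) == phase
        mo = (soi_clim_obs >= 0) == phase
        out[m] = eqm(fcst[m], fcst_clim[mc], obs_clim[mo])
    return out
```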
A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging
Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.
2014-01-01
Purpose Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
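A generic sketch of subtraction-based noise-floor removal from two independent magnitude acquisitions (a simplified stand-in for the authors' scheme, assuming the standard magnitude-data relation E[M^2] = A^2 + 2*sigma^2):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def noise_corrected_power(acq1, acq2):
    """Noise-bias-corrected squared signal from two independent magnitude
    acquisitions of the same object. Subtracting an estimate of
    2*sigma^2 removes the noise floor; sigma^2 is estimated voxel-wise
    from the difference image (its variance is ~2*sigma^2 at reasonable
    SNR), locally smoothed for stability."""
    diff = acq1 - acq2
    sigma2 = uniform_filter(diff ** 2, size=5) / 2.0    # local noise variance
    power = (acq1 ** 2 + acq2 ** 2) / 2.0 - 2.0 * sigma2
    return np.clip(power, 0.0, None)                    # clamp negative voxels

# Corrected magnitudes for downstream WLLS kurtosis fitting:
# a_hat = np.sqrt(noise_corrected_power(img1, img2))
```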
Sensitivity of Hydrologic Response to Climate Model Debiasing Procedures
NASA Astrophysics Data System (ADS)
Channell, K.; Gronewold, A.; Rood, R. B.; Xiao, C.; Lofgren, B. M.; Hunter, T.
2017-12-01
Climate change is already having a profound impact on the global hydrologic cycle. In the Laurentian Great Lakes, changes in long-term evaporation and precipitation can lead to rapid water level fluctuations in the lakes, as evidenced by unprecedented change in water levels seen in the last two decades. These fluctuations often have an adverse impact on the region's human, environmental, and economic well-being, making accurate long-term water level projections invaluable to regional water resources management planning. Here we use hydrological components from a downscaled climate model (GFDL-CM3/WRF) to obtain future water supplies for the Great Lakes. We then apply a suite of bias correction procedures before propagating these water supplies through a routing model to produce lake water levels. Results using conventional bias correction methods suggest that water levels will decline by several feet in the coming century. However, methods that reflect the seasonal water cycle and explicitly debias individual hydrological components (overlake precipitation, overlake evaporation, runoff) imply that future water levels may be closer to their historical average. This discrepancy between debiased results indicates that water level forecasts are highly influenced by the bias correction method, a source of sensitivity that is commonly overlooked. Debiasing, however, does not remedy misrepresentation of the underlying physical processes in the climate model that produce these biases and contribute uncertainty to the hydrological projections. This uncertainty, coupled with the differences in water level forecasts from varying bias correction methods, is important for water management and long term planning in the Great Lakes region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favazza, C; Fetterly, K
2016-06-15
Purpose: Application of a channelized Hotelling model observer (CHO) over a wide range of x-ray angiography detector target dose (DTD) levels demonstrated substantial bias for conditions yielding low detectability indices (d'), including low DTD and small test objects. The purpose of this work was to develop theory and methods to correct this bias. Methods: A hypothesis was developed wherein the measured detectability index (d'b) for a known test object is positively biased by temporally variable non-stationary noise in the images. Hotelling's T2 test statistic provided the foundation for a mathematical theory which accounts for independent contributions to the measured d'b value from both the test object (d'o) and non-stationary noise (d'ns). Experimental methods were developed to directly estimate d'o by determining d'ns and subtracting it from d'b, in accordance with the theory. Specifically, d'ns was determined from two sets of images from which the traditional test object was withheld. This method was applied to angiography images with DTD levels in the range 0 to 240 nGy and for disk-shaped iodine-based contrast targets with diameters 0.5 to 4.0 mm. Results: Bias in d' was evidenced by d'b values that exceeded those expected from a quantum-limited imaging system as object size and DTD decreased. d'ns increased with decreasing DTD, reaching a maximum of 2.6 for DTD = 0. Bias-corrected d'o estimates demonstrated sub-quantum-limited performance of the x-ray angiography system at low DTD. Findings demonstrated that the source of non-stationary noise was detector electronic readout noise. Conclusion: Theory and methods to estimate and correct bias in CHO measurements caused by temporally variable non-stationary noise were presented. The temporal non-stationary noise was shown to be due to electronic readout noise. This method facilitates accurate estimates of d' values over a large range of object size and detector target dose.
Grinde, Kelsey E.; Arbet, Jaron; Green, Alden; O'Connell, Michael; Valcarcel, Alessandra; Westra, Jason; Tintle, Nathan
2017-01-01
To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s) in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as "winner's curse." We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p < 2.2 × 10⁻⁶) and, consequently, substantially improves mean squared error and variant prioritization/ranking. The method is particularly helpful in adjustment for winner's curse effects when the initial gene-based test has low power and for relatively more common, non-causal variants. Adjustment for winner's curse is recommended for all post-hoc estimation and ranking of variants after a gene-based test. Further work is necessary to continue seeking ways to reduce bias and improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures. PMID:28959274
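The resampling idea can be sketched as follows; `gene_test` and `variant_beta` are hypothetical user-supplied callables, and this is a schematic of conditioning on selection rather than the authors' exact procedure:

```python
import numpy as np

def winners_curse_corrected(geno, pheno, gene_test, variant_beta,
                            alpha=0.05, n_boot=1000, seed=0):
    """Bootstrap correction of single-variant effect estimates reported
    after a significant gene-based test.
    gene_test(geno, pheno)    -> p-value of the gene-based test
    variant_beta(geno, pheno) -> vector of per-variant effect estimates
    The conditional bias is estimated as the mean difference between
    estimates in bootstrap resamples that replicate the significance and
    the full-sample estimates, then subtracted off."""
    rng = np.random.default_rng(seed)
    n = len(pheno)
    beta_full = variant_beta(geno, pheno)
    cond = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        if gene_test(geno[idx], pheno[idx]) < alpha:   # condition on selection
            cond.append(variant_beta(geno[idx], pheno[idx]))
    if not cond:
        return beta_full           # selection never replicated; no correction
    bias = np.mean(cond, axis=0) - beta_full
    return beta_full - bias
```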
NASA Astrophysics Data System (ADS)
Yang, P.; Fekete, B. M.; Rosenzweig, B.; Lengyel, F.; Vorosmarty, C. J.
2012-12-01
Atmospheric dynamics are essential inputs to Regional-scale Earth System Models (RESMs). Variables including surface air temperature, total precipitation, solar radiation, wind speed and humidity must be downscaled from coarse-resolution, global General Circulation Models (GCMs) to the high temporal and spatial resolution required for regional modeling. However, this downscaling procedure can be challenging due to the need to correct for bias from the GCM and to capture the spatiotemporal heterogeneity of the regional dynamics. In this study, the results obtained using several downscaling techniques and observational datasets were compared for a RESM of the Northeast Corridor of the United States. Previous efforts have enhanced GCM model outputs through bias correction using novel techniques. For example, the Climate Impact Research group at the Potsdam Institute developed a series of bias-corrected GCMs for the next generation of climate change scenarios (Schiermeier, 2012; Moss et al., 2010). Techniques to better represent the heterogeneity of climate variables have also been improved using statistical approaches (Maurer, 2008; Abatzoglou, 2011). For this study, four downscaling approaches to transform bias-corrected HADGEM2-ES Model output (daily at 0.5 x 0.5 degree) to the 3' x 3' (longitude x latitude) daily and monthly resolution required for the Northeast RESM were compared: 1) bilinear interpolation, 2) daily bias-corrected spatial downscaling (D-BCSD) with gridded meteorological datasets (developed by Abatzoglou, 2011), 3) monthly bias-corrected spatial disaggregation (M-BCSD) with CRU (Climate Research Unit) data, and 4) dynamic downscaling based on the Weather Research and Forecasting (WRF) model. Spatio-temporal analysis of the variability in precipitation was conducted over the study domain. Validation of the variables from the different downscaling methods against observational datasets was carried out to assess the downscaled climate model outputs. The effects of using the different approaches to downscale atmospheric variables (specifically air temperature and precipitation) as inputs to the Water Balance Model (WBMPlus; Vorosmarty et al., 1998; Wisser et al., 2008) for simulation of daily discharge and monthly stream flow in the Northeast US for a 100-year period in the 21st century were also assessed. Statistical techniques, especially monthly bias-corrected spatial disaggregation (M-BCSD), showed a potential advantage over the other methods for daily discharge and monthly stream flow simulation. However, dynamic downscaling will provide important complements to the statistical approaches tested.
EMC Global Climate And Weather Modeling Branch Personnel
Comparison statistics include: NCEP raw and bias-corrected ensemble domain-averaged bias; NCEP raw and bias-corrected ensemble domain-averaged bias reduction (percent); CMC raw and bias-corrected control forecast domain-averaged bias; and CMC raw and bias-corrected control forecast domain-averaged bias reduction.
Calibration of weak-lensing shear in the Kilo-Degree Survey
NASA Astrophysics Data System (ADS)
Fenech Conti, I.; Herbonnet, R.; Hoekstra, H.; Merten, J.; Miller, L.; Viola, M.
2017-05-01
We describe and test the pipeline used to measure the weak-lensing shear signal from the Kilo-Degree Survey (KiDS). It includes a novel method of 'self-calibration' that partially corrects for the effect of noise bias. We also discuss the 'weight bias' that may arise in optimally weighted measurements, and present a scheme to mitigate that bias. To study the residual biases arising from both galaxy selection and shear measurement, and to derive an empirical correction to reduce the shear biases to ≲1 per cent, we create a suite of simulated images whose properties are close to those of the KiDS survey observations. We find that the use of 'self-calibration' reduces the additive and multiplicative shear biases significantly, although further correction via a calibration scheme is required, which also corrects for a dependence of the bias on galaxy properties. We find that the calibration relation itself is biased by the use of noisy, measured galaxy properties, which may limit the final accuracy that can be achieved. We assess the accuracy of the calibration in the tomographic bins used for the KiDS cosmic shear analysis, testing in particular the effect of possible variations in the uncertain distributions of galaxy size, magnitude and ellipticity, and conclude that the calibration procedure is accurate at the level of multiplicative bias ≲1 per cent required for the KiDS cosmic shear analysis.
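The residual biases are conventionally parameterized per shear component as g_meas = (1 + m) g_true + c; a toy sketch of fitting the multiplicative bias m and additive bias c from image simulations:

```python
import numpy as np

def fit_shear_bias(g_true, g_meas):
    """Fit g_meas = (1 + m) * g_true + c to simulated galaxies and
    return the multiplicative bias m and additive bias c."""
    slope, intercept = np.polyfit(g_true, g_meas, 1)
    return slope - 1.0, intercept

# Toy pipeline that is 2 per cent low with a small additive offset:
rng = np.random.default_rng(1)
g_true = rng.uniform(-0.05, 0.05, 10000)
g_meas = 0.98 * g_true + 1e-4 + rng.normal(0.0, 0.003, g_true.size)
m, c = fit_shear_bias(g_true, g_meas)
print(f"m = {m:+.4f}, c = {c:+.2e}")   # roughly m = -0.02, c = +1e-4
# Calibration then divides measured shears by (1 + m) after subtracting c.
```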
An improved level set method for brain MR images segmentation and bias correction.
Chen, Yunjie; Zhang, Jianwei; Macione, Jim
2009-10-01
Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on an observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term of the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.
Standing on the shoulders of giants: improving medical image segmentation via bias correction.
Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul
2010-01-01
We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.
Simultaneous quaternion estimation (QUEST) and bias determination
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Tests are presented of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm that minimizes Wahba's loss function. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
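For reference, the loss function the quaternion estimator minimizes (Wahba's problem) can be written as follows, where b_i are unit observation vectors in the body frame, r_i the corresponding reference-frame vectors, a_i non-negative weights, and A the orthogonal attitude matrix sought:

```latex
L(\mathbf{A}) = \frac{1}{2} \sum_{i} a_i \left\lVert \mathbf{b}_i - \mathbf{A}\,\mathbf{r}_i \right\rVert^2
```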
Delatour, Vincent; Lalere, Beatrice; Saint-Albin, Karène; Peignaux, Maryline; Hattchouel, Jean-Marc; Dumont, Gilles; De Graeve, Jacques; Vaslin-Reimann, Sophie; Gillery, Philippe
2012-11-20
The reliability of biological tests is a major issue for patient care and public health, and involves high economic stakes. Reference methods, as well as regular external quality assessment schemes (EQAS), are needed to monitor the analytical performance of field methods. However, control material commutability is a major concern when assessing method accuracy. To overcome material non-commutability, we investigated the possibility of using lyophilized serum samples together with a limited number of frozen serum samples to assign matrix-corrected target values, taking the example of glucose assays. Trueness of the current glucose assays was first measured against a primary reference method by using human frozen sera. Methods using hexokinase and glucose oxidase with spectroreflectometric detection proved very accurate, with bias ranging between -2.2% and +2.3%. Bias of methods using glucose oxidase with spectrophotometric detection was +4.5%. Matrix-related bias of the lyophilized materials was then determined and ranged from +2.5% to -14.4%. Matrix-corrected target values were assigned and used to assess trueness of 22 sub-peer groups. We demonstrated that matrix-corrected target values can be a valuable tool to assess field method accuracy in large-scale surveys where commutable materials are not available in sufficient amounts at acceptable cost. Copyright © 2012 Elsevier B.V. All rights reserved.
Corrected ROC analysis for misclassified binary outcomes.
Zawistowski, Matthew; Sussman, Jeremy B; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A; Wiitala, Wyndy L
2017-06-15
Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
Analytic Methods for Adjusting Subjective Rating Schemes.
ERIC Educational Resources Information Center
Cooper, Richard V. L.; Nelson, Gary R.
Statistical and econometric techniques of correcting for supervisor bias in models of individual performance appraisal were developed, using a variant of the classical linear regression model. Location bias occurs when individual performance is systematically overestimated or underestimated, while scale bias results when raters either exaggerate…
A Realization of Bias Correction Method in the GMAO Coupled System
NASA Technical Reports Server (NTRS)
Chang, Yehui; Koster, Randal; Wang, Hailan; Schubert, Siegfried; Suarez, Max
2018-01-01
Over the past several decades, a tremendous effort has been made to improve model performance in the simulation of the climate system. The cold or warm sea surface temperature (SST) bias in the tropics is still a problem common to most coupled ocean-atmosphere general circulation models (CGCMs). The precipitation biases in CGCMs are also accompanied by SST and surface wind biases. These deficiencies and biases over the equatorial oceans, through their influence on the Walker circulation, likely contribute to the precipitation biases over land surfaces. In this study, we introduce an approach to CGCM modeling that corrects model biases. This approach utilizes the history of the model's short-term forecasting errors and their seasonal dependence to modify the model's tendency term and to minimize its climate drift. The study shows that such an approach removes most of the model's climate biases. A number of other aspects of the model simulation (e.g. extratropical transient activities) are also improved considerably due to the imposed pre-processed initial 3-hour model drift corrections. Because many regional biases in the GEOS-5 CGCM are common amongst other current models, our approaches and findings are applicable to these other models as well.
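A minimal sketch of the tendency-correction idea as described: estimate a seasonally varying climatology of short-term (3-hour) forecast drift from a history of forecast-minus-analysis errors, then subtract it from the model tendency at run time. Array shapes and the monthly stratification are assumptions for illustration:

```python
import numpy as np

def monthly_drift(forecasts_3h, analyses, months):
    """Climatological 3-hour model drift by calendar month, from a
    history of (3-hour forecast - verifying analysis) errors.
    forecasts_3h, analyses: arrays of shape (time, ...grid)
    months: calendar month (1-12) of each sample"""
    months = np.asarray(months)
    return np.stack([(forecasts_3h[months == m] - analyses[months == m]).mean(axis=0)
                     for m in range(1, 13)])

def corrected_tendency(tendency, month, drift, dt=10800.0):
    """Apply the correction inside the model: subtract the monthly mean
    drift, expressed as a per-second increment, from the tendency."""
    return tendency - drift[month - 1] / dt
```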
Fat fraction bias correction using T1 estimates and flip angle mapping.
Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A
2014-01-01
To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.
Bias correction for selecting the minimal-error classifier from many machine learning models.
Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C
2014-11-15
Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction with smaller variance, and it has the additional advantage of extrapolating error estimates for larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm (contact: ctseng@pitt.edu). Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
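The learning-curve extrapolation at the core of the IPL idea can be sketched with scipy; the error rates below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def ipl(n, a, b, c):
    """Inverse power law learning curve: error(n) = a * n**(-b) + c."""
    return a * n**(-b) + c

# Cross-validation error rates measured at increasing training sizes
# (hypothetical numbers for illustration).
n = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
err = np.array([0.34, 0.29, 0.26, 0.245, 0.235])

params, _ = curve_fit(ipl, n, err, p0=(1.0, 0.5, 0.1), maxfev=10000)
print("extrapolated error at n=200:", round(ipl(200.0, *params), 3))
```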
CMB internal delensing with general optimal estimator for higher-order correlations
Namikawa, Toshiya
2017-05-24
We present here a new method for delensing B modes of the cosmic microwave background (CMB) using a lensing potential reconstructed from the same realization of the CMB polarization (CMB internal delensing). B-mode delensing is required to improve sensitivity to primary B modes generated by, e.g., inflationary gravitational waves, axionlike particles, modified gravity, primordial magnetic fields, and topological defects such as cosmic strings. However, CMB internal delensing suffers from substantial biases due to correlations between the observed CMB maps to be delensed and those used for reconstructing the lensing potential. Since the bias depends on realizations, we construct a realization-dependent (RD) estimator for correcting these biases by deriving a general optimal estimator for higher-order correlations. The RD method is less sensitive to simulation uncertainties. Compared to the previous ℓ-splitting method, we find that the RD method corrects the biases without substantial degradation of the delensing efficiency.
Adaptive correction of ensemble forecasts
NASA Astrophysics Data System (ADS)
Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane
2017-04-01
Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
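A minimal scalar sketch of sequential Kalman-filter bias estimation for forecasts (not the full ensemble scheme proposed here): the state is the slowly drifting forecast bias, observed through each new forecast error.

```python
def kalman_bias_update(bias, P, obs, fcst, q=0.05, r=1.0):
    """One step of a scalar Kalman-filter bias estimator for forecasts.
    q: process variance (how fast the bias may drift)
    r: observation-error variance"""
    P = P + q                       # predict: bias persists, uncertainty grows
    K = P / (P + r)                 # Kalman gain
    innovation = (obs - fcst) - bias
    bias = bias + K * innovation    # update bias estimate
    P = (1.0 - K) * P
    return bias, P

# Sequentially debias forecasts as station observations arrive, e.g. by
# subtracting the running bias estimate from every ensemble member.
bias, P = 0.0, 1.0
for obs, ens_mean in [(12.1, 10.3), (9.8, 8.6), (15.0, 13.1)]:
    bias, P = kalman_bias_update(bias, P, obs, ens_mean)
print(round(bias, 2))   # running estimate of the mean forecast error
```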
Explanation of Two Anomalous Results in Statistical Mediation Analysis.
Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P
2012-01-01
Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
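For concreteness, a sketch of the bias-corrected bootstrap interval for an indirect effect a*b, with toy data; the BC adjustment shifts both percentile endpoints by 2*z0:

```python
import numpy as np
from scipy.stats import norm

def bc_bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=0):
    """Bias-corrected (BC) bootstrap confidence interval for stat(data).
    z0 captures the median bias of the bootstrap distribution relative
    to the full-sample estimate; both percentile endpoints are shifted
    by 2*z0 before being read off the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    n = len(data)
    theta = stat(data)
    boots = np.array([stat(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    p = np.clip(np.mean(boots < theta), 1e-7, 1 - 1e-7)
    z0 = norm.ppf(p)
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    return np.quantile(boots, [lo, hi])

def indirect_effect(d):
    """Mediated effect a*b from columns (x, m, y): a from m ~ x, and b
    from y ~ m + x (the M -> Y path controlling for X)."""
    x, m, y = d[:, 0], d[:, 1], d[:, 2]
    a = np.polyfit(x, m, 1)[0]
    X = np.c_[np.ones_like(x), m, x]
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

# Toy data with a true indirect effect of 0.5 * 0.4 = 0.2:
rng = np.random.default_rng(1)
x = rng.normal(size=200)
m = 0.5 * x + rng.normal(size=200)
y = 0.4 * m + 0.1 * x + rng.normal(size=200)
print(bc_bootstrap_ci(np.c_[x, m, y], indirect_effect))
```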
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, J.; Madsen, H.; Jensen, K. H.; Refsgaard, J. C.
2015-08-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both stream flow and groundwater modeling. The Colored Noise Kalman filter (ColKF) and the Separate bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using any of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman Filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved stream flow modeling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modeling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behavior and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, Jørn; Madsen, Henrik; Høgh Jensen, Karsten; Refsgaard, Jens Christian
2016-05-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment-scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both streamflow and groundwater modelling. The coloured noise Kalman filter (ColKF) and the separate-bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using any of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved streamflow modelling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modelling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behaviour and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.
Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.
1995-01-01
This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests when the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows significant bias error in the model attitude measurement can occur and is vibration mode and amplitude dependent. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.
A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.
Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping
2017-03-01
Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods that require bias-corrected MRI, we present a high-order and L0 regularized variational model for bias correction and brain extraction. The model is composed of a data fitting term, a piecewise constant regularization and a smooth regularization, and is constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN, based on a number of metrics. With its high accuracy and efficiency, our proposed method can facilitate automatic processing of large-scale brain studies.
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
NASA Astrophysics Data System (ADS)
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran
2018-05-01
Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of the magnitude and temporal variance. Correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
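A schematic of the fusion step, with inverse-distance weighting plainly substituted for the paper's bias-kriging when spatializing the model-minus-observation residuals:

```python
import numpy as np

def idw_bias_field(stat_xy, stat_bias, grid_xy, power=2.0):
    """Interpolate model-minus-observation biases from monitor locations
    onto the model grid. Inverse-distance weighting is used here as a
    simple stand-in for the bias-kriging step."""
    d = np.linalg.norm(grid_xy[:, None, :] - stat_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w * stat_bias).sum(axis=1) / w.sum(axis=1)

def fused_field(model_grid, grid_xy, stat_xy, model_at_stations, obs_at_stations):
    """Fused field = gridded model output minus the spatialized bias;
    the daughter domain's LBCs are then derived from this fused field."""
    bias = idw_bias_field(stat_xy, model_at_stations - obs_at_stations, grid_xy)
    return model_grid - bias
```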
Number-counts slope estimation in the presence of Poisson noise
NASA Technical Reports Server (NTRS)
Schmitt, Juergen H. M. M.; Maccacaro, Tommaso
1986-01-01
The slope determination of a power-law number-flux relationship is considered for the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with signal-to-noise ratio five or greater are considered. However, if sources with signal-to-noise ratio five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.
Bias correction for estimated QTL effects using the penalized maximum likelihood method.
Zhang, J; Yue, C; Zhang, Y-M
2012-04-01
A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.
Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael
2013-02-01
Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity - which is affected by climatic factors - affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and applications: The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data. Correcting for temperature-related trapping bias is straightforward and enables population estimates to be more comparable. It may thus improve data interpretation in ecological, conservation and monitoring studies, and assist in better management and conservation of habitats and ecosystem services. Nevertheless, field ecologists should remain vigilant for other sources of bias.
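Using the exponential model and the mean estimate reported above (r = 0.0863 per °C of maximum temperature), observed catches can be rescaled to a reference temperature; a small sketch:

```python
import numpy as np

def temperature_corrected_catch(catch, temp_max, r=0.0863, t_ref=None):
    """Rescale pitfall catches to a reference temperature (series mean
    by default) under the exponential activity model: a proportional
    change r in catch per degree C of maximum temperature."""
    catch = np.asarray(catch, dtype=float)
    temp_max = np.asarray(temp_max, dtype=float)
    t_ref = temp_max.mean() if t_ref is None else t_ref
    return catch * np.exp(-r * (temp_max - t_ref))

# A warm week inflates activity, not abundance; the correction deflates it:
print(temperature_corrected_catch([30, 60], [18.0, 26.0]))  # -> [~42.4, ~42.5]
```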
MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard
2016-01-01
Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
Batistatou, Evridiki; McNamee, Roseanne
2012-12-10
It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration and the simulation extrapolation methods. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. Both in 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation did, however, improve substantially with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. Both in 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
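A sketch of classical linear regression calibration with one replicate pair per subject: the measurement-error variance is estimated from the replicates and the error-prone exposure is replaced by E[x | w] before the outcome regression. This illustrates the generic method, not the Stata implementation discussed above:

```python
import numpy as np

def regression_calibration(w1, w2, y):
    """Linear regression calibration with two error-prone replicates
    (w1, w2) of the true exposure x and outcome y.
    The error variance comes from the replicate differences, the
    attenuation factor lambda = var(x)/var(w-bar) is formed, and the
    calibrated exposure E[x | w-bar] enters the outcome regression.
    Assumes var(x) > 0 in the estimated quantities (a sketch, no guards)."""
    wbar = (w1 + w2) / 2.0
    sigma2_u = np.var(w1 - w2, ddof=1) / 2.0     # error variance per replicate
    var_wbar = np.var(wbar, ddof=1)
    sigma2_x = var_wbar - sigma2_u / 2.0         # true-exposure variance
    lam = sigma2_x / var_wbar                    # attenuation factor
    x_hat = wbar.mean() + lam * (wbar - wbar.mean())
    return np.polyfit(x_hat, y, 1)[0]            # calibrated slope
```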
Revisiting the Logan plot to account for non-negligible blood volume in brain tissue.
Schain, Martin; Fazio, Patrik; Mrzljak, Ladislav; Amini, Nahid; Al-Tawil, Nabil; Fitzer-Attas, Cheryl; Bronzova, Juliana; Landwehrmeyer, Bernhard; Sampaio, Christina; Halldin, Christer; Varrone, Andrea
2017-08-18
Reference tissue-based quantification of brain PET data does not typically include correction for signal originating from blood vessels, which is known to result in biased outcome measures. The extent of the bias depends on the amount of radioactivity in the blood vessels. In this study, we revisit the well-established Logan plot and derive alternative formulations that provide estimates of distribution volume ratios (DVRs) corrected for the signal originating from the vasculature. New expressions for the Logan plot based on the arterial input function and reference tissue were derived, which include explicit terms for whole-blood radioactivity. The new methods were evaluated using PET data acquired with [11C]raclopride and [18F]MNI-659. The two-tissue compartment model (2TCM), with which the signal originating from blood can be explicitly modeled, was used as a gold standard. DVR values obtained for [11C]raclopride using either the blood-based or reference tissue-based Logan plot were systematically underestimated compared to 2TCM, and for [18F]MNI-659 a proportionality bias was observed, i.e., the bias varied across regions. The biases disappeared when optimal blood-signal correction was used for the respective tracer, although for [18F]MNI-659 a small but systematic overestimation of DVR was still observed. The new method appears to remove the bias introduced by the absence of blood-volume correction in regular graphical analysis and can be considered in clinical studies. Further studies are, however, required to derive a generic mapping between plasma and whole-blood radioactivity levels.
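For orientation, the standard reference-tissue Logan estimator that the paper revisits can be sketched as a linear fit on transformed integrals (neglecting the k2 term and, as the abstract notes, any blood-volume signal):

```python
import numpy as np

def logan_dvr(t, ct, cref, t_star=30.0):
    """DVR from the reference-tissue Logan plot: regress
    Y = int_0^t ct / ct(t) on X = int_0^t cref / ct(t) for t >= t_star,
    where the relation is approximately linear; the slope estimates DVR.
    The k2 term is neglected and no blood-volume correction is applied,
    i.e., this is the uncorrected estimator the abstract revisits."""
    t, ct, cref = map(np.asarray, (t, ct, cref))
    # trapezoidal running integrals of the tissue and reference curves
    int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * (ct[1:] + ct[:-1]) / 2)))
    int_cref = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cref[1:] + cref[:-1]) / 2)))
    m = t >= t_star
    slope, _ = np.polyfit(int_cref[m] / ct[m], int_ct[m] / ct[m], 1)
    return slope
```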
Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.
Ripple, Dean C; Hu, Zhishang
2016-03-01
Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.
Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.
Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping
2016-01-01
Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin-layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin-layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more pronounced at smaller thicknesses. We propose a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin-layer samples, and offer a simple and practical correction strategy which is convenient for clinicians to use.
Normalization, bias correction, and peak calling for ChIP-seq
Diaz, Aaron; Park, Kiyoub; Lim, Daniel A.; Song, Jun S.
2012-01-01
Next-generation sequencing is rapidly transforming our ability to profile the transcriptional, genetic, and epigenetic states of a cell. In particular, sequencing DNA from the immunoprecipitation of protein-DNA complexes (ChIP-seq) and methylated DNA (MeDIP-seq) can reveal the locations of protein binding sites and epigenetic modifications. These approaches contain numerous biases which may significantly influence the interpretation of the resulting data. Rigorous computational methods for detecting and removing such biases are still lacking. Also, multi-sample normalization still remains an important open problem. This theoretical paper systematically characterizes the biases and properties of ChIP-seq data by comparing 62 separate publicly available datasets, using rigorous statistical models and signal processing techniques. Statistical methods for separating ChIP-seq signal from background noise, as well as correcting enrichment test statistics for sequence-dependent and sonication biases, are presented. Our method effectively separates reads into signal and background components prior to normalization, improving the signal-to-noise ratio. Moreover, most peak callers currently use a generic null model which suffers from low specificity at the sensitivity level requisite for detecting subtle, but true, ChIP enrichment. The proposed method of determining a cell type-specific null model, which accounts for cell type-specific biases, is shown to be capable of achieving a lower false discovery rate at a given significance threshold than current methods. PMID:22499706
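To make the signal/background separation step concrete, here is a minimal two-component Poisson mixture fitted by EM on binned read counts; the paper uses richer statistical models and signal processing, so treat this as a simplified stand-in.

```python
import numpy as np
from scipy.stats import poisson

def em_two_poisson(counts, n_iter=100):
    """Minimal EM for a two-Poisson mixture over binned ChIP-seq counts:
    component 0 = background noise, component 1 = enriched signal."""
    counts = np.asarray(counts, dtype=float)
    lam_bg = counts.mean() * 0.5 + 0.1
    lam_sig = counts.mean() * 3.0 + 1.0
    pi_sig = 0.1
    for _ in range(n_iter):
        # E-step: posterior probability that each bin carries signal
        log_sig = poisson.logpmf(counts, lam_sig) + np.log(pi_sig)
        log_bg = poisson.logpmf(counts, lam_bg) + np.log(1.0 - pi_sig)
        r = 1.0 / (1.0 + np.exp(log_bg - log_sig))
        # M-step: re-estimate the rates and the mixing weight
        lam_sig = (r * counts).sum() / r.sum()
        lam_bg = ((1 - r) * counts).sum() / (1 - r).sum()
        pi_sig = r.mean()
    return lam_bg, lam_sig, pi_sig, r
```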
Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions
NASA Astrophysics Data System (ADS)
Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.
2010-12-01
Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only the observed temporal variability on a point-by-point basis, not the spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving the spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs to preserve both the long-term temporal mean and variance of the precipitation data and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated which preserve the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall. The spatiotemporal variability of the spatiotemporally bias-corrected GCMs was evaluated against gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic response of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions was compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
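The central idea, a daily fine-scale field that keeps the observed spatial correlation while matching the coarse GCM mean, can be sketched with a Cholesky factor of the observed covariance. This ignores precipitation intermittency and the full bias-correction chain, so it is only a schematic of the random-field step.

```python
import numpy as np

def downscale_day(coarse_mean, chol_cov, clim_mean):
    """Generate one fine-grid daily field: a correlated Gaussian anomaly
    (Cholesky factor of the covariance estimated from gridded observations)
    shifted so its spatial mean matches the coarse-scale GCM value."""
    z = chol_cov @ np.random.standard_normal(chol_cov.shape[0])
    field = clim_mean + z
    field += coarse_mean - field.mean()   # enforce the GCM spatial mean
    return np.clip(field, 0.0, None)      # precipitation is non-negative

# Covariance from historical gridded observations (n_days x n_cells array):
# anom = obs - obs.mean(axis=0)
# chol_cov = np.linalg.cholesky(np.cov(anom, rowvar=False)
#                               + 1e-6 * np.eye(anom.shape[1]))
```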
NASA Astrophysics Data System (ADS)
Shen, Xiang; Liu, Bin; Li, Qing-Quan
2017-03-01
The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates that the new method can be more effective at removing systematic biases in vendor-supplied RPCs.
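A minimal version of the thin-plate-spline correction can be built from off-the-shelf radial basis functions: fit smooth surfaces to the image-space residuals at the GCPs and add them back to the RPC projection. The function choice and the zero smoothing parameter are assumptions; the paper's exact parameterization may differ.

```python
import numpy as np
from scipy.interpolate import Rbf

def fit_rpc_bias(col_rpc, row_rpc, col_gcp, row_gcp):
    """Fit thin-plate-spline correction surfaces to the image-space
    residuals between RPC-projected and measured GCP positions."""
    dcol = col_gcp - col_rpc
    drow = row_gcp - row_rpc
    f_col = Rbf(col_rpc, row_rpc, dcol, function='thin_plate', smooth=0.0)
    f_row = Rbf(col_rpc, row_rpc, drow, function='thin_plate', smooth=0.0)
    return f_col, f_row

def apply_correction(f_col, f_row, col, row):
    """Add the interpolated bias surfaces to RPC-projected coordinates."""
    return col + f_col(col, row), row + f_row(col, row)
```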
Modeling bias and variation in the stochastic processes of small RNA sequencing
Etheridge, Alton; Sakhanenko, Nikita; Galas, David
2017-01-01
The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear-quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
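As a schematic of the correction-factor idea, factors can be derived from a calibration pool of known composition and the implied linear-quadratic mean-variance relation written down directly. The paper estimates these within a GAMLSS regression; this is a simple moment-based stand-in with illustrative names.

```python
import numpy as np

def correction_factors(calib_counts, expected_frac):
    """Per-sequence bias factors from a calibration pool of known
    composition: expected over observed count fractions (a pseudocount
    guards against zeros)."""
    obs_frac = (calib_counts + 0.5) / (calib_counts + 0.5).sum()
    return expected_frac / obs_frac

def implied_variance(mu, phi):
    """Linear-quadratic mean-variance relation implied by the model."""
    return mu + phi * mu ** 2

# corrected = raw_counts * correction_factors(calib_counts, known_fracs)
```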
Ensemble stacking mitigates biases in inference of synaptic connectivity.
Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N
2018-01-01
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
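The stacking step itself is a plain supervised linear combination. A sketch using logistic regression on per-method scores, with ground truth taken from the simulated networks (array names are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_inference(scores, labels):
    """Learn a linear combination of per-method connectivity scores.
    scores: (n_pairs, n_methods) array, e.g. signed mutual information,
    rate-corrected frequency statistics, cross-correlation peaks.
    labels: ground-truth synapse indicators from simulated networks."""
    stacker = LogisticRegression(max_iter=1000)
    stacker.fit(scores, labels)
    return stacker              # stacker.coef_ holds the learned weights

# ensemble_score = stacker.predict_proba(new_scores)[:, 1]
```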
Carnegie, Nicole Bohme
2011-04-15
The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
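The model-based bootstrap is generic enough to sketch: re-simulate the fitted infection and surveillance process, re-estimate incidence on each replicate, then bias-correct the point estimate and take percentile limits. The simulate and estimate callables stand in for the Karon et al. model and estimator, which are not reproduced here.

```python
import numpy as np

def model_based_bootstrap(theta_hat, simulate, estimate, n_boot=2000, seed=1):
    """Parametric (model-based) bootstrap for an incidence estimator.
    simulate(theta, rng) -> synthetic surveillance dataset;
    estimate(data) -> incidence estimate (both are user-supplied)."""
    rng = np.random.default_rng(seed)
    reps = np.array([estimate(simulate(theta_hat, rng)) for _ in range(n_boot)])
    bias = reps.mean() - theta_hat        # estimated bias of the estimator
    corrected = theta_hat - bias
    lo, hi = np.percentile(reps - bias, [2.5, 97.5])
    return corrected, (lo, hi)
```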
NASA Technical Reports Server (NTRS)
Kumar, S. V.; Peters-Lidard, C. D.; Santanello, J. A.; Reichle, R. H.; Draper, C. S.; Koster, R. D.; Nearing, G.; Jasinski, M. F.
2015-01-01
Earth's land surface is characterized by tremendous natural heterogeneity and human-engineered modifications, both of which are challenging to represent in land surface models. Satellite remote sensing is often the most practical and effective method to observe the land surface over large geographical areas. Agricultural irrigation is an important human-induced modification to natural land surface processes, as it is pervasive across the world and because of its significant influence on the regional and global water budgets. In this article, irrigation is used as an example of a human-engineered, often unmodeled land surface process, and the utility of satellite soil moisture retrievals over irrigated areas in the continental US is examined. Such retrievals are based on passive or active microwave observations from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), the Advanced Microwave Scanning Radiometer 2 (AMSR2), the Soil Moisture Ocean Salinity (SMOS) mission, WindSat and the Advanced Scatterometer (ASCAT). The analysis suggests that the skill of these retrievals for representing irrigation effects is mixed, with ASCAT-based products somewhat more skillful than SMOS and AMSR2 products. The article then examines the suitability of typical bias correction strategies in current land data assimilation systems when unmodeled processes dominate the bias between the model and the observations. Using a suite of synthetic experiments that includes bias correction strategies such as quantile mapping and trained forward modeling, it is demonstrated that the bias correction practices lead to the exclusion of the signals from unmodeled processes, if these processes are the major source of the biases. It is further shown that new methods are needed to preserve the observational information about unmodeled processes during data assimilation.
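For reference, the quantile-mapping strategy whose side effects the article examines is plain CDF matching; when irrigation is unmodeled, the mapping transfers the model climatology onto the observations and thereby removes the irrigation signal. A generic sketch:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_value):
    """Standard CDF-matching bias correction: push a model value through
    the historical model CDF and the inverse observed CDF."""
    qs = np.linspace(0.0, 1.0, 101)
    m_q = np.quantile(model_hist, qs)
    o_q = np.quantile(obs_hist, qs)
    p = np.interp(model_value, m_q, qs)   # model CDF
    return np.interp(p, qs, o_q)          # inverse observed CDF
```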
Braaf, Boy; Donner, Sabine; Nam, Ahhyun S.; Bouma, Brett E.; Vakoc, Benjamin J.
2018-01-01
Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye tracking scanner laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented. PMID:29552388
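The normalization step can be sketched directly; the analytically derived limits are taken here as given inputs, whereas the paper derives them from the local OCT signal statistics:

```python
import numpy as np

def normalized_cdv(cdv, cdv_min, cdv_max):
    """Normalize the raw CDV signal by its lower (pure-noise) and upper
    (fully decorrelated) limits, then clip to [0, 1] so noise-dominated
    pixels no longer mimic flow."""
    out = (cdv - cdv_min) / (cdv_max - cdv_min)
    return np.clip(out, 0.0, 1.0)
```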
Correcting length-frequency distributions for imperfect detection
Breton, André R.; Hawkins, John A.; Winkelman, Dana L.
2013-01-01
Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data are available.
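The partitioning step amounts to a Horvitz-Thompson-style adjustment: divide the catch in each length class by its estimated capture probability. A minimal sketch with a logistic length-selectivity curve (the real analysis also conditions on year, pass, and capture history):

```python
import numpy as np

def corrected_length_frequency(catch, lengths, beta0, beta1):
    """Divide the catch in each length class by its estimated capture
    probability from a logistic length-selectivity model (coefficients
    would come from a Huggins-type mark-recapture fit)."""
    p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * lengths)))
    return catch / p   # estimated abundance per length class
```

With a positive beta1, the smallest length classes (e.g. 100-110 mm) have the lowest capture probability and are scaled up the most, matching the pattern of bias reported above.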
Sampling and handling artifacts can bias filter-based measurements of particulate organic carbon (OC). Several measurement-based methods for OC artifact reduction and/or estimation are currently used in research-grade field studies. OC frequently is not artifact-corrected in larg...
The SAMI Galaxy Survey: can we trust aperture corrections to predict star formation?
NASA Astrophysics Data System (ADS)
Richards, S. N.; Bryant, J. J.; Croom, S. M.; Hopkins, A. M.; Schaefer, A. L.; Bland-Hawthorn, J.; Allen, J. T.; Brough, S.; Cecil, G.; Cortese, L.; Fogarty, L. M. R.; Gunawardhana, M. L. P.; Goodwin, M.; Green, A. W.; Ho, I.-T.; Kewley, L. J.; Konstantopoulos, I. S.; Lawrence, J. S.; Lorente, N. P. F.; Medling, A. M.; Owers, M. S.; Sharp, R.; Sweet, S. M.; Taylor, E. N.
2016-01-01
In the low-redshift Universe (z < 0.3), our view of galaxy evolution is primarily based on fibre optic spectroscopy surveys. Elaborate methods have been developed to address aperture effects when fixed aperture sizes only probe the inner regions for galaxies of ever decreasing redshift or increasing physical size. These aperture corrections rely on assumptions about the physical properties of galaxies. The adequacy of these aperture corrections can be tested with integral-field spectroscopic data. We use integral-field spectra drawn from 1212 galaxies observed as part of the SAMI Galaxy Survey to investigate the validity of two aperture correction methods that attempt to estimate a galaxy's total instantaneous star formation rate. We show that biases arise when assuming that instantaneous star formation is traced by broad-band imaging, and when the aperture correction is built only from spectra of the nuclear region of galaxies. These biases may be significant depending on the selection criteria of a survey sample. Understanding the sensitivities of these aperture corrections is essential for correct handling of systematic errors in galaxy evolution studies.
A multi-source precipitation approach to fill gaps over a radar precipitation field
NASA Astrophysics Data System (ADS)
Tesfagiorgis, K. B.; Mahani, S. E.; Khanbilvardi, R.
2012-12-01
Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to spatial limitations of radar and gauge products. The present work develops an approach to seamlessly blend satellite, radar, climatological and gauge precipitation products to fill gaps over ground-based radar precipitation fields. To mix different precipitation products, the bias of any of the products relative to the others should be removed. For bias correction, the study used an ensemble-based method which aims to estimate spatially varying multiplicative biases in SPEs using a radar rainfall product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area. Spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. A weighted Successive Correction Method (SCM) is proposed to merge the error-corrected satellite and radar rainfall estimates. In addition to SCM, we use a Bayesian spatial method for merging the gap-free radar with rain gauges, climatological rainfall sources and SPEs. We demonstrate the method using the SPE Hydro-Estimator (HE), the radar-based Stage-II, the climatological product PRISM and a rain gauge dataset for several rain events from 2006 to 2008 over three different geographical locations of the United States. Results show that the SCM method in combination with the Bayesian spatial model produced a precipitation product in good agreement with independent measurements. The study implies that, using the available radar pixels surrounding the gap area, rain gauge, PRISM and satellite products, a radar-like product is achievable over radar gap areas that benefits the scientific community.
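The weighted SCM merge can be sketched as a Cressman-type iteration: nudge the error-corrected satellite background toward radar values with distance-dependent weights and shrinking influence radii. Array layouts and the weight function are illustrative assumptions.

```python
import numpy as np

def successive_correction(bg, grid_xy, obs, obs_idx, obs_xy,
                          radii=(200.0, 100.0, 50.0)):
    """Cressman-type successive correction.
    bg: background field on n_grid points; grid_xy: (n_grid, 2) coords;
    obs: radar values; obs_idx: grid index of each observation;
    obs_xy: (n_obs, 2) coords; radii: shrinking influence radii (km)."""
    field = bg.copy()
    for r in radii:
        incr = obs - field[obs_idx]                        # obs minus background
        d2 = ((grid_xy[:, None, :] - obs_xy[None, :, :])**2).sum(axis=-1)
        w = np.maximum((r**2 - d2) / (r**2 + d2), 0.0)     # Cressman weights
        wsum = w.sum(axis=1)
        update = (w * incr).sum(axis=1) / np.where(wsum > 0, wsum, 1.0)
        field += np.where(wsum > 0, update, 0.0)
    return field
```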
NASA Technical Reports Server (NTRS)
Whiteman, D. N.; Cadirola, M.; Venable, D.; Calhoun, M.; Miloshevich, L.; Vermeesch, K.; Twigg, L.; Dirisu, A.; Hurst, D.; Hall, E.
2012-01-01
The MOHAVE-2009 campaign brought together diverse instrumentation for measuring atmospheric water vapor. We report on the participation of the ALVICE (Atmospheric Laboratory for Validation, Interagency Collaboration and Education) mobile laboratory in the MOHAVE-2009 campaign. In appendices we also report on the performance of the corrected Vaisala RS92 radiosonde measurements during the campaign, on a new radiosonde-based calibration algorithm that reduces the influence of atmospheric variability on the derived calibration constant, and on other results of the ALVICE deployment. The MOHAVE-2009 campaign permitted the participating Raman lidar systems to discover and address measurement biases in the upper troposphere and lower stratosphere. The ALVICE lidar system was found to possess a wet bias, which was attributed to fluorescence of insect material deposited on the telescope early in the mission. Other sources of wet biases are discussed and data from other Raman lidar systems are investigated, revealing that wet biases in upper-tropospheric (UT) and lower-stratospheric (LS) water vapor measurements appear to be quite common in Raman lidar systems. The lower-stratospheric climatology of water vapor is investigated both as a means to check for the existence of these wet biases in Raman lidar data and as a source of correction for the bias. A correction technique is derived and applied to the ALVICE lidar water vapor profiles. Good agreement is found between corrected ALVICE lidar measurements and those of the RS92, frost point hygrometer and total column water. The correction is offered as a general method both to quality-control Raman water vapor lidar data and to correct those data that have signal-dependent bias. The influence of the correction is shown to be small in regions of the upper troposphere where recent work indicates detection of trends in atmospheric water vapor may be most robust. The correction shown here holds promise for permitting useful upper-tropospheric water vapor profiles to be consistently measured by Raman lidar within NDACC (Network for the Detection of Atmospheric Composition Change) and elsewhere, despite the prevalence of instrumental and atmospheric effects that can contaminate the very low signal-to-noise measurements in the UT.
Fetterly, Kenneth A; Favazza, Christopher P
2016-08-07
Channelized Hotelling model observer (CHO) methods were developed to assess performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame⁻¹ resulted in d′ estimates which were as much as 2.9× greater than expected of a quantum-limited system. Over-estimation of d′ was presumed to be a result of bias error due to temporally variable non-stationary noise. Statistical theory which allows for independent contributions of 'signal' from a test object (o) and temporally variable non-stationary noise (ns) was developed. The theory demonstrates that the biased d′ is the sum of the detectability indices associated with the test object (d′o) and non-stationary noise (d′ns). Given the nature of the imaging system and the experimental methods, d′o cannot be directly determined independent of d′ns. However, methods to estimate d′ns independent of d′o were developed. In accordance with the theory, d′ns was subtracted from experimental estimates of the biased d′, providing an unbiased estimate of d′o. Estimates of d′o exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide d′ estimates which are accurate and precise for [Formula: see text]. Further, results demonstrated that the source of bias was detector electronic readout noise. In summary, this work presents theory and methods to test for the presence of bias in Hotelling model observers due to temporally variable non-stationary noise and to correct this bias when the temporally variable non-stationary noise is independent and additive with respect to the test object signal.
Meta-analysis of alcohol price and income elasticities – with corrections for publication bias
2013-01-01
Background: This paper contributes to the evidence base on prices and alcohol use by presenting meta-analytic summaries of price and income elasticities for alcohol beverages. The analysis improves on previous meta-analyses by correcting for outliers and publication bias. Methods: Adjusting for outliers is important to avoid assigning too much weight to studies with very small standard errors or large effect sizes. Trimmed samples are used for this purpose. Correcting for publication bias is important to avoid giving too much weight to studies that reflect selection by investigators or others involved with publication processes. Cumulative meta-analysis is proposed as a method to avoid or reduce publication bias, resulting in more robust estimates. The literature search obtained 182 primary studies for aggregate alcohol consumption, which exceeds the database used in previous reviews and meta-analyses. Results: For individual beverages, corrected price elasticities are smaller (less elastic) by 28-29 percent compared with consensus averages frequently used for alcohol beverages. The average price and income elasticities are: beer, -0.30 and 0.50; wine, -0.45 and 1.00; and spirits, -0.55 and 1.00. For total alcohol, the price elasticity is -0.50 and the income elasticity is 0.60. Conclusions: These new results imply that attempts to reduce alcohol consumption through price or tax increases will be less effective or more costly than previously claimed. PMID:23883547
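Cumulative meta-analysis, the publication-bias safeguard proposed here, is easy to sketch: pool studies in order of decreasing precision and watch for drift as smaller studies enter. A minimal fixed-effect version:

```python
import numpy as np

def cumulative_meta(effects, ses):
    """Fixed-effect cumulative meta-analysis: add studies from most to
    least precise; drift in the running pooled estimate as small studies
    enter is a signature of publication bias."""
    order = np.argsort(ses)                  # most precise first
    e = effects[order]
    w = 1.0 / ses[order] ** 2                # inverse-variance weights
    pooled = np.cumsum(w * e) / np.cumsum(w)
    return pooled                            # pooled[k] uses the k+1 best studies
```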
Skin Temperature Analysis and Bias Correction in a Coupled Land-Atmosphere Data Assimilation System
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Radakovich, Jon D.; daSilva, Arlindo; Todling, Ricardo; Verter, Frances
2006-01-01
In an initial investigation, remotely sensed surface temperature is assimilated into a coupled atmosphere/land global data assimilation system, with explicit accounting for biases in the model state. In this scheme, an incremental bias correction term is introduced in the model's surface energy budget. In its simplest form, the algorithm estimates and corrects a constant time-mean bias for each gridpoint; additional benefits are attained with a refined version of the algorithm which allows for a correction of the mean diurnal cycle. The method is validated against the assimilated observations, as well as independent near-surface air temperature observations. In many regions, not accounting for the diurnal cycle of bias caused degradation of the diurnal amplitude of the background model air temperature. Energy fluxes collected through the Coordinated Enhanced Observing Period (CEOP) are used to more closely inspect the surface energy budget. In general, sensible heat flux is improved with the surface temperature assimilation, and two stations show a reduction of bias by as much as 30 W m⁻². At the Rondonia station in Amazonia, the Bowen ratio changes direction in an improvement related to the temperature assimilation. However, at many stations the monthly latent heat flux bias is slightly increased. These results show the impact of univariate assimilation of surface temperature observations on the surface energy budget, and suggest the need for multivariate land data assimilation. The results also show the need for independent validation data, especially flux stations in varied climate regimes.
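The incremental scheme can be sketched in a line or two per analysis cycle: a fraction of the observation-minus-forecast skin temperature increment is absorbed into the bias term, either as a single constant or per analysis hour for the diurnal variant. The gain gamma and the sign convention are illustrative assumptions.

```python
import numpy as np

def update_bias(bias, increment, gamma=0.1):
    """Constant version: absorb a fraction of each analysis increment
    (observation minus forecast) into the gridpoint bias estimate."""
    return bias + gamma * increment

def update_diurnal_bias(bias_by_hour, increment, hour, gamma=0.1):
    """Diurnal variant: keep a separate bias estimate per analysis hour
    so the mean diurnal cycle of the bias is also corrected."""
    bias_by_hour[hour] += gamma * increment
    return bias_by_hour
```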
NASA Astrophysics Data System (ADS)
Watanabe, S.; Kim, H.; Utsumi, N.
2017-12-01
This study aims to develop a new approach that projects hydrology under climate change using super-ensemble experiments. The use of multiple ensembles is essential for the estimation of extremes, which is a major issue in the impact assessment of climate change; hence, super-ensemble experiments have recently been conducted by several research programs. While multiple ensembles are necessary, running a hydrological simulation for each output of the ensemble simulations entails considerable computational cost. To use super-ensemble experiments effectively, we adopt a strategy of using the runoff projected by climate models directly. The general approach to hydrological projection is to conduct hydrological model simulations, including land-surface and river-routing processes, using atmospheric boundary conditions projected by climate models as inputs. This study, on the other hand, runs only the river routing model, using runoff projected by climate models. In general, climate model output is systematically biased, so a preprocessing step which corrects such bias is necessary for impact assessments. Various bias correction methods have been proposed but, to the best of our knowledge, no method has been proposed for variables other than surface meteorology. Here, we propose a new method for utilizing the projected future runoff directly. The developed method estimates and corrects the bias based on a pseudo-observation, which is the result of a retrospective offline simulation. We show an application of this approach to the super-ensemble experiments conducted under the program Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI). More than 400 ensemble experiments from multiple climate models are available. The validation using historical simulations by HAPPI indicates that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super-ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.
Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat
2017-05-01
Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (rd. "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq; state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R² between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R² between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We, further, observe improved repeatability across laboratory sites with a relative increase in R² between 8% and 44% and reduced standard deviation.
Assessment of bias correction under transient climate change
NASA Astrophysics Data System (ADS)
Van Schaeybroeck, Bert; Vannitsem, Stéphane
2015-04-01
Calibration of climate simulations is necessary since large systematic discrepancies are generally found between the model climate and the observed climate. Recent studies have cast doubt upon the common assumption that the bias is stationary when the climate changes. This led to the development of new methods, mostly based on a linear sensitivity of the biases as a function of time or forcing (Kharin et al. 2012). However, recent studies uncovered more fundamental problems using both low-order systems (Vannitsem 2011) and climate models, showing that the biases may display complicated non-linear variations under climate change. This last analysis focused on biases derived from the equilibrium climate sensitivity, thereby ignoring the effect of the transient climate sensitivity. Based on linear response theory, a general method of bias correction is therefore proposed that can be applied to any climate forcing scenario. The validity of the method is addressed using twin experiments with the climate model of intermediate complexity LOVECLIM (Goosse et al., 2010). We evaluate to what extent the bias change is sensitive to the structure (frequency) of the applied forcing (here greenhouse gases) and whether the linear response theory is valid for global and/or local variables. To answer these questions, we perform large-ensemble simulations using different 300-year scenarios of forced carbon-dioxide concentrations. Reality and simulations are assumed to differ by a model error emulated as a parametric error in the wind drag or in the radiative scheme. References: [1] H. Goosse et al., 2010: Description of the Earth system model of intermediate complexity LOVECLIM version 1.2, Geosci. Model Dev., 3, 603-633. [2] S. Vannitsem, 2011: Bias correction and post-processing under climate change, Nonlin. Processes Geophys., 18, 911-924. [3] V.V. Kharin, G. J. Boer, W. J. Merryfield, J. F. Scinocca, and W.-S. Lee, 2012: Statistical adjustment of decadal predictions in a changing climate, Geophys. Res. Lett., 39, L19705.
Mixed Model Association with Family-Biased Case-Control Ascertainment.
Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L
2017-01-05
Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Tesfagiorgis, Kibrewossen B.
Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to spatial limitations of radar and gauge products in mountainous regions. The present work develops an approach to seamlessly blend satellite, available radar, climatological and gauge precipitation products to fill gaps in ground-based radar precipitation fields. To mix different precipitation products, the error of any of the products relative to the others should be removed. For bias correction, the study uses a new ensemble-based method which aims to estimate spatially varying multiplicative biases in SPEs using a radar-gauge precipitation product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area. Spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. In addition to biases, there is sometimes also spatial error between the radar and satellite precipitation estimates; one of them has to be geometrically corrected with reference to the other. A set of corresponding raining points between the SPE and radar products is selected to apply linear registration using a regularized least-squares technique to minimize the dislocation error in SPEs with respect to the available radar products. A weighted Successive Correction Method (SCM) is used to merge the error-corrected satellite and radar precipitation estimates. In addition to SCM, we use a combination of SCM and a Bayesian spatial method for merging the rain gauges and climatological precipitation sources with radar and SPEs. We demonstrated the method using two satellite-based products, CPC Morphing (CMORPH) and Hydro-Estimator (HE), two radar-gauge-based products, Stage-II and Stage-IV, a climatological product PRISM and a rain gauge dataset for several rain events from 2006 to 2008 over different geographical locations of the United States. Results show that: (a) the method of ensembles helped reduce biases in SPEs significantly; (b) the SCM method in combination with the Bayesian spatial model produced a precipitation product in good agreement with independent measurements. The study implies that, using the available radar pixels surrounding the gap area, rain gauge, PRISM and satellite products, a radar-like product is achievable over radar gap areas that benefits the operational meteorology and hydrology community.
Correcting Biases in a lower resolution global circulation model with data assimilation
NASA Astrophysics Data System (ADS)
Canter, Martin; Barth, Alexander
2016-04-01
With this work, we aim at developing a new method of bias correction using data assimilation, based on the stochastic forcing of a model. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is directly added inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global, low-resolution (2 degrees) coupled model (hydrodynamic model and sea ice model) with long time steps allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim at correcting this bias by using perturbed current fields from higher-resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean and not create unwanted phenomena. To construct those random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small-scale variations. We use this field as a random stream function, and take its derivatives to get zonal and meridional velocity fields. We also constrain the stream function along the coasts in order not to have currents perpendicular to the coast. The randomly generated stochastic forcings are then directly injected into the NEMO LIM model's equations in order to force the model at each timestep, and not only during the assimilation step. Results from a twin experiment will be presented. This method is being applied to a real case, with observations of the sea surface height available from the mean dynamic topography of CNES (Centre national d'études spatiales). The model, the bias correction, and more extensive forcings, in particular with a three-dimensional structure and a time-varying component, will also be presented.
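The stream-function construction guarantees non-divergent velocity perturbations by design. A minimal sketch of that step (grid layout and coastal masking are simplified assumptions):

```python
import numpy as np

def velocities_from_streamfunction(psi, dx, dy, coast_mask):
    """Turn a smoothed random field into a non-divergent velocity
    perturbation: u = -dpsi/dy, v = +dpsi/dx. Zeroing psi on coastal
    cells keeps the perturbation flow parallel to the coast."""
    psi = np.where(coast_mask, 0.0, psi)
    dpsi_dy, dpsi_dx = np.gradient(psi, dy, dx)  # axis 0 = y, axis 1 = x
    u = -dpsi_dy
    v = dpsi_dx
    return u, v
```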
Weighted divergence correction scheme and its fast implementation
NASA Astrophysics Data System (ADS)
Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun
2017-05-01
Forcing experimental volumetric velocity fields to satisfy mass conservation principles has been proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light-thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. This paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross-validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS and WDCS is developed, making the correction process significantly cheaper to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, showing that WDCS achieves better performance than DCS in improving some flow statistics.
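For intuition, the unweighted DCS core is a solenoidal projection, shown below in spectral form for a periodic 3D field. WDCS instead penalizes each component by its measurement uncertainty, which no longer has this closed spectral form and is what motivates the paper's fast algorithm.

```python
import numpy as np

def project_divergence_free(u, v, w, spacing=1.0):
    """Remove the divergent part of a periodic 3D velocity field by
    solenoidal projection in Fourier space (unweighted DCS core)."""
    ks = [2j * np.pi * np.fft.fftfreq(n, d=spacing) for n in u.shape]
    kx, ky, kz = np.meshgrid(*ks, indexing='ij')
    uh, vh, wh = np.fft.fftn(u), np.fft.fftn(v), np.fft.fftn(w)
    k2 = kx * kx + ky * ky + kz * kz
    k2[0, 0, 0] = 1.0                  # avoid 0/0 at the mean mode
    div = kx * uh + ky * vh + kz * wh  # spectral divergence
    uh -= kx * div / k2
    vh -= ky * div / k2
    wh -= kz * div / k2
    return (np.fft.ifftn(uh).real, np.fft.ifftn(vh).real, np.fft.ifftn(wh).real)
```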
Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model
NASA Astrophysics Data System (ADS)
Ahlgren, K.; Li, X.
2017-12-01
Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. The accuracy of each geoid model provides an additional metric to assess the performance of each bias estimation method. The geoid model accuracies are assessed using the two GSVS lines and GPS-leveling data across the United States.
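One way to realize the wavelet-based estimate is to keep only the coarse approximation of the gridded residual between the survey and the satellite/airborne reference, treating it as the long-wavelength survey bias. A sketch with PyWavelets; the wavelet family and decomposition level are assumptions, not the choices made by the authors.

```python
import numpy as np
import pywt

def long_wavelength_bias(residual_grid, wavelet='db4', level=4):
    """Keep only the coarse approximation of a 2D wavelet decomposition
    of the (survey minus satellite/airborne reference) gravity residual,
    as an estimate of the long-wavelength survey bias."""
    coeffs = pywt.wavedec2(residual_grid, wavelet, level=level)
    coeffs = [coeffs[0]] + [tuple(np.zeros_like(d) for d in dets)
                            for dets in coeffs[1:]]
    bias = pywt.waverec2(coeffs, wavelet)
    return bias[:residual_grid.shape[0], :residual_grid.shape[1]]  # crop padding
```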
NASA Astrophysics Data System (ADS)
Weber, Torsten; Haensler, Andreas; Jacob, Daniela
2017-12-01
Regional climate models (RCMs) have been used to dynamically downscale global climate projections at high spatial and temporal resolution in order to analyse the atmospheric water cycle. In southern Africa, precipitation patterns are strongly affected by moisture transport from the southeast Atlantic and southwest Indian Ocean and, consequently, by their sea surface temperatures (SSTs). However, global ocean models often have deficiencies in resolving regional- to local-scale ocean currents, e.g. in ocean areas offshore of the South African continent. When downscaling global climate projections using RCMs, the biased SSTs from the global forcing data are introduced into the RCMs and affect the results of the regional climate projections. In this work, the impact of an SST bias correction on precipitation, evaporation and moisture transport is analysed over southern Africa. For this analysis, several experiments were conducted with the regional climate model REMO using corrected and uncorrected SSTs. In these experiments, a global MPI-ESM-LR historical simulation was downscaled with REMO to a high spatial resolution of 50 × 50 km² and of 25 × 25 km² for southern Africa using a double-nesting method. The results showed a distinct impact of the corrected SST on the moisture transport, the meridional vertical circulation and the precipitation pattern in southern Africa. Furthermore, it was found that the experiment with the corrected SST led to a reduction of the wet bias over southern Africa and to better agreement with observations than without SST bias correction.
Effects of vibration on inertial wind-tunnel model attitude measurement devices
NASA Technical Reports Server (NTRS)
Young, Clarence P., Jr.; Buehrle, Ralph D.; Balakrishna, S.; Kilgore, W. Allen
1994-01-01
Results of an experimental study of a wind tunnel model inertial angle-of-attack sensor's response to a simulated dynamic environment are presented. The inertial device cannot distinguish between the gravity vector and the centrifugal accelerations associated with wind tunnel model vibration; this results in a model attitude measurement bias error. Significant bias error in model attitude measurement was found for the model system tested, and the error was found to be vibration mode and amplitude dependent. A first-order correction model was developed and used for estimating attitude measurement bias error due to dynamic motion. A method for correcting the output of the model attitude inertial sensor in the presence of model dynamics during on-line wind tunnel operation is proposed.
Quality Controlled Radiosonde Profile from MC3E
Toto, Tami; Jensen, Michael
2014-11-13
The sonde-adjust VAP produces data that corrects documented biases in radiosonde humidity measurements. Unique fields contained within this datastream include smoothed original relative humidity, dry bias corrected relative humidity, and final corrected relative humidity. The smoothed RH field refines the relative humidity from integers - the resolution of the instrument - to fractions of a percent. This profile is then used to calculate the dry bias corrected field. The final correction fixes a time-lag problem and uses the dry-bias field as input into the algorithm. In addition to dry bias, solar heating is another correction that is encompassed in the final corrected relative humidity field. Additional corrections were made to soundings at the extended facility sites (S0*) as necessary: Corrected erroneous surface elevation (and up through rest of height of sounding), for S03, S04 and S05. Corrected erroneous surface pressure at Chanute (S02).
NASA Astrophysics Data System (ADS)
Vrac, Mathieu
2018-06-01
Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs into impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, and stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess a realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - that lets the climate model drive the temporal properties and their changes in time. R2D2 is applied to temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period. The results indicate that the 1d-BC basically reproduces the climate model multivariate properties, 2d-R2D2 is only satisfying in the inter-variable context, 1506d-R2D2 strongly improves inter-site properties and 3012d-R2D2 is able to account for both. Applications of the proposed R2D2 method to various climate datasets are relevant for many impact studies. The perspectives of improvements are numerous, such as introducing stochasticity in the dependence itself, questioning its stability assumption, and accounting for temporal properties adjustment while including more physics in the adjustment procedures.
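The rank-resampling core of R2D2 is close to the Schaake shuffle: reorder the univariate bias-corrected values of each dimension so their rank chronology copies a reference dependence structure. The sketch below shows that shuffle step only; R2D2 additionally keeps pivot dimensions driven by the climate model.

```python
import numpy as np
from scipy.stats import rankdata

def rank_shuffle(bc_values, ref_field):
    """Reorder univariate bias-corrected values so each column follows
    the rank chronology of the reference field, restoring inter-site /
    inter-variable dependence. Both inputs: (n_time, n_dim) arrays."""
    out = np.empty_like(bc_values)
    for j in range(bc_values.shape[1]):
        ranks = rankdata(ref_field[:, j], method='ordinal') - 1
        out[:, j] = np.sort(bc_values[:, j])[ranks]
    return out
```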
Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A
2018-04-15
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
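A sketch of the SIMEX idea for noisy event times, using lifelines for the Cox fits: add extra log-normal noise at increasing multiples of the assumed error variance, then extrapolate the coefficient back to zero total error with a quadratic. The column names ('time', 'event', 'x') and the log-time error model are illustrative assumptions, not the paper's exact extension.

```python
import numpy as np
from lifelines import CoxPHFitter

def fit_beta(df):
    """Cox log hazard ratio for covariate 'x' (df is a pandas DataFrame)."""
    return CoxPHFitter().fit(df, duration_col='time', event_col='event').params_['x']

def simex_cox(df, sigma2, lambdas=(0.5, 1.0, 1.5, 2.0), n_rep=50, seed=7):
    """SIMEX for classical measurement error in the log event time."""
    rng = np.random.default_rng(seed)
    lam_grid, betas = [0.0], [fit_beta(df)]
    for lam in lambdas:
        reps = []
        for _ in range(n_rep):
            noisy = df.copy()  # inflate the error variance by a factor lam
            noisy['time'] = np.exp(np.log(noisy['time'])
                                   + rng.normal(0.0, np.sqrt(lam * sigma2), len(df)))
            reps.append(fit_beta(noisy))
        lam_grid.append(lam)
        betas.append(np.mean(reps))
    coef = np.polyfit(lam_grid, betas, 2)   # quadratic trend in lambda
    return np.polyval(coef, -1.0)           # extrapolate to zero error (lambda = -1)
```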
NASA Astrophysics Data System (ADS)
Wang, Kuo-Nung; de la Torre Juárez, Manuel; Ao, Chi O.; Xie, Feiqin
2017-12-01
Global Navigation Satellite System (GNSS) radio occultation (RO) measurements are promising for sensing the vertical structure of the Earth's planetary boundary layer (PBL). However, large refractivity changes near the top of the PBL can cause ducting and lead to a negative bias in the retrieved refractivity within the PBL (below ~2 km). To remove the bias, a reconstruction method assuming a linear refractivity structure inside the ducting layer was proposed by Xie et al. (2006). While the negative bias can be reduced drastically, as demonstrated in simulations, the lack of a high-quality surface refractivity constraint makes the method's application to real RO data difficult. In this paper, we use the widely available precipitable water (PW) satellite observation as the external constraint for the bias correction. A new framework is proposed to incorporate optimization into the RO reconstruction retrievals in the presence of ducting conditions. The new method uses optimal estimation to select the best refractivity solution whose PW and PBL height best match the externally retrieved PW and the known a priori states, respectively. The near-coincident PW retrievals from the AMSR-E microwave radiometer are used as an external observational constraint. This new reconstruction method is tested on both simulated GNSS-RO profiles and actual GNSS-RO data. Our results show that the proposed method can greatly reduce the negative refractivity bias when compared to the traditional Abel inversion.
Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin
2012-01-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types. PMID:21511561
A Systematic Error Correction Method for TOVS Radiances
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)
2000-01-01
Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
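A hedged sketch of this kind of air-mass-dependent bias "tuning": regress observed-minus-computed radiance departures on a few air-mass predictors and subtract the fitted bias. The predictors (thickness, scan angle) and all data below are synthetic placeholders, not the actual TOVS predictors or coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Synthetic air-mass predictors and obs-minus-forward-model radiance departures.
thickness = rng.normal(5400.0, 80.0, n)      # e.g. a layer thickness (placeholder)
scan_angle = rng.uniform(-50.0, 50.0, n)     # instrument scan angle (placeholder)
departures = (0.002 * (thickness - 5400.0)   # air-mass dependent bias ...
              + 1e-4 * scan_angle**2
              + rng.normal(0.0, 0.3, n))     # ... plus random error

# Fit the bias as a linear function of the predictors and remove it.
P = np.column_stack([np.ones(n), thickness - 5400.0, scan_angle**2])
coef, *_ = np.linalg.lstsq(P, departures, rcond=None)
corrected = departures - P @ coef

print("departure mean/std before: %.3f / %.3f" % (departures.mean(), departures.std()))
print("departure mean/std after:  %.3f / %.3f" % (corrected.mean(), corrected.std()))
```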
NASA Astrophysics Data System (ADS)
Oh, Seok-Geun; Suh, Myoung-Seok
2017-07-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation to mimic the simulated temperatures of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods were generally better than that of the best single member in each category. However, they were significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
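As a toy illustration of skill-weighted ensemble averaging of the WEA_RAC kind (the exact weighting formula is an assumption; the study's definition may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

obs = rng.normal(15.0, 3.0, 120)                         # training-period "truth" (synthetic)
members = np.array([obs + rng.normal(b, s, obs.size)     # members with varied skill
                    for b, s in [(0.5, 1.0), (-1.0, 2.0), (2.0, 0.8), (0.0, 3.0)]])

def skill(sim, ref):
    """Illustrative skill score combining RMSE and correlation; higher is better."""
    rmse = np.sqrt(np.mean((sim - ref) ** 2))
    r = np.corrcoef(sim, ref)[0, 1]
    return max(r, 0.0) / rmse

w = np.array([skill(m, obs) for m in members])
w /= w.sum()                                             # normalized weights

weighted_mean = (w[:, None] * members).sum(axis=0)
equal_mean = members.mean(axis=0)
for name, est in [("equal-weight", equal_mean), ("skill-weight", weighted_mean)]:
    print(name, "RMSE: %.3f" % np.sqrt(np.mean((est - obs) ** 2)))
```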
Bootstrap confidence levels for phylogenetic trees.
Efron, B; Halloran, E; Holmes, S
1996-07-09
Evolutionary trees are often estimated from DNA or RNA sequence data. How much confidence should we have in the estimated trees? In 1985, Felsenstein [Felsenstein, J. (1985) Evolution 39, 783-791] suggested the use of the bootstrap to answer this question. Felsenstein's method, which in concept is a straightforward application of the bootstrap, is widely used, but has been criticized as biased in the genetics literature. This paper concerns the use of the bootstrap in the tree problem. We show that Felsenstein's method is not biased, but that it can be corrected to better agree with standard ideas of confidence levels and hypothesis testing. These corrections can be made by using the more elaborate bootstrap method presented here, at the expense of considerably more computation.
NASA Astrophysics Data System (ADS)
Liersch, Stefan; Tecklenburg, Julia; Rust, Henning; Dobler, Andreas; Fischer, Madlen; Kruschke, Tim; Koch, Hagen; Fokko Hattermann, Fred
2018-04-01
Climate simulations are the fuel to drive hydrological models that are used to assess the impacts of climate change and variability on hydrological parameters, such as river discharges, soil moisture, and evapotranspiration. Unlike with cars, where we know which fuel the engine requires, we never know in advance what unexpected side effects might be caused by the fuel we feed our models with. Sometimes we increase the fuel's octane number (bias correction) to achieve better performance and find out that the model behaves differently but not always as was expected or desired. This study investigates the impacts of projected climate change on the hydrology of the Upper Blue Nile catchment using two model ensembles consisting of five global CMIP5 Earth system models and 10 regional climate models (CORDEX Africa). WATCH forcing data were used to calibrate an eco-hydrological model and to bias-correct both model ensembles using slightly differing approaches. On the one hand it was found that the bias correction methods considerably improved the performance of average rainfall characteristics in the reference period (1970-1999) in most of the cases. This also holds true for non-extreme discharge conditions between Q20 and Q80. On the other hand, bias-corrected simulations tend to overemphasize magnitudes of projected change signals and extremes. A general weakness of both uncorrected and bias-corrected simulations is the rather poor representation of high and low flows and their extremes, which were often deteriorated by bias correction. This inaccuracy is a crucial deficiency for regional impact studies dealing with water management issues and it is therefore important to analyse model performance and characteristics and the effect of bias correction, and eventually to exclude some climate models from the ensemble. However, the multi-model means of all ensembles project increasing average annual discharges in the Upper Blue Nile catchment and a shift in seasonal patterns, with decreasing discharges in June and July and increasing discharges from August to November.
Medrano-Gracia, Pau; Cowan, Brett R; Bluemke, David A; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Suinesiaputra, Avan; Young, Alistair A
2013-09-13
Cardiovascular imaging studies generate a wealth of data which is typically used only for individual study endpoints. By pooling data from multiple sources, quantitative comparisons can be made of regional wall motion abnormalities between different cohorts, enabling reuse of valuable data. Atlas-based analysis provides precise quantification of shape and motion differences between disease groups and normal subjects. However, subtle shape differences may arise due to differences in imaging protocol between studies. A mathematical model describing regional wall motion and shape was used to establish a coordinate system registered to the cardiac anatomy. The atlas was applied to data contributed to the Cardiac Atlas Project from two independent studies which used different imaging protocols: steady state free precession (SSFP) and gradient recalled echo (GRE) cardiovascular magnetic resonance (CMR). Shape bias due to imaging protocol was corrected using an atlas-based transformation which was generated from a set of 46 volunteers who were imaged with both protocols. Shape bias between GRE and SSFP was regionally variable, and was effectively removed using the atlas-based transformation. Global mass and volume bias was also corrected by this method. Regional shape differences between cohorts were more statistically significant after removing regional artifacts due to imaging protocol bias. Bias arising from imaging protocol can be both global and regional in nature, and is effectively corrected using an atlas-based transformation, enabling direct comparison of regional wall motion abnormalities between cohorts acquired in separate studies.
Malkyarenko, Dariya I; Chenevert, Thomas L
2014-12-01
To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along the superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. Empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets, both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-average trace image correction was adequate only for isotropic media. Empiric scalar adjustment of an independent gradient nonlinearity model adequately described DWI bias for a clinical scanner. The observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E
2007-09-01
Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, we found bias in the estimated allele frequencies and, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, in the estimate of this mean as well. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.
Bias-correction and Spatial Disaggregation for Climate Change Impact Assessments at a basin scale
NASA Astrophysics Data System (ADS)
Nyunt, Cho; Koike, Toshio; Yamamoto, Akio; Nemoto, Toshihoro; Kitsuregawa, Masaru
2013-04-01
Basin-scale climate change impact studies mainly rely on general circulation models (GCMs) comprising the related emission scenarios. Realistic and reliable GCM data are crucial for national-scale or basin-scale impact and vulnerability assessments aimed at building a safe society under climate change. However, GCMs fail to simulate regional climate features because of imprecise parameterization schemes in atmospheric physics and their coarse resolution. This study describes how to exclude GCMs that perform poorly over the focus basin, how to minimize the biases of GCM precipitation through statistical bias correction, and how to apply a spatial disaggregation scheme, a form of downscaling, within a basin. GCM rejection is based on the regional climate features of seasonal evolution as a benchmark and mainly depends on the spatial correlation and root mean square error of precipitation and atmospheric variables over the target region. The Global Precipitation Climatology Project (GPCP) and the Japanese 25-year Reanalysis Project (JRA-25) are used as references in evaluating the spatial pattern and error of each GCM. The statistical bias-correction scheme addresses three main flaws of GCM precipitation: too many low-intensity drizzle days with no dry days, underestimation of heavy rainfall, and misrepresented inter-annual variability of the local climate. Biases in heavy rainfall are corrected by fitting a generalized Pareto distribution (GPD) to a peaks-over-threshold series. The error in the frequency of rain days is fixed by rank order statistics, and the seasonal variation problem is solved by fitting a gamma distribution in each month to in situ stations and their corresponding GCM grids. By applying the proposed bias-correction technique to all in situ stations and their respective GCM grids, an easy and effective downscaling process for impact studies at the basin scale is accomplished. The applicability of the proposed method has been examined in basins in various climate regions around the world, and the biases were controlled well in all of them. The bias-corrected and downscaled GCM precipitation is then ready for driving the Water and Energy Budget based Distributed Hydrological Model (WEB-DHM) to analyze streamflow change or water availability of a target basin under near-future climate change, and it can support interdisciplinary studies of drought, flood, food security, health, and other issues. In summary, an effective and comprehensive statistical bias-correction method was established to bridge the gap between GCM scale and basin scale without difficulty. This gap filling also supports sound river-management decisions in the basin by providing more reliable information for building a resilient society.
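Two of the statistical pieces described above can be sketched briefly: GPD fitting of peaks over a threshold for the heavy-rain tail, and gamma-to-gamma quantile mapping of wet-day amounts. The thresholds, wet-day cutoff, and synthetic data are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

station = rng.gamma(shape=0.8, scale=12.0, size=3000)   # "observed" daily rain (one month)
gcm = rng.gamma(shape=1.5, scale=4.0, size=3000)        # drizzly, tail-weak GCM rain

# 1) Tail: fit a generalized Pareto distribution to exceedances over a high threshold.
u = np.quantile(station, 0.95)
c, loc, scale = stats.genpareto.fit(station[station > u] - u, floc=0)
print("GPD shape %.2f, scale %.2f over threshold %.1f" % (c, scale, u))

# 2) Body: gamma-gamma quantile mapping of wet-day GCM amounts onto the station.
wet_obs, wet_gcm = station[station > 0.1], gcm[gcm > 0.1]
a_o, _, b_o = stats.gamma.fit(wet_obs, floc=0)
a_g, _, b_g = stats.gamma.fit(wet_gcm, floc=0)
mapped = stats.gamma.ppf(stats.gamma.cdf(gcm, a_g, scale=b_g), a_o, scale=b_o)
print("GCM mean %.2f -> mapped %.2f (station %.2f)"
      % (gcm.mean(), mapped.mean(), station.mean()))
```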
Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.
2012-01-01
Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets imputed under our model can be investigated in additional subsequent analyses, our method will be useful for preparing data for applications in diverse contexts in population genetics and molecular ecology. PMID:22851645
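A minimal sketch of the expectation-maximization idea for allelic dropout, reduced to a single biallelic locus (the authors' model handles multiallelic loci, sample- and locus-specific dropout rates, and inbreeding; the simple dropout parametrization below, where a heterozygote is misread as either homozygote with probability d/2 each, is an assumption):

```python
import numpy as np

# Observed genotype counts at one biallelic locus (synthetic example).
n_AA, n_Aa, n_aa = 380, 300, 320
N = n_AA + n_Aa + n_aa

p, d = 0.5, 0.1                       # initial allele-A frequency and dropout rate
for _ in range(200):
    # E-step: expected numbers of true heterozygotes hidden among observed homozygotes.
    het = 2.0 * p * (1.0 - p)         # Hardy-Weinberg heterozygote probability
    h_AA = n_AA * (het * d / 2) / (p**2 + het * d / 2)
    h_aa = n_aa * (het * d / 2) / ((1 - p)**2 + het * d / 2)
    true_het = n_Aa + h_AA + h_aa
    # M-step: update allele frequency and dropout rate from the expected true counts.
    p = (2 * (n_AA - h_AA) + true_het) / (2 * N)
    d = (h_AA + h_aa) / true_het

print("estimated p = %.3f, dropout rate d = %.3f" % (p, d))
print("corrected heterozygosity %.3f vs raw %.3f" % (true_het / N, n_Aa / N))
```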
NASA Technical Reports Server (NTRS)
Hall, Dorothy K.; Foster, James L.; Kumar, Sujay; Chien, Janety Y. L.; Riggs, George A.
2012-01-01
The Air Force Weather Agency (AFWA)-NASA blended snow-cover product, called ANSA, utilizes Earth Observing System standard snow products from the Moderate-Resolution Imaging Spectroradiometer (MODIS) and the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) to map daily snow cover and snow-water equivalent (SWE) globally. We have compared ANSA-derived SWE with SWE values calculated from snow depths reported at 1500 National Climatic Data Center (NCDC) co-op stations in the Lower Great Lakes Basin. Compared to station data, the ANSA significantly underestimates SWE in densely-forested areas. We use two methods to remove some of the bias observed in forested areas to reduce the root-mean-square error (RMSE) between the ANSA- and station-derived SWE. First, we calculated a 5-year mean ANSA-derived SWE for the winters of 2005-06 through 2009-10, and developed a five-year mean bias-corrected SWE map for each month. For most of the months studied during the five-year period, the 5-year bias correction improved the agreement between the ANSA-derived and station-derived SWE. However, anomalous months, such as when there was very little snow on the ground compared to the 5-year mean, or months in which the snow was much greater than the 5-year mean, showed poorer results (as expected). We also used a 7-day running mean (7DRM) bias correction method, using days just prior to the day in question, to correct the ANSA data. This method was more effective in reducing the RMSE between the ANSA- and co-op-derived SWE values, and in capturing the effects of anomalous snow conditions.
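A hedged sketch of a 7-day running mean (7DRM) bias correction of the kind described above (synthetic series; the exact windowing used in the study may differ):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
days = pd.date_range("2010-01-01", periods=120, freq="D")

station = pd.Series(50 + 30 * np.sin(np.arange(120) / 20), index=days)  # SWE "truth"
satellite = station * 0.6 + rng.normal(0, 3, 120)  # forest underestimate (synthetic)

# Mean bias over the 7 days *prior* to each date, subtracted from the satellite value.
bias_7drm = (satellite - station).rolling(7).mean().shift(1)
corrected = satellite - bias_7drm

rmse = lambda s: float(np.sqrt(((s - station) ** 2).dropna().mean()))
print("RMSE raw: %.2f  7DRM-corrected: %.2f" % (rmse(satellite), rmse(corrected)))
```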
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Kβ*, where K is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Kβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Kβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Kβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
Parametric study of statistical bias in laser Doppler velocimetry
NASA Technical Reports Server (NTRS)
Gould, Richard D.; Stevenson, Warren H.; Thompson, H. Doyle
1989-01-01
Analytical studies have often assumed that LDV velocity bias depends on turbulence intensity in conjunction with one or more characteristic time scales, such as the time between validated signals, the time between data samples, and the integral turbulence time-scale. These parameters are presently varied independently, in an effort to quantify the biasing effect. Neither of the post facto correction methods employed is entirely accurate. The mean velocity bias error is found to be nearly independent of data validation rate.
Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro
2011-01-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029
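A minimal one-time-point sketch of inverse probability-of-censoring weighting (assuming scikit-learn; real applications use time-varying weights and richer censoring models): the probability of remaining uncensored is modeled from a common predictor, and uncensored subjects are upweighted by its inverse.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 4000
z = rng.normal(size=n)                               # common predictor of censoring & outcome
outcome = (rng.normal(size=n) + z > 0).astype(int)   # event by the horizon (synthetic)
p_cens = 1 / (1 + np.exp(-(-1.0 + 1.2 * z)))         # informative (artificial) censoring
censored = rng.random(n) < p_cens

# Model P(uncensored | z) and weight each uncensored subject by its inverse.
model = LogisticRegression().fit(z.reshape(-1, 1), (~censored).astype(int))
p_uncens = model.predict_proba(z.reshape(-1, 1))[:, 1]
w = 1.0 / p_uncens[~censored]

naive = outcome[~censored].mean()                    # complete-case estimate (biased)
ipcw = np.average(outcome[~censored], weights=w)     # weighted estimate
print("true %.3f  naive %.3f  IPCW %.3f" % (outcome.mean(), naive, ipcw))
```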
NASA Astrophysics Data System (ADS)
Cannon, Alex J.
2018-01-01
Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
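For contrast with MBCn, univariate quantile mapping, the baseline it is compared against, can be sketched in a few lines (empirical CDFs by interpolation; synthetic gamma data):

```python
import numpy as np

rng = np.random.default_rng(6)
obs = rng.gamma(2.0, 3.0, 5000)          # observed climate variable
model_hist = rng.gamma(3.0, 2.0, 5000)   # biased model, historical period
model_fut = rng.gamma(3.0, 2.4, 5000)    # biased model, future period

def quantile_map(x, ref_model, ref_obs):
    """Map model values onto observed quantiles via empirical CDFs."""
    q = np.linspace(0.001, 0.999, 999)
    return np.interp(x, np.quantile(ref_model, q), np.quantile(ref_obs, q))

corrected_fut = quantile_map(model_fut, model_hist, obs)
print("obs mean %.2f, model hist %.2f, model fut %.2f, corrected fut %.2f"
      % (obs.mean(), model_hist.mean(), model_fut.mean(), corrected_fut.mean()))
```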
Effects of diurnal adjustment on biases and trends derived from inter-sensor calibrated AMSU-A data
NASA Astrophysics Data System (ADS)
Chen, H.; Zou, X.; Qin, Z.
2018-03-01
Measurements of brightness temperatures from Advanced Microwave Sounding Unit-A (AMSU-A) temperature sounding instruments onboard NOAA Polar-orbiting Operational Environmental Satellites (POES) have been extensively used for studying atmospheric temperature trends over the past several decades. Inter-sensor biases, orbital drifts, and diurnal variations of atmospheric and surface temperatures must be considered before using a merged long-term time series of AMSU-A measurements from NOAA-15, -18, -19 and MetOp-A. We study the impacts of the orbital drift and orbital differences of local equator crossing times (LECTs) on temperature trends derivable from AMSU-A using near-nadir observations from NOAA-15, NOAA-18, NOAA-19, and MetOp-A during 1998-2014 over the Amazon rainforest. The double difference method is first applied to estimate inter-sensor biases between any two satellites during their overlapping time period. The inter-calibrated observations are then used to generate a monthly mean diurnal cycle of brightness temperature for each AMSU-A channel. A diurnal correction is finally applied to each channel to obtain AMSU-A data valid at the same local time. Impacts of the inter-sensor bias correction and diurnal correction on the AMSU-A derived long-term atmospheric temperature trends are separately quantified and compared with those derived from the original data. It is shown that the orbital drift and the differences of LECT among different POESs induce a large uncertainty in AMSU-A derived long-term warming/cooling trends. After applying an inter-sensor bias correction and a diurnal correction, the warming trends at different local times are approximately the same and are about half as large as the trends derived without applying these corrections.
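A sketch of the double-difference step on synthetic data: differencing each satellite against a common reference removes the scene, and differencing the differences isolates the inter-sensor bias.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
truth = 250 + rng.normal(0, 5, n)            # collocated scene brightness temperature
ref = truth + rng.normal(0, 0.5, n)          # common reference (e.g. model background)

sat1 = truth + 0.8 + rng.normal(0, 0.3, n)   # satellite 1 with +0.8 K bias
sat2 = truth - 0.4 + rng.normal(0, 0.3, n)   # satellite 2 with -0.4 K bias

# Single differences remove the common scene; the double difference isolates
# the inter-sensor bias, which is then removed from satellite 2.
dd = np.mean((sat1 - ref) - (sat2 - ref))
sat2_adjusted = sat2 + dd
print("estimated inter-sensor bias: %.2f K (true 1.2 K)" % dd)
print("residual mean difference after adjustment: %.3f K" % np.mean(sat1 - sat2_adjusted))
```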
Some comments on Anderson and Pospahala's correction of bias in line transect sampling
Anderson, D.R.; Burnham, K.P.; Chain, B.R.
1980-01-01
ANDERSON and POSPAHALA (1970) investigated the estimation of wildlife population size using the belt or line transect sampling method and devised a correction for bias, thus leading to an estimator with interesting characteristics. This work was given a uniform mathematical framework in BURNHAM and ANDERSON (1976). In this paper we show that the ANDERSON-POSPAHALA estimator is optimal in the sense of being the (unique) best linear unbiased estimator within the class of estimators which are linear combinations of cell frequencies, provided certain assumptions are met.
Lee, Seung Soo; Lee, Youngjoo; Kim, Namkug; Kim, Seong Who; Byun, Jae Ho; Park, Seong Ho; Lee, Moon-Gyu; Ha, Hyun Kwon
2011-06-01
To compare the accuracy of four chemical shift magnetic resonance imaging (MRI) (CS-MRI) analysis methods and MR spectroscopy (MRS) with and without T2 correction for fat quantification in the presence of excess iron. CS-MRI with six opposed- and in-phase acquisitions and MRS with five-echo acquisitions (TEs of 20, 30, 40, 50, 60 msec) were performed at 1.5 T on phantoms containing various fat fractions (FFs), on phantoms containing various iron concentrations, and in 18 patients with chronic liver disease. For CS-MRI, FFs were estimated with the dual-echo method, with two T2*-correction methods (triple- and multiecho), and with a multiinterference method that corrected for both T2* and spectral interference effects. For MRS, FF was estimated without T2 correction (single-echo MRS) and with T2 correction (multiecho MRS). In the phantoms, the T2*- or T2-correction methods for CS-MRI and MRS provided unbiased estimates of FFs (mean bias, -1.1% to 0.5%) regardless of iron concentration, whereas the dual-echo method (-5.5% to -8.4%) and single-echo MRS (12.1% to 37.3%) resulted in large biases in FFs. In patients, the FFs estimated with the triple-echo (R = 0.98), multiecho (R = 0.99), and multiinterference (R = 0.99) methods correlated more strongly with multiecho MRS FFs than did those from the dual-echo method (R = 0.86; P ≤ 0.011). The FFs estimated with the multiinterference method showed the closest agreement with multiecho MRS FFs (95% limit of agreement, -0.2 ± 1.1). T2*- or T2-correction methods are effective in correcting the confounding effects of iron, enabling accurate fat quantification throughout a wide range of iron concentrations. Spectral modeling of fat may further improve the accuracy of CS-MRI in fat quantification. Copyright © 2011 Wiley-Liss, Inc.
Engemann, Kristine; Enquist, Brian J; Sandel, Brody; Boyle, Brad; Jørgensen, Peter M; Morueta-Holme, Naia; Peet, Robert K; Violle, Cyrille; Svenning, Jens-Christian
2015-01-01
Macro-scale species richness studies often use museum specimens as their main source of information. However, such datasets are often strongly biased due to variation in sampling effort in space and time. These biases may strongly affect diversity estimates and may, thereby, obstruct solid inference on the underlying diversity drivers, as well as mislead conservation prioritization. In recent years, this has resulted in an increased focus on developing methods to correct for sampling bias. In this study, we use sample-size-correcting methods to examine patterns of tropical plant diversity in Ecuador, one of the most species-rich and climatically heterogeneous biodiversity hotspots. Species richness estimates were calculated based on 205,735 georeferenced specimens of 15,788 species using the Margalef diversity index, the Chao estimator, the second-order Jackknife and Bootstrapping resampling methods, and Hill numbers and rarefaction. Species richness was heavily correlated with sampling effort, and only rarefaction was able to remove this effect, and we recommend this method for estimation of species richness with “big data” collections. PMID:25692000
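A sketch of the recommended rarefaction estimator: the expected species richness in a random subsample of n specimens, computed analytically from the abundance vector (log-gamma form for numerical stability; abundances synthetic):

```python
import numpy as np
from scipy.special import gammaln

def log_comb(a, b):
    """log of the binomial coefficient C(a, b), valid for large a."""
    return gammaln(a + 1) - gammaln(b + 1) - gammaln(a - b + 1)

def rarefied_richness(counts, n):
    """Expected species count in a random subsample of n specimens."""
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    # P(species i absent from subsample) = C(N - N_i, n) / C(N, n)
    with np.errstate(invalid="ignore"):
        log_p_absent = log_comb(N - counts, n) - log_comb(N, n)
    p_absent = np.where(N - counts >= n, np.exp(log_p_absent), 0.0)
    return float(np.sum(1.0 - p_absent))

rng = np.random.default_rng(8)
abundances = rng.geometric(0.02, size=400)      # skewed abundances, 400 species
for n in (100, 1000, 5000):
    print(n, "specimens ->", round(rarefied_richness(abundances, n), 1), "species")
```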
CD-SEM real time bias correction using reference metrology based modeling
NASA Astrophysics Data System (ADS)
Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.
2018-03-01
Accuracy of patterning impacts yield, IC performance, and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and on intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. The reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM, but both methods are too slow for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling is in finding a robust correlation between SEM waveform features and the bias of CD-SEM, as well as in minimizing the RM inputs needed to create a model that is accurate within the design and process space. The new approach was applied to improve the CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases the MU of the state-of-the-art CD-SEM was improved by 3x and reduced to the nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.
Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui
2014-09-01
Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to intensity inhomogeneity, commonly known as the bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them handle the bias field in a separate pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that segments brain MR images while simultaneously correcting the bias field, even for images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. To obtain the global optimum and make the results independent of initialization, we reformulated the energy function to be convex and minimized it using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate bias fields with quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities, with promising results. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
McAfee, S. A.; DeLaFrance, A.
2017-12-01
Investigating the impacts of climate change often entails using projections from inherently imperfect general circulation models (GCMs) to drive models that simulate biophysical or societal systems in great detail. Error or bias in the GCM output is often assessed in relation to observations, and the projections are adjusted so that the output from impacts models can be compared to historical or observed conditions. Uncertainty in the projections is typically accommodated by running more than one future climate trajectory to account for differing emissions scenarios, model simulations, and natural variability. The current methods for dealing with error and uncertainty treat them as separate problems. In places where observed and/or simulated natural variability is large, however, it may not be possible to identify a consistent degree of bias in mean climate, blurring the lines between model error and projection uncertainty. Here we demonstrate substantial instability in mean monthly temperature bias across a suite of GCMs used in CMIP5. This instability is greatest in the highest latitudes during the cool season, where shifts from average temperatures below to above freezing could have profound impacts. In models with the greatest degree of bias instability, the timing of regional shifts from below to above average normal temperatures in a single climate projection can vary by about three decades, depending solely on the degree of bias assessed. This suggests that current bias correction methods based on comparison to 20- or 30-year normals may be inappropriate, particularly in the polar regions.
Benchmarking protein-protein interface predictions: why you should care about protein size.
Martin, Juliette
2014-07-01
A number of predictive methods have been developed to predict protein-protein binding sites. Each new method is traditionally benchmarked using sets of protein structures of various sizes, and global statistics are used to assess the quality of the prediction. Little attention has been paid to the potential bias due to protein size on these statistics. Indeed, small proteins involve proportionally more residues at interfaces than large ones. If a predictive method is biased toward small proteins, this can lead to an over-estimation of its performance. Here, we investigate the bias due to the size effect when benchmarking protein-protein interface prediction on the widely used docking benchmark 4.0. First, we simulate random scores that favor small proteins over large ones. Instead of the 0.5 AUC (Area Under the Curve) value expected by chance, these biased scores result in an AUC equal to 0.6 using hypergeometric distributions, and up to 0.65 using constant scores. We then use real prediction results to illustrate how to detect the size bias by shuffling, and subsequently correct it using a simple conversion of the scores into normalized ranks. In addition, we investigate the scores produced by eight published methods and show that they are all affected by the size effect, which can change their relative ranking. The size effect also has an impact on linear combination scores by modifying the relative contributions of each method. In the future, systematic corrections should be applied when benchmarking predictive methods using data sets with mixed protein sizes. © 2014 Wiley Periodicals, Inc.
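The proposed rank-based correction is simple enough to sketch directly: scores are ranked within each protein and divided by protein length, so proteins of different sizes contribute on a common (0, 1] scale (synthetic scores):

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(9)

# Per-residue interface scores for two proteins of very different sizes (synthetic);
# the small protein's scores sit higher, mimicking the size bias discussed above.
proteins = {"small": rng.normal(0.6, 0.2, 80), "large": rng.normal(0.4, 0.2, 900)}

def normalized_ranks(scores):
    """Rank within the protein, scaled to (0, 1]; removes the score-scale size effect."""
    return rankdata(scores) / len(scores)

print("raw means:       ", {k: round(float(v.mean()), 2) for k, v in proteins.items()})
print("normalized means:", {k: round(float(normalized_ranks(v).mean()), 2)
                            for k, v in proteins.items()})
```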
An evaluation of percentile and maximum likelihood estimators of weibull paremeters
Stanley J. Zarnoch; Tommy R. Dell
1985-01-01
Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had superior (smaller) bias and...
Calibration of remotely sensed proportion or area estimates for misclassification error
Raymond L. Czaplewski; Glenn P. Catts
1992-01-01
Classifications of remotely sensed data contain misclassification errors that bias areal estimates. Monte Carlo techniques were used to compare two statistical methods that correct or calibrate remotely sensed areal estimates for misclassification bias using reference data from an error matrix. The inverse calibration estimator was consistently superior to the...
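The inverse calibration estimator named above can be sketched directly: given an error matrix from reference data, observed map proportions are calibrated by solving a linear system (the matrix and proportions below are made up):

```python
import numpy as np

# A[i, j] = P(pixel classified as class i | true class j), from reference data.
A = np.array([[0.90, 0.15, 0.05],
              [0.08, 0.80, 0.10],
              [0.02, 0.05, 0.85]])   # each column sums to 1

q = np.array([0.50, 0.30, 0.20])     # observed (classified) map proportions

# Inverse calibration: solve A p = q for the true proportions p.
# Note: with sparse reference data this estimator can return negative values,
# a known drawback relative to the direct (classical) calibration estimator.
p_hat = np.linalg.solve(A, q)
print("calibrated proportions:", np.round(p_hat, 3), "sum:", round(float(p_hat.sum()), 3))
```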
NASA Astrophysics Data System (ADS)
Manzanas, R.; Lucero, A.; Weisheimer, A.; Gutiérrez, J. M.
2018-02-01
Statistical downscaling methods are popular post-processing tools which are widely used in many sectors to adapt the coarse-resolution biased outputs from global climate simulations to the regional-to-local scale typically required by users. They range from simple and pragmatic Bias Correction (BC) methods, which directly adjust the model outputs of interest (e.g. precipitation) according to the available local observations, to more complex Perfect Prognosis (PP) ones, which indirectly derive local predictions (e.g. precipitation) from appropriate upper-air large-scale model variables (predictors). Statistical downscaling methods have been extensively used and critically assessed in climate change applications; however, their advantages and limitations in seasonal forecasting are not well understood yet. In particular, a key problem in this context is whether they serve to improve the forecast quality/skill of raw model outputs beyond the adjustment of their systematic biases. In this paper we analyze this issue by applying two state-of-the-art BC and two PP methods to downscale precipitation from a multimodel seasonal hindcast in a challenging tropical region, the Philippines. To properly assess the potential added value beyond the reduction of model biases, we consider two validation scores which are not sensitive to changes in the mean (correlation and reliability categories). Our results show that, whereas BC methods maintain or worsen the skill of the raw model forecasts, PP methods can yield significant skill improvement (worsening) in cases for which the large-scale predictor variables considered are better (worse) predicted by the model than precipitation. For instance, PP methods are found to increase (decrease) model reliability in nearly 40% of the stations considered in boreal summer (autumn). Therefore, the choice of a convenient downscaling approach (either BC or PP) depends on the region and the season.
Bruxvoort, Katia; Festo, Charles; Cairns, Matthew; Kalolella, Admirabilis; Mayaya, Frank; Kachur, S. Patrick; Schellenberg, David; Goodman, Catherine
2015-01-01
Background Self-report is the most common and feasible method for assessing patient adherence to medication, but can be prone to recall bias and social desirability bias. Most studies assessing adherence to artemisinin-based combination therapies (ACTs) have relied on self-report. In this study, we use a novel customised electronic monitoring device—termed smart blister packs—to examine the validity of self-reported adherence to artemether-lumefantrine (AL) in southern Tanzania. Methods Smart blister packs were designed to look identical to locally available AL blister packs and to record the date and time each tablet was removed from packaging. Patients obtaining AL at randomly selected health facilities and drug stores were followed up at home three days later and interviewed about each dose of AL taken. Blister packs were requested for pill count and extraction of smart blister pack data. Results Data on adherence from both self-report verified by pill count and smart blister packs were available for 696 of 1,204 patients. There was no difference between methods in the proportion of patients assessed to have completed treatment (64% and 67%, respectively). However, the percentage taking the correct number of pills for each dose at the correct times (timely completion) was higher by self-report than smart blister packs (37% vs. 24%; p<0.0001). By smart blister packs, 64% of patients completing treatment did not take the correct number of pills per dose or did not take each dose at the correct time interval. Conclusion Smart blister packs resulted in lower estimates of timely completion of AL and may be less prone to recall and social desirability bias. They may be useful when data on patterns of adherence are desirable to evaluate treatment outcomes. Improved methods of collecting self-reported data are needed to minimise bias and maximise comparability between studies. PMID:26214848
NASA Technical Reports Server (NTRS)
Radakovich, Jon; Bosilovich, M.; Chern, Jiun-dar; daSilva, Arlindo
2004-01-01
The NASA/NCAR Finite Volume GCM (fvGCM) with the NCAR CLM (Community Land Model) version 2.0 was integrated into the NASA/GMAO Finite Volume Data Assimilation System (fvDAS). A new method was developed for coupled skin temperature assimilation and bias correction where the analysis increment and bias correction term is passed into the CLM2 and considered a forcing term in the solution to the energy balance. For our purposes, the fvDAS CLM2 was run at 1 deg. x 1.25 deg. horizontal resolution with 55 vertical levels. We assimilate the ISCCP-DX (30 km resolution) surface temperature product. The atmospheric analysis was performed 6-hourly, while the skin temperature analysis was performed 3-hourly. The bias correction term, which was updated at the analysis times, was added to the skin temperature tendency equation at every timestep. In this presentation, we focus on the validation of the surface energy budget at the in situ reference sites for the Coordinated Enhanced Observation Period (CEOP). We will concentrate on sites that include independent skin temperature measurements and complete energy budget observations for the month of July 2001. In addition, MODIS skin temperature will be used for validation. Several assimilations were conducted and preliminary results will be presented.
NASA Astrophysics Data System (ADS)
Lorente-Plazas, Raquel; Hacker, Josua P.; Collins, Nancy; Lee, Jared A.
2017-04-01
The impact of assimilating surface observations has been shown in several publications to improve weather prediction inside the boundary layer as well as the flow aloft. However, the assimilation of surface observations is often far from optimal due to the presence of both model and observation biases. The sources of these biases can be diverse: an instrumental offset, errors associated with the comparison of point-based observations with grid-cell averages, etc. To overcome this challenge, a method was developed using the ensemble Kalman filter. The approach consists of representing each observation bias as a parameter. These bias parameters are added to the forward operator, and they extend the state vector. As opposed to the observation bias estimation approaches most common in operational systems (e.g. for satellite radiances), the state vector and parameters are simultaneously updated by applying the Kalman filter equations to the augmented state. The method to estimate and correct the observation bias is evaluated using observing system simulation experiments (OSSEs) with the Weather Research and Forecasting (WRF) model. OSSEs are constructed for the conventional observation network including radiosondes, aircraft observations, atmospheric motion vectors, and surface observations. Three different kinds of biases are added to 2-meter temperature for synthetic METARs. From simplest to most sophisticated, the imposed biases are: (1) a spatially invariant bias, (2) a spatially varying bias proportional to topographic height differences between the model and the observations, and (3) a bias that is proportional to the temperature. The target region, characterized by complex terrain, is the western U.S., on a domain with 30-km grid spacing. Observations are assimilated every 3 hours using an 80-member ensemble during September 2012. Results demonstrate that the approach is able to estimate and correct the bias when it is spatially invariant (experiment 1). The more complex bias structures in experiments (2) and (3) are harder to estimate, but estimation is still possible. Estimating the parameters in experiments with unbiased observations results in spatial and temporal parameter variability about zero and establishes a threshold on the accuracy of the parameters in further experiments. When the observations are biased, the mean parameter value is close to the true bias, but the temporal and spatial variability in the parameter estimates is similar to that found when estimating a zero bias in the observations. The distributions are related to other errors in the forecasts, indicating that the parameters are absorbing some of the forecast error from other sources. In this presentation we elucidate the reasons for the resulting parameter estimates and their variability.
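A toy sketch of the state augmentation idea (a scalar system, not the WRF/DART configuration): the bias is appended to the state, the forward operator adds it to the biased observation's prediction, and the ensemble Kalman update estimates state and bias jointly. The unbiased second observation below is an assumption that makes the two components separately identifiable, standing in for the rest of the observing network.

```python
import numpy as np

rng = np.random.default_rng(10)
n_ens, n_cycles, r = 80, 60, 0.8**2
true_T, true_bias = 285.0, 1.5

# Augmented ensemble: column 0 = temperature, column 1 = observation-bias parameter.
ens = np.column_stack([rng.normal(284.0, 2.0, n_ens),
                       rng.normal(0.0, 1.0, n_ens)])

def enkf_update(ens, y, hx):
    """Perturbed-observation EnKF update of the augmented state for one scalar ob."""
    cov = np.cov(ens.T, hx)                        # 3x3 covariance of [T, bias, Hx]
    gain = cov[:2, 2] / (cov[2, 2] + r)
    innov = y + rng.normal(0, np.sqrt(r), len(hx)) - hx
    return ens + np.outer(innov, gain)

for _ in range(n_cycles):
    ens[:, 0] += rng.normal(0, 0.5, n_ens)         # forecast step: inflate T spread only
    # Biased surface observation; its forward operator includes the bias parameter.
    ens = enkf_update(ens, true_T + true_bias + rng.normal(0, np.sqrt(r)),
                      ens[:, 0] + ens[:, 1])
    # An unbiased observation (e.g. a radiosonde) anchors the state itself.
    ens = enkf_update(ens, true_T + rng.normal(0, np.sqrt(r)), ens[:, 0])

print("estimated bias: %.2f (true %.2f)" % (ens[:, 1].mean(), true_bias))
print("estimated T:    %.2f (true %.2f)" % (ens[:, 0].mean(), true_T))
```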
NASA Astrophysics Data System (ADS)
Johnson, Fiona; Sharma, Ashish
2011-04-01
Empirical scaling approaches for constructing rainfall scenarios from general circulation model (GCM) simulations are commonly used in water resources climate change impact assessments. However, these approaches have a number of limitations, not the least of which is that they cannot account for changes in variability or persistence at annual and longer time scales. Bias correction of GCM rainfall projections offers an attractive alternative to scaling methods as it has similar advantages to scaling in that it is computationally simple, can consider multiple GCM outputs, and can be easily applied to different regions or climatic regimes. In addition, it also allows for interannual variability to evolve according to the GCM simulations, which provides additional scenarios for risk assessments. This paper compares two scaling and four bias correction approaches for estimating changes in future rainfall over Australia and for a case study for water supply from the Warragamba catchment, located near Sydney, Australia. A validation of the various rainfall estimation procedures is conducted on the basis of the latter half of the observational rainfall record. It was found that the method leading to the lowest prediction errors varies depending on the rainfall statistic of interest. The flexibility of bias correction approaches in matching rainfall parameters at different frequencies is demonstrated. The results also indicate that for Australia, the scaling approaches lead to smaller estimates of uncertainty associated with changes to interannual variability for the period 2070-2099 compared to the bias correction approaches. These changes are also highlighted using the case study for the Warragamba Dam catchment.
Mifsud, Borbala; Martincorena, Inigo; Darbo, Elodie; Sugar, Robert; Schoenfelder, Stefan; Fraser, Peter; Luscombe, Nicholas M
2017-01-01
Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).
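The binomial significance idea can be sketched as follows (simplified: the expected contact probability is taken proportional to the product of fragment coverages, which is only a caricature of GOTHiC's model):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(11)

n_frag, N = 200, 200_000                      # fragments and total read pairs
coverage = rng.pareto(1.5, n_frag) + 1.0      # skewed fragment coverages (synthetic)
rel_cov = coverage / coverage.sum()

# Null probability that a random read pair links fragments i and j.
i, j = 7, 42
p_null = 2 * rel_cov[i] * rel_cov[j]          # factor 2: unordered pair

observed = 45                                  # read pairs seen for (i, j)
p_value = binom.sf(observed - 1, N, p_null)    # P(X >= observed) under the null
print("expected %.1f pairs, observed %d, p = %.3g" % (N * p_null, observed, p_value))
```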
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2018-06-01
The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.
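For comparison, a Hansen-Lunde-type bias-correction factor of the kind mentioned above can be computed directly: realized variance is rescaled so that its sum matches the sample variance of daily returns (synthetic prices with additive microstructure noise, which biases RV upward):

```python
import numpy as np

rng = np.random.default_rng(12)
n_days, n_intra = 500, 78

sigma = 0.01 * np.exp(rng.normal(0, 0.3, n_days))           # daily volatility
eff = rng.normal(0, sigma[:, None] / np.sqrt(n_intra),
                 (n_days, n_intra)).cumsum(axis=1)           # efficient log-price path
prices = eff + rng.normal(0, 0.0005, (n_days, n_intra))     # microstructure noise

r_intra = np.diff(prices, axis=1, prepend=0.0)              # noisy intraday returns
rv = (r_intra ** 2).sum(axis=1)                             # realized variance (biased up)
r_daily = prices[:, -1]                                     # daily return (noise ~cancels)

c = np.sum((r_daily - r_daily.mean()) ** 2) / rv.sum()      # Hansen-Lunde-style factor
print("bias-correction factor c = %.3f (c < 1: RV biased upward)" % c)
```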
Experimenter Confirmation Bias and the Correction of Science Misconceptions
ERIC Educational Resources Information Center
Allen, Michael; Coole, Hilary
2012-01-01
This paper describes a randomised educational experiment (n = 47) that examined two different teaching methods and compared their effectiveness at correcting one science misconception using a sample of trainee primary school teachers. The treatment was designed to promote engagement with the scientific concept by eliciting emotional responses from…
Atmospheric correction for inland water based on Gordon model
NASA Astrophysics Data System (ADS)
Li, Yunmei; Wang, Haijun; Huang, Jiazhu
2008-04-01
Remote sensing techniques are widely used in water quality monitoring because they can capture radiation information over a whole area at the same time. However, more than 80% of the radiance detected by sensors at the top of the atmosphere is contributed by the atmosphere, not directly by the water body. The water radiance signal is seriously confounded by atmospheric molecular and aerosol scattering and absorption, and even a slight error in evaluating the atmospheric influence can induce large errors in water quality estimates. To retrieve water composition accurately, the water and atmospheric signals must first be separated. In this paper, we studied atmospheric correction methods for inland waters such as Taihu Lake. A Landsat-5 TM image was corrected using the Gordon atmospheric correction model, and two kinds of data were used to calculate Rayleigh scattering, aerosol scattering and radiative transmission above Taihu Lake; the influences of ozone and whitecaps were also corrected. One dataset consisted of synchronous meteorological data, the other of a synchronous MODIS image. Finally, remote sensing reflectance was retrieved from the TM image. The performance of the two approaches was analyzed using in situ measured water surface spectra. The results indicate that measured and estimated remote sensing reflectance were close for both methods. The method using synchronous meteorological data is more accurate than the one using the MODIS image, and its bias is close to the error criterion accepted for inland water quality inversion. This suggests that the method is suitable for atmospheric correction of TM imagery over Taihu Lake.

Process-based evaluation of the ÖKS15 Austrian climate scenarios: First results
NASA Astrophysics Data System (ADS)
Mendlik, Thomas; Truhetz, Heimo; Jury, Martin; Maraun, Douglas
2017-04-01
The climate scenarios for Austria from the ÖKS15 project consist of 13 downscaled and bias-corrected RCMs from the EURO-CORDEX project. This dataset is intended for the broad public and is now available at the central national archive for climate data (CCCA Data Center). Given this broad public reach, it is essential to discuss the limitations of this dataset objectively and to publish them in a form that a non-scientific audience can also understand. Even though systematic climatological biases have been accounted for by the Scaled-Distribution-Mapping (SDM) bias-correction method, it is not guaranteed that the model biases have been removed for the right reasons. If climate scenarios do not get the patterns of synoptic variability right, biases will still prevail in certain weather patterns. Ultimately this will have consequences for the projected climate change signals. In this study we derive typical weather types in the Alpine Region based on mean sea level pressure patterns from ERA-INTERIM data and check the occurrence of these synoptic phenomena in EURO-CORDEX data and their corresponding driving GCMs. Based on these weather patterns we analyze the remaining biases of the downscaled and bias-corrected scenarios. We argue that such a process-based evaluation is not only necessary from a scientific point of view, but can also help the broader public to understand the limitations of downscaled climate scenarios, as model errors can be interpreted in terms of everyday observable weather.
Gutiérrez, Gabriel; Millán-Zambrano, Gonzalo; Medina, Daniel A; Jordán-Pla, Antonio; Pérez-Ortín, José E; Peñate, Xenia; Chávez, Sebastián
2017-12-07
TFIIS stimulates RNA cleavage by RNA polymerase II and promotes the resolution of backtracking events. TFIIS acts in the chromatin context, but its contribution to the chromatin landscape has not yet been investigated. Co-transcriptional chromatin alterations include subtle changes in nucleosome positioning, like those expected to be elicited by TFIIS, which are elusive to detect. The most popular method to map nucleosomes involves intensive chromatin digestion by micrococcal nuclease (MNase). Maps based on these exhaustively digested samples miss any MNase-sensitive nucleosomes caused by transcription. In contrast, partial digestion approaches preserve such nucleosomes, but introduce noise due to MNase sequence preferences. A systematic way of correcting this bias for massively parallel sequencing experiments is still missing. To investigate the contribution of TFIIS to the chromatin landscape, we developed a refined nucleosome-mapping method in Saccharomyces cerevisiae. Based on partial MNase digestion and a sequence-bias correction derived from naked DNA cleavage, the refined method efficiently mapped nucleosomes in promoter regions rich in MNase-sensitive structures. The naked DNA correction was also important for mapping gene body nucleosomes, particularly in those genes whose core promoters contain a canonical TATA element. With this improved method, we analyzed the global nucleosomal changes caused by lack of TFIIS. We detected a general increase in nucleosomal fuzziness and more restricted changes in nucleosome occupancy, which concentrated in some gene categories. The TATA-containing genes were preferentially associated with decreased occupancy in gene bodies, whereas the TATA-like genes did so with increased fuzziness. The detected chromatin alterations correlated with functional defects in nascent transcription, as revealed by genomic run-on experiments. The combination of partial MNase digestion and naked DNA correction of the sequence bias is a precise nucleosomal mapping method that does not exclude MNase-sensitive nucleosomes. This method is useful for detecting subtle alterations in nucleosome positioning produced by lack of TFIIS. Their analysis revealed that TFIIS generally contributed to nucleosome positioning in both gene promoters and bodies. The independent effect of lack of TFIIS on nucleosome occupancy and fuzziness supports the existence of alternative chromatin dynamics during transcription elongation.
NASA Astrophysics Data System (ADS)
Mérida, Inés; Reilhac, Anthonin; Redouté, Jérôme; Heckemann, Rolf A.; Costes, Nicolas; Hammers, Alexander
2017-04-01
In simultaneous PET-MR, attenuation maps are not directly available. Essential for absolute radioactivity quantification, they need to be derived from MR or PET data to correct for gamma photon attenuation by the imaged object. We evaluate a multi-atlas attenuation correction method for brain imaging (MaxProb) on static [18F]FDG PET and, for the first time, on dynamic PET, using the serotoninergic tracer [18F]MPPF. A database of 40 MR/CT image pairs (atlases) was used. The MaxProb method synthesises subject-specific pseudo-CTs by registering each atlas to the target subject space. Atlas CT intensities are then fused via label propagation and majority voting. Here, we compared these pseudo-CTs with the real CTs in a leave-one-out design, contrasting the MaxProb approach with a simplified single-atlas method (SingleAtlas). We evaluated the impact of pseudo-CT accuracy on reconstructed PET images, compared to PET data reconstructed with real CT, at the regional and voxel levels for the following: radioactivity images; time-activity curves; and kinetic parameters (non-displaceable binding potential, BPND). On static [18F]FDG, the mean bias for MaxProb ranged between 0 and 1% for 73 out of 84 regions assessed, and exceptionally peaked at 2.5% for only one region. Statistical parametric map analysis of MaxProb-corrected PET data showed significant differences in less than 0.02% of the brain volume, whereas SingleAtlas-corrected data showed significant differences in 20% of the brain volume. On dynamic [18F]MPPF, most regional errors on BPND ranged from -1 to +3% (maximum bias 5%) for the MaxProb method. With SingleAtlas, errors were larger and had higher variability in most regions. PET quantification bias increased over the duration of the dynamic scan for SingleAtlas, but not for MaxProb. We show that this effect is due to the interaction of the spatial tracer-distribution heterogeneity variation over time with the degree of accuracy of the attenuation maps. This work demonstrates that inaccuracies in attenuation maps can induce bias in dynamic brain PET studies. Multi-atlas attenuation correction with MaxProb enables quantification on hybrid PET-MR scanners, eschewing the need for CT.
Mérida, Inés; Reilhac, Anthonin; Redouté, Jérôme; Heckemann, Rolf A; Costes, Nicolas; Hammers, Alexander
2017-04-07
In simultaneous PET-MR, attenuation maps are not directly available. Essential for absolute radioactivity quantification, they need to be derived from MR or PET data to correct for gamma photon attenuation by the imaged object. We evaluate a multi-atlas attenuation correction method for brain imaging (MaxProb) on static [18F]FDG PET and, for the first time, on dynamic PET, using the serotoninergic tracer [18F]MPPF. A database of 40 MR/CT image pairs (atlases) was used. The MaxProb method synthesises subject-specific pseudo-CTs by registering each atlas to the target subject space. Atlas CT intensities are then fused via label propagation and majority voting. Here, we compared these pseudo-CTs with the real CTs in a leave-one-out design, contrasting the MaxProb approach with a simplified single-atlas method (SingleAtlas). We evaluated the impact of pseudo-CT accuracy on reconstructed PET images, compared to PET data reconstructed with real CT, at the regional and voxel levels for the following: radioactivity images; time-activity curves; and kinetic parameters (non-displaceable binding potential, BPND). On static [18F]FDG, the mean bias for MaxProb ranged between 0 and 1% for 73 out of 84 regions assessed, and exceptionally peaked at 2.5% for only one region. Statistical parametric map analysis of MaxProb-corrected PET data showed significant differences in less than 0.02% of the brain volume, whereas SingleAtlas-corrected data showed significant differences in 20% of the brain volume. On dynamic [18F]MPPF, most regional errors on BPND ranged from -1 to +3% (maximum bias 5%) for the MaxProb method. With SingleAtlas, errors were larger and had higher variability in most regions. PET quantification bias increased over the duration of the dynamic scan for SingleAtlas, but not for MaxProb. We show that this effect is due to the interaction of the spatial tracer-distribution heterogeneity variation over time with the degree of accuracy of the attenuation maps. This work demonstrates that inaccuracies in attenuation maps can induce bias in dynamic brain PET studies. Multi-atlas attenuation correction with MaxProb enables quantification on hybrid PET-MR scanners, eschewing the need for CT.
Biases in cost measurement for economic evaluation studies in health care.
Jacobs, P; Baladi, J F
1996-01-01
This paper addresses the issue of biases in the cost measures used in economic evaluation studies. The basic measure of hospital costs used by most investigators is unit cost. Focusing on this measure, we identify a set of criteria that the basic measure must fulfil in order to approximate the marginal cost (MC) of a service for the relevant product at a representative site. We then identify four distinct biases: a scale bias, a case-mix bias, a methods bias and a site-selection bias, each reflecting the divergence of the unit cost measure from the desired MC measure. Measures are proposed for several of these biases, and it is suggested how they can be corrected.
A comparison of methods to estimate future sub-daily design rainfall
NASA Astrophysics Data System (ADS)
Li, J.; Johnson, F.; Evans, J.; Sharma, A.
2017-12-01
Warmer temperatures are expected to increase extreme short-duration rainfall due to the increased moisture-holding capacity of the atmosphere. While attention has been paid to the impacts of climate change on future design rainfalls at daily or longer time scales, the potential changes in short duration design rainfalls have been often overlooked due to the limited availability of sub-daily projections and observations. This study uses a high-resolution regional climate model (RCM) to predict the changes in sub-daily design rainfalls for the Greater Sydney region in Australia. Sixteen methods for predicting changes to sub-daily future extremes are assessed based on different options for bias correction, disaggregation and frequency analysis. A Monte Carlo cross-validation procedure is employed to evaluate the skill of each method in estimating the design rainfall for the current climate. It is found that bias correction significantly improves the accuracy of the design rainfall estimated for the current climate. For 1 h events, bias correcting the hourly annual maximum rainfall simulated by the RCM produces design rainfall closest to observations, whereas for multi-hour events, disaggregating the daily rainfall total is recommended. This suggests that the RCM fails to simulate the observed multi-duration rainfall persistence, which is a common issue for most climate models. Despite the significant differences in the estimated design rainfalls between different methods, all methods lead to an increase in design rainfalls across the majority of the study region.
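The frequency-analysis step referred to above can be illustrated with a standard GEV fit to annual maxima; the sketch below uses synthetic data and scipy, and is not the specific procedure evaluated in the paper.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual maximum 1 h rainfall depths (mm) for 40 years.
annual_max = genextreme.rvs(c=-0.1, loc=25, scale=8, size=40, random_state=42)

# Fit a GEV and read off the 100-year design rainfall, i.e. the quantile
# with an annual exceedance probability of 1/100.
c, loc, scale = genextreme.fit(annual_max)
design_100yr = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(round(design_100yr, 1), "mm")
```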
NASA Astrophysics Data System (ADS)
Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu
2018-04-01
Marine gravity anomalies derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slopes of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups based on whether they are small, comparable, or large with respect to the 1 microradian error in current sea surface slope models. (1) The small and thus negligible corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction and sea state bias. The radiometer measurements are preferable to the model values in the geophysical data records for constraining the wet tropospheric effect, owing to the highly variable water-vapor structure of the atmosphere. The dual-frequency ionospheric correction and sea state bias should not be added directly to the range observations when computing sea surface slopes, since their inherent errors may produce anomalous slopes; along-track smoothing with uniform weights over a suitable window is an effective strategy for avoiding the introduction of extra noise. The slopes calculated from radiometer wet tropospheric corrections and from along-track-smoothed dual-frequency ionospheric corrections and sea state bias are generally within ±0.5 microradians and no larger than 1 microradian. (3) The ocean tide has the largest influence on sea surface slopes, although most ocean tide slopes lie within ±3 microradians. Larger ocean tide slopes mostly occur over marginal and island-surrounding seas, and tidal models with better precision or with extended coverage (e.g. Got-e) are strongly recommended for updating the corrections in the geophysical data records.
Development of an Empirical Local Magnitude Formula for Northern Oklahoma
NASA Astrophysics Data System (ADS)
Spriggs, N.; Karimi, S.; Moores, A. O.
2015-12-01
In this paper we focus on determining a local magnitude formula for northern Oklahoma that is unbiased with distance, by empirically constraining the attenuation properties within the region of interest from the amplitudes of observed seismograms. For regional networks detecting events over several hundred kilometres, distance correction terms play an important role in determining the magnitude of an event. Standard distance correction terms, such as those of Hutton and Boore (1987), may be significantly biased with distance if applied in a region with different attenuation properties, resulting in incorrect magnitudes. We present data from a regional network of broadband seismometers installed in bedrock in northern Oklahoma. Events with magnitudes between 2.0 and 4.5, distributed evenly across the network, are considered. We find that existing models show a bias with respect to hypocentral distance. Observed amplitude measurements demonstrate a significant Moho-bounce effect that mandates the use of a trilinear attenuation model to avoid bias in the distance correction terms. We present two different approaches to local magnitude calibration. The first maintains the classic definition of local magnitude as proposed by Richter. The second calibrates local magnitude so that it agrees with moment magnitude where a regional moment tensor can be computed. To this end, regional moment tensor solutions and moment magnitudes are computed for events with magnitude larger than 3.5 to allow calibration of local magnitude to moment magnitude. For both methods the new formula yields magnitudes systematically lower than previous values computed with Eaton's (1992) model. We compare the resulting magnitudes and discuss the benefits and drawbacks of each method. Our results highlight the importance of correctly calibrating the distance correction terms for accurate local magnitude assessment in regional networks.
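To make the role of the distance correction concrete, the sketch below evaluates a Richter-style local magnitude ML = log10(A) - log10(A0(r)) with a hypothetical trilinear -log10(A0) whose middle segment flattens over a presumed Moho-bounce distance window; every coefficient here is invented for illustration, and none are the calibrated Oklahoma values.

```python
import numpy as np

def neg_log_a0(r_km, n1=1.1, n2=0.2, n3=1.5, r1=90.0, r2=150.0, k=0.0008, c0=0.7):
    """Hypothetical trilinear -log10(A0) distance correction: three log-linear
    segments, with a flatter slope (n2) in the Moho-bounce window r1-r2 km,
    plus an anelastic attenuation term k*r.  All values are illustrative."""
    r = np.asarray(r_km, dtype=float)
    seg1 = n1 * np.log10(np.minimum(r, r1) / 17.0)
    seg2 = n2 * np.log10(np.clip(r, r1, r2) / r1)
    seg3 = n3 * np.log10(np.maximum(r, r2) / r2)
    return c0 + seg1 + seg2 + seg3 + k * r

def local_magnitude(amp_mm, r_km):
    """Richter-style ML from a peak amplitude (mm on a standard instrument)."""
    return np.log10(amp_mm) + neg_log_a0(r_km)

for r in (50, 120, 300):
    print(r, round(float(local_magnitude(0.5, r)), 2))
```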
Lead burdens and behavioral impairments of the lined shore crab Pachygrapsus crassipes
Hui, Clifford A.
2002-01-01
Unless a correction is made, population estimates derived from a sample of belt transects will be biased if a fraction of the individuals on the sample transects are not counted. An approach for correcting this bias when sampling immotile populations using transects of a fixed width is presented. The method assumes that a searcher's ability to find objects near the center of the transect is nearly perfect, and uses a mathematical equation, estimated from the data, to represent the searcher's declining ability to find objects at increasing distances from the center of the transect. An example of the analysis of data, formation of the equation, and application is presented using waterfowl nesting data collected in Colorado.
Different hunting strategies select for different weights in red deer.
Martínez, María; Rodríguez-Vigal, Carlos; Jones, Owen R; Coulson, Tim; San Miguel, Alfonso
2005-09-22
Much insight can be derived from records of shot animals. Most researchers using such data assume that their data represents a random sample of a particular demographic class. However, hunters typically select a non-random subset of the population and hunting is, therefore, not a random process. Here, with red deer (Cervus elaphus) hunting data from a ranch in Toledo, Spain, we demonstrate that data collection methods have a significant influence upon the apparent relationship between age and weight. We argue that a failure to correct for such methodological bias may have significant consequences for the interpretation of analyses involving weight or correlated traits such as breeding success, and urge researchers to explore methods to identify and correct for such bias in their data.
Staiger, Douglas O; Sharp, Sandra M; Gottlieb, Daniel J; Bevan, Gwyn; McPherson, Klim; Welch, H Gilbert
2013-01-01
Objective To determine the bias associated with frequency of visits by physicians in adjusting for illness, using diagnoses recorded in administrative databases. Setting Claims data from the US Medicare program for services provided in 2007 among 306 US hospital referral regions. Design Cross sectional analysis. Participants 20% sample of fee for service Medicare beneficiaries residing in the United States in 2007 (n=5 153 877). Main outcome measures The effect of illness adjustment on regional mortality and spending rates using standard and visit corrected illness methods for adjustment. The standard method adjusts using comorbidity measures based on diagnoses listed in administrative databases; the modified method corrects these measures for the frequency of visits by physicians. Three conventions for measuring comorbidity are used: the Charlson comorbidity index, Iezzoni chronic conditions, and hierarchical condition categories risk scores. Results The visit corrected Charlson comorbidity index explained more of the variation in age, sex, and race mortality across the 306 hospital referral regions than did the standard index (R2=0.21 v 0.11, P<0.001) and, compared with sex and race adjusted mortality, reduced regional variation, whereas adjustment using the standard Charlson comorbidity index increased it. Although visit corrected and age, sex, and race adjusted mortality rates were similar in hospital referral regions with the highest and lowest fifths of visits, adjustment using the standard index resulted in a rate that was 18% lower in the highest fifth (46.4 v 56.3 deaths per 1000, P<0.001). Age, sex, and race adjusted spending as well as visit corrected spending was more than 30% greater in the highest fifth of visits than in the lowest fifth, but only 12% greater after adjustment using the standard index. Similar results were obtained using the Iezzoni and the hierarchical condition categories conventions for measuring comorbidity. Conclusion The rates of visits by physicians introduce substantial bias when regional mortality and spending rates are adjusted for illness using comorbidity measures based on the observed number of diagnoses recorded in Medicare’s administrative database. Adjusting without correction for regional variation in visit rates tends to make regions with high rates of visits seem to have lower mortality and lower costs, and vice versa. Visit corrected comorbidity measures better explain variation in age, sex, and race mortality than observed measures, and reduce observational intensity bias. PMID:23430282
Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models
ERIC Educational Resources Information Center
Raykov, Tenko
2005-01-01
A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…
A retrieval-based approach to eliminating hindsight bias.
Van Boekel, Martin; Varma, Keisha; Varma, Sashank
2017-03-01
Individuals exhibit hindsight bias when they are unable to recall their original responses to novel questions after correct answers are provided to them. Prior studies have eliminated hindsight bias by modifying the conditions under which original judgments or correct answers are encoded. Here, we explored whether hindsight bias can be eliminated by manipulating the conditions that hold at retrieval. Our retrieval-based approach predicts that if the conditions at retrieval enable sufficient discrimination of memory representations of original judgments from memory representations of correct answers, then hindsight bias will be reduced or eliminated. Experiment 1 used the standard memory design to replicate the hindsight bias effect in middle-school students. Experiments 2 and 3 modified the retrieval phase of this design, instructing participants beforehand that they would be recalling both their original judgments and the correct answers. As predicted, this enabled participants to form compound retrieval cues that discriminated original judgment traces from correct answer traces, and eliminated hindsight bias. Experiment 4 found that when participants were not instructed beforehand that they would be making both recalls, they did not form discriminating retrieval cues, and hindsight bias returned. These experiments delineate the retrieval conditions that produce, and fail to produce, hindsight bias.
Validation of satellite-based rainfall in Kalahari
NASA Astrophysics Data System (ADS)
Lekula, Moiteela; Lubczynski, Maciek W.; Shemang, Elisha M.; Verhoef, Wouter
2018-06-01
Water resources management in arid and semi-arid areas is hampered by insufficient rainfall data, typically obtained from sparsely distributed rain gauges. Satellite-based rainfall estimates (SREs) are alternative sources of such data in these areas. In this study, daily rainfall estimates from FEWS-RFE∼11 km, TRMM-3B42∼27 km, CMORPH∼27 km and CMORPH∼8 km were evaluated against nine daily rain gauge records in the Central Kalahari Basin (CKB) over a five-year period, 01/01/2001-31/12/2005. The aims were to evaluate the daily rainfall detection capabilities of the four SRE algorithms, analyze the spatio-temporal variability of rainfall in the CKB and perform bias correction of the four SREs. Evaluation methods included scatter plot analysis, descriptive statistics, categorical statistics and bias decomposition. The spatio-temporal variability of rainfall was assessed using the SREs' mean annual rainfall, standard deviation, coefficient of variation and spatial correlation functions. Bias correction of the four SREs was conducted using a Time-Varying Space-Fixed bias-correction scheme. The results underlined the importance of validating daily SREs, as they had different rainfall detection capabilities in the CKB. The FEWS-RFE∼11 km performed best, providing better descriptive and categorical statistics than the other three SREs, although bias decomposition showed that all SREs underestimated rainfall. The most reliable indicators of SRE performance were the frequency of "miss" rainfall events and the "miss-bias", as they directly indicate an SRE's sensitivity and bias of rainfall detection, respectively. The Time-Varying Space-Fixed (TVSF) bias-correction scheme improved some error measures but reduced the spatial correlation distance, thus increasing the already high spatial rainfall variability of all four SREs. This study highlighted SREs as a valuable source of daily rainfall data with good spatio-temporal coverage, especially suitable for areas with few rain gauges such as the CKB, but also emphasized the SREs' drawbacks, creating avenues for follow-up research.
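The categorical statistics mentioned above are simple to compute from paired daily series; a generic sketch with synthetic data and an illustrative 1 mm wet-day threshold follows.

```python
import numpy as np

def categorical_scores(sat_mm, gauge_mm, threshold=1.0):
    """Daily rainfall detection skill of a satellite product against a gauge:
    probability of detection (POD), false alarm ratio (FAR) and critical
    success index (CSI) for a given wet-day threshold (mm/day)."""
    sat_wet, gauge_wet = sat_mm >= threshold, gauge_mm >= threshold
    hits         = np.sum(sat_wet & gauge_wet)
    misses       = np.sum(~sat_wet & gauge_wet)
    false_alarms = np.sum(sat_wet & ~gauge_wet)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

rng = np.random.default_rng(3)
gauge = rng.gamma(0.4, 8.0, 365) * (rng.random(365) < 0.25)      # sparse wet days
sat = np.where(rng.random(365) < 0.85, gauge, 0.0)               # ~15% misses
sat = sat + (rng.random(365) < 0.05) * rng.gamma(0.5, 4.0, 365)  # false alarms

pod, far, csi = categorical_scores(sat, gauge)
print(round(pod, 2), round(far, 2), round(csi, 2))
```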
Length bias correction in one-day cross-sectional assessments - The nutritionDay study.
Frantal, Sophie; Pernicka, Elisabeth; Hiesmayr, Michael; Schindler, Karin; Bauer, Peter
2016-04-01
A major problem occurring in cross-sectional studies is sampling bias. Length of hospital stay (LOS) differs strongly between patients and causes a length bias, as patients with longer LOS are more likely to be included and are therefore overrepresented in this type of study. To adjust for the length bias, higher weights are allocated to patients with shorter LOS. We determined the effect of length-bias adjustment in two independent populations. Length-bias correction is applied to the data of the nutritionDay project, a one-day multinational cross-sectional audit capturing data on disease and nutrition of patients admitted to hospital wards, with right-censoring after 30 days of follow-up. We applied the weighting method for estimating the distribution function of patient baseline variables based on the method of non-parametric maximum likelihood. Results are validated using data from all patients admitted to the General Hospital of Vienna between 2005 and 2009, where the distribution of LOS can be assumed to be known. Additionally, a simplified calculation scheme for estimating the adjusted distribution function of LOS is demonstrated on a small patient example. The crude median (lower quartile; upper quartile) LOS in the cross-sectional sample was 14 (8; 24) and decreased to 7 (4; 12) when adjusted. Hence, adjustment for length bias in cross-sectional studies is essential to obtain appropriate estimates. Copyright © 2015 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
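A minimal sketch of the weighting idea: under one-day cross-sectional sampling, the chance of catching a patient is roughly proportional to LOS, so weighting each sampled patient by 1/LOS recovers the admission-cohort distribution. The sketch ignores the right-censoring that the paper's nonparametric maximum likelihood method handles.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical LOS (days) of an admission cohort.  Cross-sectional sampling
# is length-biased: the chance of being present on the survey day is
# proportional to LOS.
true_los = rng.gamma(2.0, 5.0, size=100_000)
caught = rng.random(100_000) < true_los / true_los.max()
sample = true_los[caught]

# Weight each sampled patient by 1/LOS to undo the length bias, then read a
# weighted median off the weighted empirical CDF.
w = 1.0 / sample
order = np.argsort(sample)
cdf = np.cumsum(w[order]) / w.sum()
weighted_median = sample[order][np.searchsorted(cdf, 0.5)]

print(round(np.median(sample), 1), round(weighted_median, 1))  # crude vs adjusted
```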
Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5
NASA Astrophysics Data System (ADS)
Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.
2014-12-01
MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms use surface parameterizations trained on global datasets and therefore do not properly account for urban surface differences. This parameterization continues to underestimate the surface reflection, which results in a general positive bias in AOD retrievals. Recent results using the Dragon-Network datasets as well as high-resolution retrievals in the NYC area illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at City College to obtain bias-corrected AOD, but the homogeneity assumption implied by using only one site for the region is clearly an issue. The DragonNET observations, on the other hand, provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method for bias correction of the MODIS AOD using multiple factors, including surface reflectivity at 2130 nm, sun-view geometrical factors and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes, the urban pixels constitute the largest deviations from the operational model.
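A sketch of the general approach, assuming hypothetical predictors (2130 nm surface reflectance, a scattering-angle geometry factor, a land-class code) and synthetic training data; the study's actual network, features and training set differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000

# Hypothetical predictors; the target is the MODIS-minus-AERONET AOD bias.
refl_2130 = rng.uniform(0.05, 0.30, n)
scat_angle = rng.uniform(90, 180, n)
land_class = rng.integers(0, 5, n)
bias = (0.8 * refl_2130 - 0.001 * (scat_angle - 135)
        + 0.02 * land_class + rng.normal(0, 0.02, n))   # synthetic "truth"

X = np.column_stack([refl_2130, scat_angle, land_class])
X_tr, X_te, y_tr, y_te = train_test_split(X, bias, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
net.fit(X_tr, y_tr)
print(round(net.score(X_te, y_te), 2))  # corrected AOD = raw AOD - net.predict(...)
```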
Robust mislabel logistic regression without modeling mislabel probabilities.
Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun
2018-03-01
Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
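The down-weighting idea can be illustrated as follows: iterate a weighted logistic fit in which each observation's weight is its current fitted probability raised to the power γ, so suspected mislabels receive small weights. This conveys the spirit of the weighting scheme only; it is not the paper's exact minimum γ-divergence estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def gamma_weighted_logistic(X, y, gamma=0.5, n_iter=10):
    """Iteratively reweighted logistic fit that down-weights observations the
    current model finds implausible (weight = fitted P(y_i | x_i) ** gamma).
    Illustrative only; not the paper's estimator."""
    model = LogisticRegression()
    w = np.ones(len(y))
    for _ in range(n_iter):
        model.fit(X, y, sample_weight=w)
        p = model.predict_proba(X)          # columns follow model.classes_
        w = p[np.arange(len(y)), y] ** gamma
    return model

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 2))
y = (X @ np.array([2.0, -1.0]) + rng.normal(0, 0.5, 500) > 0).astype(int)
y[rng.random(500) < 0.1] ^= 1               # flip 10% of labels (mislabeling)
print(gamma_weighted_logistic(X, y).coef_)
```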
How and how much does RAD-seq bias genetic diversity estimates?
Cariou, Marie; Duret, Laurent; Charlat, Sylvain
2016-11-08
RAD-seq is a powerful tool, increasingly used in population genomics. However, earlier studies have raised red flags regarding possible biases associated with this technique. In particular, polymorphism on restriction sites results in preferential sampling of closely related haplotypes, so that RAD data tend to underestimate genetic diversity. Here we (1) clarify the theoretical basis of this bias, highlighting the potential confounding effects of population structure and selection, (2) confront predictions with real data from in silico digestion of full genomes and (3) provide a proof of concept toward an ABC-based correction of the RAD-seq bias. Under a neutral and panmictic model, we confirm the previously established relationship between the true polymorphism and its RAD-based estimation, showing a more pronounced bias when polymorphism is high. Using more elaborate models, we show that selection, resulting in heterogeneous levels of polymorphism along the genome, exacerbates the bias and leads to a more pronounced underestimation. On the contrary, spatial genetic structure tends to reduce the bias. We confront the neutral and panmictic model with "ideal" empirical data (in silico RAD-sequencing) using full genomes from natural populations of the fruit fly Drosophila melanogaster and the fungus Schizophyllum commune, harbouring respectively moderate and high genetic diversity. In D. melanogaster, predictions fit the model, but the small difference between the true and RAD polymorphism makes this comparison insensitive to deviations from the model. In the highly polymorphic fungus, the model captures a large part of the bias but makes inaccurate predictions. Accordingly, ABC corrections based on this model improve the estimations, albeit with some imprecisions. The RAD-seq underestimation of genetic diversity associated with polymorphism in restriction sites becomes more pronounced when polymorphism is high. In practice, this means that in many systems where polymorphism does not exceed 2%, the bias is of minor importance in the face of other sources of uncertainty, such as heterogeneous base composition or technical artefacts. The neutral panmictic model provides a practical means to correct the bias through ABC, albeit with some imprecisions. More elaborate ABC methods might integrate additional parameters, such as population structure and selection, but their opposite effects could hinder accurate corrections.
Nonlinear vs. linear biasing in Trp-cage folding simulations
NASA Astrophysics Data System (ADS)
Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka
2015-03-01
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
Nonlinear vs. linear biasing in Trp-cage folding simulations.
Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka
2015-03-21
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
NASA Astrophysics Data System (ADS)
Worqlul, Abeyou W.; Ayana, Essayas K.; Maathuis, Ben H. P.; MacAlister, Charlotte; Philpot, William D.; Osorio Leyton, Javier M.; Steenhuis, Tammo S.
2018-01-01
In many developing countries and remote areas of important ecosystems, good-quality precipitation data are neither available nor readily accessible. Satellite observations and processing algorithms are being extensively used to produce satellite rainfall estimates (SREs). Nevertheless, these products are prone to systematic errors and need extensive validation before they can be used for streamflow simulations. In this study, we investigated and corrected the bias of Multi-Sensor Precipitation Estimate-Geostationary (MPEG) data. The corrected MPEG dataset was used as input to the semi-distributed hydrological model Hydrologiska Byråns Vattenbalansavdelning (HBV) for simulating the discharge of the Gilgel Abay and Gumara watersheds in the Upper Blue Nile basin, Ethiopia. The results indicated that the MPEG satellite rainfall captured 81% and 78% of the gauged rainfall variability in the two watersheds, with a consistent bias of underestimating the gauged rainfall by 60%. A linear bias correction significantly reduced this bias while maintaining the coefficient of correlation. For both watersheds, flows simulated with the bias-corrected MPEG SRE were comparable to those simulated with gauge rainfall. The study indicates the potential of MPEG SREs in water budget studies once a linear bias correction is applied.
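A minimal sketch of a multiplicative linear bias correction of the kind described above, with synthetic data mimicking the reported ~60% underestimation; monthly scaling factors force the satellite monthly means to match the gauges. This is an illustration, not the authors' code.

```python
import numpy as np

def linear_scaling(sat_mm, gauge_mm, months):
    """Multiplicative linear bias correction: scale the satellite series so
    that its monthly means match the gauge monthly means."""
    sat_mm = np.asarray(sat_mm, float)
    corrected = sat_mm.copy()
    for m in np.unique(months):
        sel = months == m
        factor = gauge_mm[sel].mean() / max(sat_mm[sel].mean(), 1e-9)
        corrected[sel] *= factor
    return corrected

rng = np.random.default_rng(7)
months = np.repeat(np.arange(1, 13), 30)                  # 30 values per month
gauge = rng.gamma(2.0, 3.0, months.size)
sat = np.clip(0.4 * gauge + rng.normal(0, 0.3, months.size), 0, None)  # ~60% low

print(gauge.mean().round(2), sat.mean().round(2),
      linear_scaling(sat, gauge, months).mean().round(2))
```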
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro
We propose a safeguard procedure for statistical inference that provides universal protection against mismodeling of the background. The method quantifies and incorporates the signal-like residuals of the background model into the likelihood function, using information available in a calibration dataset. This prevents possible false discovery claims that may arise through unknown mismodeling, and corrects the bias in limit setting created by overestimated or underestimated background. We demonstrate how the method removes the bias created by an incomplete background model using three realistic case studies.
Influence of Signal Intensity Non-Uniformity on Brain Volumetry Using an Atlas-Based Method
Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni
2012-01-01
Objective Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Materials and Methods Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. Results A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. Conclusion The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials. PMID:22778560
A minimalist approach to bias estimation for passive sensor measurements with targets of opportunity
NASA Astrophysics Data System (ADS)
Belfadel, Djedjiga; Osborne, Richard W.; Bar-Shalom, Yaakov
2013-09-01
In order to carry out data fusion, registration error correction is crucial in multisensor systems. This requires estimation of the sensor measurement biases. It is important to correct for these bias errors so that the multiple sensor measurements and/or tracks can be referenced as accurately as possible to a common tracking coordinate system. This paper provides a solution for bias estimation for the minimum number of passive sensors (two), when only targets of opportunity are available. The sensor measurements are assumed time-coincident (synchronous) and perfectly associated. Since these sensors provide only line of sight (LOS) measurements, the formation of a single composite Cartesian measurement obtained from fusing the LOS measurements from different sensors is needed to avoid the need for nonlinear filtering. We evaluate the Cramer-Rao Lower Bound (CRLB) on the covariance of the bias estimate, i.e., the quantification of the available information about the biases. Statistical tests on the results of simulations show that this method is statistically efficient, even for small sample sizes (as few as two sensors and six points on the trajectory of a single target of opportunity). We also show that the RMS position error is significantly improved with bias estimation compared with the target position estimation using the original biased measurements.
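The composite Cartesian measurement formed from two LOS measurements amounts to a least-squares ray intersection. A noise-free toy version is sketched below; the paper's method additionally estimates the sensor biases, which this sketch omits.

```python
import numpy as np

def fuse_los(sensor_pos, los_dirs):
    """Least-squares composite Cartesian position from line-of-sight (LOS)
    measurements: the point minimizing squared distance to every LOS ray."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(sensor_pos, los_dirs):
        u = u / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

target = np.array([10.0, 5.0, 3.0])
sensors = [np.array([0.0, 0.0, 0.0]), np.array([20.0, 0.0, 0.0])]
los = [target - s for s in sensors]      # bias- and noise-free for the demo
print(fuse_los(sensors, los))            # recovers the target position
```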
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
Guo, Ying; Little, Roderick J; McConnell, Daniel S
2012-01-01
Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
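For contrast, here is a minimal sketch of regression calibration, one of the comparison methods named above (not the proposed multiple-imputation approach): learn E[X|W] in the calibration sample, impute X in the main sample, and run the outcome regression on the imputed values.

```python
import numpy as np

rng = np.random.default_rng(8)

# Main sample: observe (Y, W, Z); X is latent.  Calibration sample: (X, W).
n, m = 1000, 300
X = rng.normal(0, 1, n); Z = rng.normal(0, 1, n)
W = X + rng.normal(0, 0.8, n)                     # error-prone measure of X
Y = 1.0 * X + 0.5 * Z + rng.normal(0, 1, n)

X_cal = rng.normal(0, 1, m)
W_cal = X_cal + rng.normal(0, 0.8, m)

# Regression calibration: fit E[X | W] in the calibration sample, impute
# X-hat in the main sample, then regress Y on (X-hat, Z).
a, b = np.polyfit(W_cal, X_cal, 1)                # slope, intercept
X_hat = a * W + b

beta = np.linalg.lstsq(np.column_stack([np.ones(n), X_hat, Z]), Y, rcond=None)[0]
naive = np.linalg.lstsq(np.column_stack([np.ones(n), W, Z]), Y, rcond=None)[0]
print(naive[1].round(2), beta[1].round(2))        # attenuated vs corrected slope
```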
NASA Astrophysics Data System (ADS)
Abitew, T. A.; van Griensven, A.; Bauwens, W.
2015-12-01
Evapotranspiration (ET) is the main outgoing flux in hydrology (on average around 60% of the water balance), yet it has not received as much attention in the evaluation and calibration of hydrological models. In this study, Remote Sensing (RS) derived ET is used to improve the spatially distributed representation of ET in SWAT model applications for the upper Mara basin (Kenya) and the Blue Nile basin (Ethiopia). The RS-derived ET data are obtained from recently compiled global datasets (continuous monthly data at 1 km resolution from the MOD16NBI, SSEBop, ALEXI and CMRSET models) and from regionally applied energy balance models (for several cloud-free days). The RS-ET data are used in three ways: Method 1) to evaluate spatially distributed ET model results; Method 2) to calibrate the ET processes in the hydrological model; and Method 3) to bias-correct the ET in the hydrological model during simulation, after modifying the SWAT code. An inter-comparison of the RS-ET products shows that at present there is a significant bias, but at the same time agreement on the spatial variability of ET. The ensemble mean of the different ET products seems the most realistic estimate and was used in the remainder of this study. The results show that: Method 1) the spatially mapped ET of hydrological models differs clearly from RS-derived ET (low correlations), with ET in forested areas strongly underestimated compared to other land covers; Method 2) calibration improves the correlations between the RS and hydrological model results to some extent; and Method 3) bias correction is efficient in producing (seasonal or annual) ET maps from hydrological models that are very similar to the patterns obtained from RS data. Although the bias correction is very efficient, it is advisable to improve the model results by better representing the ET processes through improved plant/crop computations, improved agricultural management practices, or improved meteorological input data.
Correcting for particle counting bias error in turbulent flow
NASA Technical Reports Server (NTRS)
Edwards, R. V.; Baratuci, W.
1985-01-01
Even an ideal seeding device, generating particles that exactly follow the flow, would still leave a major source of error: a particle counting bias, wherein the probability of measuring a velocity is a function of the velocity itself. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way, and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation was constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
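One classical correction of this type weights each measured sample by the inverse of its velocity magnitude, since faster fluid carries proportionally more particles through the probe volume; a synthetic demonstration follows (an illustration in the spirit of McLaughlin-Tiederman weighting, not the simulator described above).

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulate velocity-biased sampling: in a counting-type laser anemometer,
# the probability of recording a sample grows with the velocity magnitude.
u_true = rng.normal(10.0, 3.0, 200_000)            # true velocity population
keep = rng.random(200_000) < np.abs(u_true) / np.abs(u_true).max()
u_meas = u_true[keep]                              # biased sample

# Inverse-velocity weighting to recover the unbiased mean.
w = 1.0 / np.abs(u_meas)
corrected_mean = np.sum(w * u_meas) / np.sum(w)

print(round(u_true.mean(), 2), round(u_meas.mean(), 2), round(corrected_mean, 2))
```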
Culpepper, Steven Andrew
2016-06-01
Standardized tests are frequently used for selection decisions, and the validation of test scores remains an important area of research. This paper builds upon prior literature about the effect of nonlinearity and heteroscedasticity on the accuracy of standard formulas for correcting correlations in restricted samples. Existing formulas for direct range restriction require three assumptions: (1) the criterion variable is missing at random; (2) a linear relationship between independent and dependent variables; and (3) constant error variance or homoscedasticity. The results in this paper demonstrate that the standard approach for correcting restricted correlations is severely biased in cases of extreme monotone quadratic nonlinearity and heteroscedasticity. This paper offers at least three significant contributions to the existing literature. First, a method from the econometrics literature is adapted to provide more accurate estimates of unrestricted correlations. Second, derivations establish bounds on the degree of bias attributed to quadratic functions under the assumption of a monotonic relationship between test scores and criterion measurements. New results are presented on the bias associated with using the standard range restriction correction formula, and the results show that the standard correction formula yields estimates of unrestricted correlations that deviate by as much as 0.2 for high to moderate selectivity. Third, Monte Carlo simulation results demonstrate that the new procedure for correcting restricted correlations provides more accurate estimates in the presence of quadratic and heteroscedastic test score and criterion relationships.
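For reference, the standard correction for direct range restriction under scrutiny here (Thorndike's Case II) is a one-liner; it assumes the linearity and homoscedasticity whose violation the paper analyzes.

```python
import math

def correct_direct_range_restriction(r, sd_restricted, sd_unrestricted):
    """Standard (Thorndike Case II) correction for direct range restriction:
    inflate a restricted-sample correlation r given the restricted and
    unrestricted predictor standard deviations."""
    k = sd_unrestricted / sd_restricted
    return r * k / math.sqrt(1.0 + r * r * (k * k - 1.0))

# Restricted correlation of 0.30 when selection halved the predictor SD.
print(round(correct_direct_range_restriction(0.30, 0.5, 1.0), 3))  # ~0.532
```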
Junk, J; Ulber, B; Vidal, S; Eickermann, M
2015-11-01
Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of agricultural production is the infestation level by pests. Based on long-term field surveys and meteorological observations, a calibration of an existing model describing the migration of the pest insect Ceutorhynchus napi was possible. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only simulations driven by the bias-corrected output of the regional climate models yielded satisfactory results. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.
NASA Astrophysics Data System (ADS)
Junk, J.; Ulber, B.; Vidal, S.; Eickermann, M.
2015-11-01
Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of agricultural production is the infestation level by pests. Based on long-term field surveys and meteorological observations, a calibration of an existing model describing the migration of the pest insect Ceutorhynchus napi was possible. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only simulations driven by the bias-corrected output of the regional climate models yielded satisfactory results. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.
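A minimal sketch of the two corrections named above: an additive linear shift for temperature and empirical quantile mapping for precipitation (synthetic data; not the study's implementation, which would be applied per calendar month or season).

```python
import numpy as np

rng = np.random.default_rng(10)

def linear_correction_temperature(model_t, obs_t):
    """Additive linear correction: shift modeled temperatures so that their
    mean matches the observed mean."""
    return model_t + (obs_t.mean() - model_t.mean())

def quantile_mapping(model_p, obs_p):
    """Empirical quantile mapping: replace each modeled precipitation value
    with the observed value at the same empirical quantile."""
    ranks = np.argsort(np.argsort(model_p))
    quantiles = (ranks + 0.5) / len(model_p)
    return np.quantile(obs_p, quantiles)

obs_t = rng.normal(9.0, 7.0, 3000)
model_t = rng.normal(7.5, 7.0, 3000)              # 1.5 K cold bias
obs_p = rng.gamma(2.0, 2.0, 3000)
model_p = rng.gamma(1.5, 3.5, 3000)               # wrong shape and scale

print(round(linear_correction_temperature(model_t, obs_t).mean(), 2))
print(round(quantile_mapping(model_p, obs_p).std(), 2), round(obs_p.std(), 2))
```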
Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling
NASA Astrophysics Data System (ADS)
Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang
2018-04-01
Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data is confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM to include both vegetation height and SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of Google Earth platform and Remote Sensing imagery; and (c) removing the positive biases of the raised segment in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of four different DEMs and favorable results have been obtained on the corrected DEM.
Assimilation of SMOS Retrievals in the Land Information System
NASA Technical Reports Server (NTRS)
Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.; Crosson, William L.
2016-01-01
The Soil Moisture and Ocean Salinity (SMOS) satellite provides retrievals of soil moisture in the upper 5 cm with a 30-50 km resolution and a mission accuracy requirement of 0.04 cm3 cm-3. These observations can be used to improve land surface model soil moisture states through data assimilation. In this paper, SMOS soil moisture retrievals are assimilated into the Noah land surface model via an Ensemble Kalman Filter within the NASA Land Information System. Bias correction is implemented using Cumulative Distribution Function (CDF) matching, with points aggregated by either land cover or soil type to reduce sampling error in generating the CDFs. An experiment was run for the warm season of 2011 to test SMOS data assimilation and to compare assimilation methods. Verification of soil moisture analyses in the 0-10 cm upper layer and root zone (0-1 m) was conducted using in situ measurements from several observing networks in the central and southeastern United States. This experiment showed that SMOS data assimilation significantly increased the anomaly correlation of Noah soil moisture with station measurements from 0.45 to 0.57 in the 0-10 cm layer. Time series at specific stations demonstrate the ability of SMOS DA to increase the dynamic range of soil moisture in a manner consistent with station measurements. Among the bias correction methods, the correction based on soil type performed best at bias reduction but also reduced correlations. The vegetation-based correction did not produce any significant differences compared to using a simple uniform correction curve.
Assimilation of SMOS Retrievals in the Land Information System
Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.; Crosson, William L.
2018-01-01
The Soil Moisture and Ocean Salinity (SMOS) satellite provides retrievals of soil moisture in the upper 5 cm with a 30-50 km resolution and a mission accuracy requirement of 0.04 cm3 cm−3. These observations can be used to improve land surface model soil moisture states through data assimilation. In this paper, SMOS soil moisture retrievals are assimilated into the Noah land surface model via an Ensemble Kalman Filter within the NASA Land Information System. Bias correction is implemented using Cumulative Distribution Function (CDF) matching, with points aggregated by either land cover or soil type to reduce sampling error in generating the CDFs. An experiment was run for the warm season of 2011 to test SMOS data assimilation and to compare assimilation methods. Verification of soil moisture analyses in the 0-10 cm upper layer and root zone (0-1 m) was conducted using in situ measurements from several observing networks in the central and southeastern United States. This experiment showed that SMOS data assimilation significantly increased the anomaly correlation of Noah soil moisture with station measurements from 0.45 to 0.57 in the 0-10 cm layer. Time series at specific stations demonstrate the ability of SMOS DA to increase the dynamic range of soil moisture in a manner consistent with station measurements. Among the bias correction methods, the correction based on soil type performed best at bias reduction but also reduced correlations. The vegetation-based correction did not produce any significant differences compared to using a simple uniform correction curve. PMID:29367795
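As a minimal sketch of the CDF-matching step described above (empirical quantile mapping of retrievals onto the model climatology; the aggregation by land cover or soil type is omitted, and all data below are synthetic):

```python
import numpy as np

def cdf_match(retrievals, model_clim, retrieval_clim):
    """Map each satellite retrieval onto the model's climatological
    distribution by empirical CDF matching (quantile mapping)."""
    # Empirical non-exceedance probability of each retrieval value
    # within the retrieval climatology ...
    probs = np.searchsorted(np.sort(retrieval_clim), retrievals) / len(retrieval_clim)
    probs = np.clip(probs, 0.0, 1.0)
    # ... evaluated at the model's climatological quantiles.
    return np.quantile(model_clim, probs)

rng = np.random.default_rng(0)
model_clim = rng.normal(0.25, 0.05, 5000)      # model soil moisture climatology
retrieval_clim = rng.normal(0.20, 0.08, 5000)  # biased, over-dispersed retrievals
obs = rng.normal(0.20, 0.08, 10)               # new retrievals to be corrected
print(cdf_match(obs, model_clim, retrieval_clim))
```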
Naturally Biased? In Search for Reaction Time Evidence for a Natural Number Bias in Adults
ERIC Educational Resources Information Center
Vamvakoussi, Xenia; Van Dooren, Wim; Verschaffel, Lieven
2012-01-01
A major source of errors in rational number tasks is the inappropriate application of natural number rules. We hypothesized that this is an instance of intuitive reasoning and thus can persist in adults, even when they respond correctly. This was tested by means of a reaction time method, relying on a dual process perspective that differentiates…
Verification of ECMWF System 4 for seasonal hydrological forecasting in a northern climate
NASA Astrophysics Data System (ADS)
Bazile, Rachel; Boucher, Marie-Amélie; Perreault, Luc; Leconte, Robert
2017-11-01
Hydropower production requires optimal dam and reservoir management to prevent flooding damage and avoid operation losses. In a northern climate, where spring freshet constitutes the main inflow volume, seasonal forecasts can help to establish a yearly strategy. Long-term hydrological forecasts often rely on past observations of streamflow or meteorological data. Another alternative is to use ensemble meteorological forecasts produced by climate models. In this paper, those produced by the ECMWF (European Centre for Medium-Range Weather Forecasts) System 4 are examined and their bias is characterized. Bias correction, through the linear scaling method, improves the performance of the raw ensemble meteorological forecasts in terms of the continuous ranked probability score (CRPS). Then, three seasonal ensemble hydrological forecasting systems are compared: (1) the climatology of simulated streamflow, (2) the ensemble hydrological forecasts based on climatology (ESP) and (3) the hydrological forecasts based on bias-corrected ensemble meteorological forecasts from System 4 (corr-DSP). Simulated streamflow computed using observed meteorological data is used as benchmark. Accounting for initial conditions is valuable even for long-term forecasts. ESP and corr-DSP both outperform the climatology of simulated streamflow for lead times from 1 to 5 months depending on the season and watershed. Integrating information about future meteorological conditions also improves monthly volume forecasts. For the 1-month lead time, a gain exists for almost all watersheds during winter, summer and fall. However, the performance of spring volume forecasts varies from one watershed to another. For most of them, the performance is close to the performance of ESP. For longer lead times, the CRPS skill score is mostly in favour of ESP, even if for many watersheds, ESP and corr-DSP have comparable skill. Corr-DSP appears quite reliable but, in some cases, under-dispersion or bias is observed. A more complex bias-correction method should be further investigated to remedy this weakness and take more advantage of the ensemble forecasts produced by the climate model. Overall, in this study, bias-corrected ensemble meteorological forecasts appear to be an interesting source of information for hydrological forecasting for lead times up to 1 month. They could also complement ESP for longer lead times.
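The linear scaling mentioned above is, in essence, a climatological mean adjustment. A minimal sketch, assuming the common convention of a multiplicative factor for precipitation and an additive shift for temperature (the abstract does not state which was used):

```python
import numpy as np

def linear_scaling(fcst, fcst_clim, obs_clim, multiplicative=True):
    """Linear-scaling bias correction of an ensemble forecast.

    Precipitation is usually corrected with a multiplicative factor and
    temperature with an additive shift (an assumption in this sketch).
    """
    if multiplicative:
        return fcst * (np.mean(obs_clim) / np.mean(fcst_clim))
    return fcst + (np.mean(obs_clim) - np.mean(fcst_clim))

# Toy monthly example: raw System 4 precipitation ensemble (mm/day)
raw_ens = np.array([2.1, 2.8, 3.4, 1.9])
print(linear_scaling(raw_ens,
                     fcst_clim=np.array([2.5, 3.0, 2.7]),
                     obs_clim=np.array([3.2, 3.8, 3.5])))
```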
Unbiased methods for removing systematics from galaxy clustering measurements
NASA Astrophysics Data System (ADS)
Elsner, Franz; Leistedt, Boris; Peiris, Hiranya V.
2016-02-01
Measuring the angular clustering of galaxies as a function of redshift is a powerful method for extracting information from the three-dimensional galaxy distribution. The precision of such measurements will dramatically increase with ongoing and future wide-field galaxy surveys. However, these are also increasingly sensitive to observational and astrophysical contaminants. Here, we study the statistical properties of three methods proposed for controlling such systematics - template subtraction, basic mode projection, and extended mode projection - all of which make use of externally supplied template maps, designed to characterize and capture the spatial variations of potential systematic effects. Based on a detailed mathematical analysis, and in agreement with simulations, we find that the template subtraction method in its original formulation returns biased estimates of the galaxy angular clustering. We derive closed-form expressions that should be used to correct results for this shortcoming. Turning to the basic mode projection algorithm, we prove it to be free of any bias, whereas we conclude that results computed with extended mode projection are biased. Within a simplified setup, we derive analytical expressions for the bias and discuss the options for correcting it in more realistic configurations. Common to all three methods is an increased estimator variance induced by the cleaning process, albeit at different levels. These results enable unbiased high-precision clustering measurements in the presence of spatially varying systematics, an essential step towards realizing the full potential of current and planned galaxy surveys.
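A toy illustration of template-based cleaning: contamination amplitudes are fit by least squares and subtracted, which, as the paper shows for template subtraction, also removes part of the signal and biases clustering statistics low. The closed-form debiasing factors derived in the paper are not reproduced here; all maps below are synthetic white noise:

```python
import numpy as np

rng = np.random.default_rng(1)
npix, ntemp = 10000, 3

signal = rng.normal(0.0, 1.0, npix)              # true galaxy overdensity map
templates = rng.normal(0.0, 1.0, (npix, ntemp))  # systematics template maps
alpha = np.array([0.5, -0.3, 0.2])               # contamination amplitudes
data = signal + templates @ alpha                # observed, contaminated map

# Fit the template amplitudes by least squares and subtract the best fit.
alpha_hat, *_ = np.linalg.lstsq(templates, data, rcond=None)
cleaned = data - templates @ alpha_hat

# The cleaning chance-correlates with the signal, so the cleaned map has
# slightly less variance than the truth; this is the bias the paper corrects.
print("estimated amplitudes:", np.round(alpha_hat, 3))
print("variance of truth vs cleaned:", signal.var(), cleaned.var())
```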
Different hunting strategies select for different weights in red deer
Martínez, María; Rodríguez-Vigal, Carlos; Jones, Owen R; Coulson, Tim; Miguel, Alfonso San
2005-01-01
Much insight can be derived from records of shot animals. Most researchers using such data assume that their data represents a random sample of a particular demographic class. However, hunters typically select a non-random subset of the population and hunting is, therefore, not a random process. Here, with red deer (Cervus elaphus) hunting data from a ranch in Toledo, Spain, we demonstrate that data collection methods have a significant influence upon the apparent relationship between age and weight. We argue that a failure to correct for such methodological bias may have significant consequences for the interpretation of analyses involving weight or correlated traits such as breeding success, and urge researchers to explore methods to identify and correct for such bias in their data. PMID:17148205
NASA Technical Reports Server (NTRS)
Thompson, J. M.; Russell, J. W.; Blanchard, R. C.
1987-01-01
This report presents a process for extracting the aerodynamic accelerations of the Shuttle Orbiter Vehicle from the High Resolution Accelerometer Package (HiRAP) flight data during reentry. The methods for obtaining low-level aerodynamic accelerations, principally in the rarefied flow regime, are applied to 10 Orbiter flights. The extraction process is presented using data obtained from Space Transportation System Flight 32 (Mission 61-C) as a typical example. This process involves correcting the HiRAP measurements for the effects of temperature bias and instrument offset from the Orbiter center of gravity, and removing acceleration data during times they are affected by thruster firings. The corrected data are then made continuous and smooth and are further enhanced by refining the temperature bias correction and removing effects of the auxiliary power unit actuation. The resulting data are the current best estimate of the Orbiter aerodynamic accelerations during reentry and will be used for further analyses of the Orbiter aerodynamics and the upper atmosphere characteristics.
Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan
2016-07-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
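A small Monte Carlo sketch of the reported effect, using a beta-binomial model whose intracluster correlation is rho (a standard parameterization, assumed here): the mean of the arcsine-transformed estimate drifts from the transformed true probability roughly linearly in rho.

```python
import numpy as np

def arcsine_bias(p=0.2, n=50, rho=0.1, nsim=200000, seed=0):
    """Monte Carlo estimate of the transformation bias: under
    overdispersion (rho > 0) the mean of arcsin(sqrt(p_hat)) drifts
    away from arcsin(sqrt(p))."""
    rng = np.random.default_rng(seed)
    a = p * (1 - rho) / rho          # Beta parameters giving mean p
    b = (1 - p) * (1 - rho) / rho    # and intracluster correlation rho
    p_i = rng.beta(a, b, nsim)       # cluster-specific probabilities
    x = rng.binomial(n, p_i)         # overdispersed binomial counts
    return np.mean(np.arcsin(np.sqrt(x / n))) - np.arcsin(np.sqrt(p))

for rho in (0.01, 0.05, 0.1, 0.2):
    print(f"rho={rho:4.2f}  bias={arcsine_bias(rho=rho):+.4f}")
```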
A Variational Approach to Simultaneous Image Segmentation and Bias Correction.
Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong
2015-08-01
This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.
NASA Astrophysics Data System (ADS)
Nguyen, Huong Giang T.; Horn, Jarod C.; Thommes, Matthias; van Zee, Roger D.; Espinal, Laura
2017-12-01
Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
Nguyen, Huong Giang T; Horn, Jarod C; Thommes, Matthias; van Zee, Roger D; Espinal, Laura
2017-12-01
Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO 2 and supercritical N 2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
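A minimal sketch of the blank-subtraction idea for a single isotherm point, assuming an ideal-gas density and hypothetical values; real instruments require the full component-volume bookkeeping discussed above:

```python
R = 8.314462  # gas constant, J mol^-1 K^-1

def buoyancy_corrected_uptake(dm_sample, dm_blank, p, t_gas, molar_mass, v_sample):
    """Blank-subtraction buoyancy correction of a gravimetric uptake point.

    dm_sample  : apparent mass change with the sample loaded (kg)
    dm_blank   : apparent mass change of the blank (empty) run (kg)
    p          : equilibrium pressure (Pa)
    t_gas      : gas temperature around the sample (K)
    molar_mass : gas molar mass (kg/mol)
    v_sample   : sample (skeletal) volume displacing gas (m^3)
    """
    rho_gas = p * molar_mass / (R * t_gas)  # ideal-gas density
    # Blank subtraction removes balance-component buoyancy and calibration
    # offsets; only the sample's own displaced volume remains to correct.
    return (dm_sample - dm_blank) + rho_gas * v_sample

# Toy point: ~1 bar N2 at 293 K on ~0.5 cm^3 of zeolite
print(buoyancy_corrected_uptake(1.2e-6, -0.3e-6, 1e5, 293.0, 0.028, 0.5e-6))
```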
Correction for Guessing in the Framework of the 3PL Item Response Theory
ERIC Educational Resources Information Center
Chiu, Ting-Wei
2010-01-01
Guessing behavior is an important topic with regard to assessing proficiency on multiple choice tests, particularly for examinees at lower levels of proficiency, due to the greater potential for systematic error or bias that inflates observed test scores. Methods that incorporate a correction for guessing on high-stakes tests generally rely…
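For context, the sketch below shows the two textbook ingredients this literature builds on: the 3PL item response function, whose lower asymptote c models guessing, and the classical formula score for a k-option test. These are standard forms, not the specific method of the (truncated) abstract:

```python
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """3PL item response function: the pseudo-guessing parameter c is the
    model's built-in allowance for guessing (lower asymptote)."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def formula_score(n_right, n_wrong, k):
    """Classical correction for guessing on k-option items: penalize wrong
    answers by the expected payoff of random guessing."""
    return n_right - n_wrong / (k - 1)

print(p_correct_3pl(theta=-1.0, a=1.2, b=0.0, c=0.2))  # low-ability examinee
print(formula_score(n_right=32, n_wrong=12, k=4))
```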
Malyarenko, Dariya I; Ross, Brian D; Chenevert, Thomas L
2014-03-01
Gradient nonlinearity of MRI systems leads to spatially dependent b-values and consequently high non-uniformity errors (10-20%) in apparent diffusion coefficient (ADC) measurements over clinically relevant field-of-views. This work seeks a practical correction procedure that effectively reduces observed ADC bias for media of arbitrary anisotropy in the fewest measurements. All-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. Correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Spatial dependence of nonlinearity correction terms accounts for the bulk (75-95%) of ADC bias for FA = 0.3-0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic medium is achieved with non-lab-based diffusion gradients. Copyright © 2013 Wiley Periodicals, Inc.
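A heavily simplified sketch of the norm approximation described above: the measured ADC at a voxel is rescaled by the effective-to-nominal b-value ratio implied by the local nonlinearity tensor for one diffusion direction (the toy tensor and names are assumptions; the paper's full treatment includes cross-terms omitted here):

```python
import numpy as np

def corrected_adc(adc_measured, L, g):
    """Approximate GNL correction for one diffusion direction: rescale the
    measured ADC by the b-value error implied by the gradient nonlinearity
    tensor L at this voxel (norm approximation)."""
    g = np.asarray(g, dtype=float)
    g /= np.linalg.norm(g)
    b_scale = np.sum((L @ g) ** 2)  # effective b / nominal b
    return adc_measured / b_scale

L_voxel = np.array([[1.05, 0.01, 0.00],   # toy nonlinearity tensor at an
                    [0.00, 0.98, 0.00],   # off-center voxel (unitless)
                    [0.00, 0.00, 1.02]])
print(corrected_adc(1.1e-3, L_voxel, g=[1.0, 0.0, 0.0]))  # mm^2/s
```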
Impact of bias-corrected reanalysis-derived lateral boundary conditions on WRF simulations
NASA Astrophysics Data System (ADS)
Moalafhi, Ditiro Benson; Sharma, Ashish; Evans, Jason Peter; Mehrotra, Rajeshwar; Rocheta, Eytan
2017-08-01
Lateral and lower boundary conditions derived from a suitable global reanalysis data set form the basis for deriving a dynamically consistent finer resolution downscaled product for climate and hydrological assessment studies. A problem with this, however, is that systematic biases have been noted in the global reanalysis data sets that form these boundaries, biases which can be carried into the downscaled simulations, thereby reducing their accuracy or efficacy. In this work, three Weather Research and Forecasting (WRF) model downscaling experiments are undertaken to investigate the impact of bias correcting the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis (ERA-I) atmospheric temperature and relative humidity using Atmospheric Infrared Sounder (AIRS) satellite data. The downscaling is performed over a domain centered over southern Africa between the years 2003 and 2012. The sample mean and the mean as well as standard deviation at each grid cell for each variable are used for bias correction. The resultant WRF simulations of near-surface temperature and precipitation are evaluated seasonally and annually against global gridded observational data sets and compared with the ERA-I reanalysis driving field. The study reveals inconsistencies between the impact of the bias correction prior to downscaling and the resultant model simulations after downscaling. Mean and standard deviation bias-corrected WRF simulations are, however, found to be marginally better than mean-only bias-corrected WRF simulations and raw ERA-I reanalysis-driven WRF simulations. Performance, however, differs when assessing different attributes of the downscaled field. This raises questions about the efficacy of the correction procedures adopted.
BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements.
Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang
2017-10-27
This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a proposed code correction model by estimating the BeiDou-satellite-induced biases as linear piece-wise functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm.
BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements
Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang
2017-01-01
This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a proposed code correction model by estimating the BeiDou-satellite-induced biases as linear piece-wise functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm. PMID:29076998
Malyarenko, Dariya I; Wilmes, Lisa J; Arlinghaus, Lori R; Jacobs, Michael A; Huang, Wei; Helmer, Karl G; Taouli, Bachir; Yankeelov, Thomas E; Newitt, David; Chenevert, Thomas L
2016-12-01
Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, -35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI.
Malyarenko, Dariya I.; Wilmes, Lisa J.; Arlinghaus, Lori R.; Jacobs, Michael A.; Huang, Wei; Helmer, Karl G.; Taouli, Bachir; Yankeelov, Thomas E.; Newitt, David; Chenevert, Thomas L.
2017-01-01
Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, −35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI. PMID:28105469
Bolte, John F B
2016-09-01
Personal exposure measurements of radio frequency electromagnetic fields are important for epidemiological studies and developing prediction models. Minimizing biases and uncertainties and handling spatial and temporal variability are important aspects of these measurements. This paper reviews the lessons learnt from testing the different types of exposimeters and from personal exposure measurement surveys performed between 2005 and 2015. Applying them will improve the comparability and ranking of exposure levels for different microenvironments, activities or (groups of) people, such that epidemiological studies are better capable of finding potential weak correlations with health effects. Over 20 papers have been published on how to prevent biases and minimize uncertainties due to: mechanical errors; design of hardware and software filters; anisotropy; and influence of the body. A number of biases can be corrected for by determining multiplicative correction factors. In addition a good protocol on how to wear the exposimeter, a sufficiently small sampling interval and sufficiently long measurement duration will minimize biases. Corrections to biases are possible for: non-detects through detection limit, erroneous manufacturer calibration and temporal drift. Corrections not deemed necessary, because no significant biases have been observed, are: linearity in response and resolution. Corrections difficult to perform after measurements are for: modulation/duty cycle sensitivity; out of band response aka cross talk; temperature and humidity sensitivity. Corrections not possible to perform after measurements are for: multiple signals detection in one band; flatness of response within a frequency band; anisotropy to waves of different elevation angle. An analysis of 20 microenvironmental surveys showed that early studies using exposimeters with logarithmic detectors, overestimated exposure to signals with bursts, such as in uplink signals from mobile phones and WiFi appliances. Further, the possible corrections for biases have not been fully applied. The main findings are that if the biases are not corrected for, the actual exposure will on average be underestimated. Copyright © 2016 Elsevier Ltd. All rights reserved.
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
Comparison of sEMG processing methods during whole-body vibration exercise.
Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S
2015-12-01
The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied on the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.001), the error increased with increasing mean values to a higher degree for the band-stop filter. After adjusting the sEMG(RMS) during WBV for the bias, the performance of the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias as this procedure improved its performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
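A minimal sketch of the spectral linear interpolation method: magnitude bins around the vibration frequency and its harmonics are replaced by a straight line between the neighbouring bins (the bandwidth, harmonic count, and phase handling are assumptions of this sketch):

```python
import numpy as np

def interpolate_vibration_spikes(emg, fs, f_vib, n_harmonics=5, half_bw=1.0):
    """Delete WBV artifacts by linearly interpolating the magnitude
    spectrum across the vibration frequency and its harmonics, keeping
    the original phase (one possible implementation of the method)."""
    spec = np.fft.rfft(emg)
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    mag, phase = np.abs(spec), np.angle(spec)
    for h in range(1, n_harmonics + 1):
        f0 = h * f_vib
        band = (freqs >= f0 - half_bw) & (freqs <= f0 + half_bw)
        if not band.any():
            continue
        lo, hi = np.flatnonzero(band)[[0, -1]]
        # Replace contaminated bins by a straight line between neighbours.
        mag[lo:hi + 1] = np.linspace(mag[lo - 1], mag[hi + 1], hi - lo + 1)
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(emg))

fs, f_vib = 1000.0, 30.0
t = np.arange(0, 2.0, 1.0 / fs)
emg = np.random.default_rng(2).normal(0, 1, t.size) + 2 * np.sin(2 * np.pi * f_vib * t)
clean = interpolate_vibration_spikes(emg, fs, f_vib)
print(np.std(emg), np.std(clean))  # artifact power removed
```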
A forward bias method for lag correction of an a-Si flat panel detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starman, Jared; Tognina, Carlo; Partain, Larry
2012-01-15
Purpose: Digital a-Si flat panel (FP) x-ray detectors can exhibit detector lag, or residual signal, of several percent that can cause ghosting in projection images or severe shading artifacts, known as the radar artifact, in cone-beam computed tomography (CBCT) reconstructions. A major contributor to detector lag is believed to be defect states, or traps, in the a-Si layer of the FP. Software methods to characterize and correct for the detector lag exist, but they may make assumptions such as system linearity and time invariance, which may not be true. The purpose of this work is to investigate a new hardware-based method to reduce lag in an a-Si FP and to evaluate its effectiveness at removing shading artifacts in CBCT reconstructions. The feasibility of a novel, partially hardware-based solution is also examined. Methods: The proposed hardware solution for lag reduction requires only a minor change to the FP. For pulsed irradiation, the proposed method inserts a new operation step between the readout and data collection stages. During this new stage the photodiode is operated in a forward bias mode, which fills the defect states with charge. A Varian 4030CB panel was modified to allow for operation in the forward bias mode. The contrast of residual lag ghosts was measured for lag frames 2 and 100 after irradiation ceased for standard and forward bias modes. Detector step response, lag, SNR, modulation transfer function (MTF), and detective quantum efficiency (DQE) measurements were made with standard and forward bias firmware. CBCT data of pelvic and head phantoms were also collected. Results: Overall, the 2nd and 100th detector lag frame residual signals were reduced 70%-88% using the new method. SNR, MTF, and DQE measurements show a small decrease in collected signal and a small increase in noise. The forward bias hardware successfully reduced the radar artifact in the CBCT reconstruction of the pelvic and head phantoms by 48%-81%. Conclusions: Overall, the forward bias method has been found to greatly reduce detector lag ghosts in projection data and the radar artifact in CBCT reconstructions. The method is limited to improvements of the a-Si photodiode response only. A future hybrid mode may overcome any limitations of this method.
A new dynamical downscaling approach with GCM bias corrections and spectral nudging
NASA Astrophysics Data System (ADS)
Xu, Zhongfeng; Yang, Zong-Liang
2015-04-01
To improve confidence in regional projections of future climate, a new dynamical downscaling (NDD) approach with both general circulation model (GCM) bias corrections and spectral nudging is developed and assessed over North America. GCM biases are corrected by adjusting GCM climatological means and variances based on reanalysis data before the GCM output is used to drive a regional climate model (RCM). Spectral nudging is also applied to constrain RCM-based biases. Three sets of RCM experiments are integrated over a 31 year period. In the first set of experiments, the model configurations are identical except that the initial and lateral boundary conditions are derived from either the original GCM output, the bias-corrected GCM output, or the reanalysis data. The second set of experiments is the same as the first set except spectral nudging is applied. The third set of experiments includes two sensitivity runs with both GCM bias corrections and nudging where the nudging strength is progressively reduced. All RCM simulations are assessed against the North American Regional Reanalysis. The results show that NDD significantly improves the downscaled mean climate and climate variability relative to other GCM-driven RCM downscaling approaches in terms of climatological mean air temperature, geopotential height, wind vectors, and surface air temperature variability. In the NDD approach, spectral nudging introduces the effects of GCM bias corrections throughout the RCM domain rather than just limiting them to the initial and lateral boundary conditions, thereby minimizing climate drifts resulting from both the GCM and RCM biases.
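The GCM bias correction described above amounts to a grid-point-wise adjustment of the climatological mean and variance toward reanalysis. A minimal sketch with synthetic numbers:

```python
import numpy as np

def correct_gcm(gcm, gcm_clim_mean, gcm_clim_std, rea_clim_mean, rea_clim_std):
    """Adjust a GCM field so its climatological mean and variance match
    the reanalysis before it is used to drive the RCM."""
    return rea_clim_mean + (gcm - gcm_clim_mean) * (rea_clim_std / gcm_clim_std)

rng = np.random.default_rng(3)
gcm_series = rng.normal(286.0, 4.0, 31 * 12)  # toy monthly temperatures (K)
corrected = correct_gcm(gcm_series,
                        gcm_clim_mean=gcm_series.mean(),
                        gcm_clim_std=gcm_series.std(),
                        rea_clim_mean=288.0, rea_clim_std=3.0)
print(corrected.mean(), corrected.std())  # matches the reanalysis climatology
```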
Electrostatic focal spot correction for x-ray tubes operating in strong magnetic fields
Lillaney, Prasheel; Shin, Mihye; Hinshaw, Waldo; Fahrig, Rebecca
2014-01-01
Purpose: A close proximity hybrid x-ray/magnetic resonance (XMR) imaging system offers several critical advantages over current XMR system installations that have large separation distances (∼5 m) between the imaging fields of view. The two imaging systems can be placed in close proximity to each other if an x-ray tube can be designed to be immune to the magnetic fringe fields outside of the MR bore. One of the major obstacles to robust x-ray tube design is correcting for the effects of the MR fringe field on the x-ray tube focal spot. Any fringe field component orthogonal to the x-ray tube electric field leads to electron drift altering the path of the electron trajectories. Methods: The method proposed in this study to correct for the electron drift utilizes an external electric field in the direction of the drift. The electric field is created using two electrodes that are positioned adjacent to the cathode. These electrodes are biased with positive and negative potential differences relative to the cathode. The design of the focusing cup assembly is constrained primarily by the strength of the MR fringe field and high voltage standoff distances between the anode, cathode, and the bias electrodes. From these constraints, a focusing cup design suitable for the close proximity XMR system geometry is derived, and a finite element model of this focusing cup geometry is simulated to demonstrate efficacy. A Monte Carlo simulation is performed to determine any effects of the modified focusing cup design on the output x-ray energy spectrum. Results: An orthogonal fringe field magnitude of 65 mT can be compensated for using bias voltages of +15 and −20 kV. These bias voltages are not sufficient to completely correct for larger orthogonal field magnitudes. Using active shielding coils in combination with the bias electrodes provides complete correction at an orthogonal field magnitude of 88.1 mT. Introducing small fields (<10 mT) parallel to the x-ray tube electric field in addition to the orthogonal field does not affect the electrostatic correction technique. However, rotation of the x-ray tube by 30° toward the MR bore increases the parallel magnetic field magnitude (∼72 mT). The presence of this larger parallel field along with the orthogonal field leads to incomplete correction. Monte Carlo simulations demonstrate that the mean energy of the x-ray spectrum is not noticeably affected by the electrostatic correction, but the output flux is reduced by 7.5%. Conclusions: The maximum orthogonal magnetic field magnitude that can be compensated for using the proposed design is 65 mT. Larger orthogonal field magnitudes cannot be completely compensated for because a pure electrostatic approach is limited by the dielectric strength of the vacuum inside the x-ray tube insert. The electrostatic approach also suffers from limitations when there are strong magnetic fields in both the orthogonal and parallel directions because the electrons prefer to stay aligned with the parallel magnetic field. These challenging field conditions can be addressed by using a hybrid correction approach that utilizes both active shielding coils and biasing electrodes. PMID:25370658
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
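A minimal sketch of a bias-corrected, transformed-linear rating curve: fit in log space, then apply a retransformation correction factor. Duan's smearing estimator is used here as one common choice, which may differ from the correction in the report:

```python
import numpy as np

def fit_rating_curve(q, c):
    """Fit log(C) = a + b*log(Q) by least squares and return a
    retransformation (smearing-type) bias-correction factor."""
    x, y = np.log(q), np.log(c)
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    bcf = np.mean(np.exp(resid))  # Duan's smearing estimator
    return a, b, bcf

def predict_concentration(q, a, b, bcf):
    # Naive exp(a) * q**b underestimates the mean; bcf corrects it.
    return bcf * np.exp(a) * q ** b

rng = np.random.default_rng(4)
q = rng.lognormal(3.0, 0.8, 300)  # synthetic discharge record
c = np.exp(-2.0 + 1.3 * np.log(q) + rng.normal(0, 0.5, q.size))
a, b, bcf = fit_rating_curve(q, c)
print(f"a={a:.2f}  b={b:.2f}  bias-correction factor={bcf:.3f}")
```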
Method for guessing the response of a physical system to an arbitrary input
Wolpert, David H.
1996-01-01
Stacked generalization is used to minimize the generalization errors of one or more generalizers acting on a known set of input values and output values representing a physical manifestation and a transformation of that manifestation, e.g., hand-written characters to ASCII characters, spoken speech to computer command, etc. Stacked generalization acts to deduce the biases of the generalizer(s) with respect to a known learning set and then correct for those biases. This deduction proceeds by generalizing in a second space whose inputs are the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is the correct guess. Stacked generalization can be used to combine multiple generalizers or to provide a correction to a guess from a single generalizer.
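A compact numpy sketch of the scheme summarized above: level-0 generalizers produce out-of-fold guesses on the learning set, and a level-1 generalizer learns to combine (and thereby correct) them. The two level-0 models here are arbitrary stand-ins:

```python
import numpy as np

def kfold_out_of_fold(fit_predict, X, y, k=5):
    """Out-of-fold level-0 guesses: each point is predicted by a model
    trained without it, which is what exposes the generalizer's bias."""
    n = len(y)
    guesses = np.empty(n)
    for idx in np.array_split(np.arange(n), k):
        train = np.setdiff1d(np.arange(n), idx)
        guesses[idx] = fit_predict(X[train], y[train], X[idx])
    return guesses

def linear(Xtr, ytr, Xte):                      # level-0 generalizer 1
    w, *_ = np.linalg.lstsq(np.c_[Xtr, np.ones(len(Xtr))], ytr, rcond=None)
    return np.c_[Xte, np.ones(len(Xte))] @ w

def nearest(Xtr, ytr, Xte):                     # level-0 generalizer 2
    d = np.abs(Xte[:, None, 0] - Xtr[None, :, 0])
    return ytr[np.argmin(d, axis=1)]

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)

# Level-1 input space: the out-of-fold guesses of the level-0 generalizers.
Z = np.c_[kfold_out_of_fold(linear, X, y), kfold_out_of_fold(nearest, X, y)]
w, *_ = np.linalg.lstsq(np.c_[Z, np.ones(len(Z))], y, rcond=None)
print("level-1 combination weights:", np.round(w, 3))
```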
Mirzazadeh, Ali; Nedjat, Saharnaz; Navadeh, Soodabeh; Haghdoost, Aliakbar; Mansournia, Mohammad-Ali; McFarland, Willi; Mohammad, Kazem
2014-01-01
In a national, facility-based survey of female sex workers in 14 cities of Iran (N = 872), HIV prevalence was measured at 4.5 % (95 % CI, 2.4-8.3) overall and at 11.2 % (95 % CI, 3.4-18.9) for FSW with a history of injection drug use. Using methods to correct for biases in reporting sensitive information, the estimate of unprotected sex in last act was 35.8 %, ever injecting drugs was 37.6 %, sexually transmitted disease symptoms was 82.1 %, and not testing for HIV in the last year was 64.0 %. The amount of bias correction ranged from <1 to >30 %, in parallel with the level of stigma associated with each behavior. Considering the current upward trajectory of HIV infection in the Middle East and North Africa region, as well as the ongoing high level of risky behaviors and considerable underreporting of many such behaviors in surveys, bias corrections may be needed, especially in the context of Iran, to obtain more accurate information to guide prevention and care responses to stop the growing HIV epidemic in this vulnerable group of women.
Restoration of MRI Data for Field Nonuniformities using High Order Neighborhood Statistics
Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert
2007-01-01
MRI at high magnetic fields (>3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the “bias field”. These lead to nonuniformity of image intensity, greatly complicating further analysis such as registration and segmentation. Existing methods for bias field correction are effective for 1.5 T or 3.0 T MRI, but are not completely satisfactory for higher field data. This paper develops an effective bias field correction for high field MRI based on the assumption that the nonuniformity is smoothly varying in space. Nonuniformity is quantified and unmixed using high order neighborhood statistics of intensity cooccurrences, computed within spherical windows of limited size over the entire image. The restoration is iterative and makes use of a novel stable stopping criterion that depends on the scaled entropy of the cooccurrence statistics (the Shannon entropy normalized to the effective dynamic range of the image), which is a non-monotonic function of the iterations. The algorithm restores whole head data, is robust to intense nonuniformities present in high field acquisitions, and is robust to variations in anatomy. This algorithm significantly improves bias field correction in comparison to N3 on phantom 1.5 T head data and high field 4 T human head data. PMID:18193095
A Bayesian approach to truncated data sets: An application to Malmquist bias in Supernova Cosmology
NASA Astrophysics Data System (ADS)
March, Marisa Cristina
2018-01-01
A problem commonly encountered in statistical analysis of data is that of truncated data sets. A truncated data set is one in which a number of data points are completely missing from a sample, in contrast to a censored sample, in which partial information is missing from some data points. In astrophysics this problem is commonly seen in a magnitude-limited survey, which is incomplete at fainter magnitudes: certain faint objects are simply not observed. The effect of this 'missing data' is manifested as Malmquist bias and can result in biases in parameter inference if it is not accounted for. In Frequentist methodologies the Malmquist bias is often corrected for by analysing many simulations and computing the appropriate correction factors. One problem with this methodology is that the corrections are model dependent. In this poster we derive a Bayesian methodology for accounting for truncated data sets in problems of parameter inference and model selection. We first show the methodology for a simple Gaussian linear model and then show the method for accounting for a truncated data set in the case of cosmological parameter inference with a magnitude-limited supernova Ia survey.
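A minimal sketch of the idea for a magnitude-limited Gaussian toy model: each likelihood term is renormalized by the probability that the object passes the selection, which removes the Malmquist bias that the naive sample mean suffers (the scipy calls are standard; the setup and numbers are hypothetical):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def neg_log_like_truncated(mu, m, sigma, m_lim):
    """Gaussian likelihood truncated at the survey limit m_lim: each term
    is normalized by the selection probability P(m < m_lim | mu, sigma)."""
    z = norm.logpdf(m, mu, sigma) - norm.logcdf(m_lim, mu, sigma)
    return -np.sum(z)

rng = np.random.default_rng(6)
mu_true, sigma, m_lim = 24.0, 1.0, 24.5
m = rng.normal(mu_true, sigma, 20000)
m = m[m < m_lim]  # the truncation: objects fainter than the limit are unseen

print("naive mean (Malmquist-biased):", m.mean())
res = minimize_scalar(neg_log_like_truncated, bounds=(20, 28),
                      method="bounded", args=(m, sigma, m_lim))
print("truncation-aware MLE of mu  :", res.x)
```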
Assessing the Assessment Methods: Climate Change and Hydrologic Impacts
NASA Astrophysics Data System (ADS)
Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.
2014-12-01
The Bureau of Reclamation, the U.S. Army Corps of Engineers, and other water management agencies have an interest in developing reliable, science-based methods for incorporating climate change information into longer-term water resources planning. Such assessments must quantify projections of future climate and hydrology, typically relying on some form of spatial downscaling and bias correction to produce watershed-scale weather information that subsequently drives hydrology and other water resource management analyses (e.g., water demands, water quality, and environmental habitat). Water agencies continue to face challenging method decisions in these endeavors: (1) which downscaling method should be applied and at what resolution; (2) what observational dataset should be used to drive downscaling and hydrologic analysis; (3) what hydrologic model(s) should be used and how should these models be configured and calibrated? There is a critical need to understand the ramifications of these method decisions, as they affect the signal and uncertainties produced by climate change assessments and, thus, adaptation planning. This presentation summarizes results from a three-year effort to identify strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic conditions. Methods were evaluated from two perspectives: historical fidelity, and tendency to modulate a global climate model's climate change signal. On downscaling, four methods were applied at multiple resolutions: statistically using Bias Correction Spatial Disaggregation, Bias Correction Constructed Analogs, and Asynchronous Regression; dynamically using the Weather Research and Forecasting model. Downscaling results were then used to drive hydrologic analyses over the contiguous U.S. using multiple models (VIC, CLM, PRMS), with added focus placed on case study basins within the Colorado Headwaters. The presentation will identify which types of climate changes are expressed robustly across methods versus those that are sensitive to method choice; which method choices seem relatively more important; and where strategic investments in research and development can substantially improve guidance on climate change provided to water managers.
Causes of model dry and warm bias over central U.S. and impact on climate projections.
Lin, Yanluan; Dong, Wenhao; Zhang, Minghua; Xie, Yuanyu; Xue, Wei; Huang, Jianbin; Luo, Yong
2017-10-12
Climate models show a conspicuous summer warm and dry bias over the central United States. Using results from 19 climate models in the Coupled Model Intercomparison Project Phase 5 (CMIP5), we report a persistent dependence of warm bias on dry bias, with the precipitation deficit leading the warm bias over this region. The precipitation deficit is associated with the widespread failure of models in capturing strong rainfall events in summer over the central U.S. A robust linear relationship between the projected warming and the present-day warm bias enables us to empirically correct future temperature projections. By the end of the 21st century under the RCP8.5 scenario, the corrections substantially narrow the intermodel spread of the projections and reduce the projected temperature by 2.5 K, resulting mainly from the removal of the warm bias. Instead of a sharp decrease, after this correction the projected precipitation is nearly neutral for all scenarios. Climate models repeatedly show a warm and dry bias over the central United States, but the origin of this bias remains unclear. Here the authors associate this bias with precipitation deficits in models; after applying a correction, projected precipitation in this region shows no significant changes.
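The empirical correction described above can be sketched as an across-model regression of projected warming on present-day bias, evaluated at zero bias (synthetic numbers; the paper's regression details may differ):

```python
import numpy as np

rng = np.random.default_rng(7)
n_models = 19
bias = rng.normal(3.0, 1.5, n_models)                      # present-day warm bias (K)
warming = 4.0 + 0.8 * bias + rng.normal(0, 0.3, n_models)  # projected warming (K)

# Regress projected warming on present-day bias across models and evaluate
# the fit at zero bias (an "emergent constraint"-style correction).
slope, intercept = np.polyfit(bias, warming, 1)
print(f"raw multi-model mean : {warming.mean():.2f} K")
print(f"bias-corrected value : {intercept:.2f} K (extrapolation to zero bias)")
```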
NASA Astrophysics Data System (ADS)
Tian, D.; Medina, H.
2017-12-01
Post-processing of medium range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential of improving the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005) and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at national scale.
Nonlinear vs. linear biasing in Trp-cage folding simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spiwok, Vojtěch, E-mail: spiwokv@vscht.cz; Oborský, Pavel; Králová, Blanka
2015-03-21
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
Effect of signal intensity and camera quantization on laser speckle contrast analysis
Song, Lipei; Elson, Daniel S.
2012-01-01
Laser speckle contrast analysis (LASCA) is limited to being a qualitative method for the measurement of blood flow and tissue perfusion as it is sensitive to the measurement configuration. The signal intensity is one of the parameters that can affect the contrast values due to the quantization of the signals by the camera and analog-to-digital converter (ADC). In this paper we deduce the theoretical relationship between signal intensity and contrast values based on the probability density function (PDF) of the speckle pattern and simplify it to a rational function. A simple method to correct this contrast error is suggested. The experimental results demonstrate that this relationship can effectively compensate for the bias in contrast values induced by the quantized signal intensity and correct for the bias induced by signal intensity variations across the field of view. PMID:23304650
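For reference, a standard spatial LASCA computation (contrast K = sigma/mean over a sliding window); the final lines illustrate, qualitatively, how quantizing the intensity shifts the measured contrast. This is not the paper's rational-function correction:

```python
import numpy as np
from scipy.signal import fftconvolve

def local_speckle_contrast(img, win=7):
    """Spatial LASCA: K = sigma/mean over a sliding win x win window,
    computed with box filtering by convolution."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    kernel = np.ones((win, win)) / win**2
    m1 = fftconvolve(padded, kernel, mode="valid")     # local mean of I
    m2 = fftconvolve(padded**2, kernel, mode="valid")  # local mean of I^2
    sigma = np.sqrt(np.maximum(m2 - m1**2, 0.0))
    return sigma / m1

rng = np.random.default_rng(8)
# Fully developed static speckle has an exponential intensity PDF (K ~ 1);
# rounding to 8-bit grey levels at low mean intensity biases the estimate.
speckle = rng.exponential(20.0, (256, 256))
quantized = np.clip(np.round(speckle), 0, 255)
print(local_speckle_contrast(speckle).mean(),
      local_speckle_contrast(quantized).mean())
```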
Kitchen, Robert R; Sabine, Vicky S; Sims, Andrew H; Macaskill, E Jane; Renshaw, Lorna; Thomas, Jeremy S; van Hemert, Jano I; Dixon, J Michael; Bartlett, John M S
2010-02-24
Microarray technology is a popular means of producing whole genome transcriptional profiles, however high cost and scarcity of mRNA has led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples were greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (ranging from 0.9833-0.9991 to 0.9997-0.9999) and increased the consistency of the gene-lists from the duplicate clinical samples, from 11.6% in quantile normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. In the interests of practicalities and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data.
2010-01-01
Background Microarray technology is a popular means of producing whole genome transcriptional profiles; however, the high cost and scarcity of mRNA have led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. Results A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples was greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch correction using ComBat (from 0.9833-0.9991 to 0.9997-0.9999), and the consistency of the gene-lists from the duplicate clinical samples increased from 11.6% in quantile-normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. Conclusion In the interests of practicality and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data. PMID:20181233
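A minimal sketch of the mean-centering variant of batch correction described above; ComBat itself adds empirical Bayes shrinkage of the batch parameters (implementations exist, e.g., in the R `sva` package). The genes-by-samples layout is an assumption.

```python
import numpy as np

def mean_center_batches(expr, batches):
    """Per gene, subtract each batch's mean and restore the grand mean.
    expr: genes x samples array; batches: per-sample batch labels."""
    expr = np.asarray(expr, float)
    batches = np.asarray(batches)
    out = expr.copy()
    grand = expr.mean(axis=1, keepdims=True)
    for b in np.unique(batches):
        idx = batches == b
        out[:, idx] += grand - expr[:, idx].mean(axis=1, keepdims=True)
    return out
```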
Bias correction of daily satellite precipitation data using genetic algorithm
NASA Astrophysics Data System (ADS)
Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.
2018-05-01
Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observations. The blending process is intended to reduce the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results were evaluated across seasons and elevation levels. The experimental results revealed that the scheme robustly reduced the bias in variance (around 100% reduction) and led to reductions in the first- and second-quantile biases. However, the bias in the third quantile was reduced only during dry months. Across elevation levels, the performance of the bias correction process differed significantly only on skewness indicators.
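A rough sketch of a nonlinear power transformation fitted with an evolutionary optimizer, in the spirit of the scheme above. SciPy's differential evolution stands in for the paper's genetic algorithm, and the moment-matching cost function and parameter bounds are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def fit_power_transform(sat, obs):
    """Fit P' = b * P**a so the transformed satellite series matches the
    gauge mean and standard deviation (paired daily series)."""
    sat, obs = np.asarray(sat, float), np.asarray(obs, float)

    def cost(params):
        a, b = params
        t = b * sat ** a
        return (t.mean() - obs.mean()) ** 2 + (t.std() - obs.std()) ** 2

    result = differential_evolution(cost, bounds=[(0.1, 3.0), (0.1, 5.0)])
    return result.x  # (a, b)
```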
Banerjee, Abhirup; Maji, Pradipta
2015-12-01
The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis techniques, particularly due to the presence of the intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It judiciously integrates the concept of rough sets and the merit of a novel probability distribution, called the stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by an SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of the brain MR image is modeled as a mixture of a finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.
Benchmarking by HbA1c in a national diabetes quality register--does measurement bias matter?
Carlsen, Siri; Thue, Geir; Cooper, John Graham; Røraas, Thomas; Gøransson, Lasse Gunnar; Løvaas, Karianne; Sandberg, Sverre
2015-08-01
Bias in HbA1c measurement could give a wrong impression of the standard of care when benchmarking diabetes care. The aim of this study was to evaluate how measurement bias in HbA1c results may influence the benchmarking process performed by a national diabetes register. Using data from 2012 from the Norwegian Diabetes Register for Adults, we included HbA1c results from 3584 patients with type 1 diabetes attending 13 hospital clinics, and 1366 patients with type 2 diabetes attending 18 GP offices. Correction factors for HbA1c were obtained by comparing the results of the hospital laboratories'/GP offices' external quality assurance scheme with the target value from a reference method. Compared with the uncorrected yearly median HbA1c values for hospital clinics and GP offices, EQA corrected HbA1c values were within ±0.2% (2 mmol/mol) for all but one hospital clinic whose value was reduced by 0.4% (4 mmol/mol). Three hospital clinics reduced the proportion of patients with poor glycemic control, one by 9% and two by 4%. For most participants in our study, correcting for measurement bias had little effect on the yearly median HbA1c value or the percentage of patients achieving glycemic goals. However, at three hospital clinics correcting for measurement bias had an important effect on HbA1c benchmarking results especially with regard to percentages of patients achieving glycemic targets. The analytical quality of HbA1c should be taken into account when comparing benchmarking results.
Unbiased estimates of galaxy scaling relations from photometric redshift surveys
NASA Astrophysics Data System (ADS)
Rossi, Graziano; Sheth, Ravi K.
2008-06-01
Many physical properties of galaxies correlate with one another, and these correlations are often used to constrain galaxy formation models. Such correlations include the colour-magnitude relation, the luminosity-size relation, the fundamental plane, etc. However, the transformation from observable (e.g. angular size, apparent brightness) to physical quantity (physical size, luminosity) is often distance dependent. Noise in the distance estimate will lead to biased estimates of these correlations, thus compromising the ability of photometric redshift surveys to constrain galaxy formation models. We describe two methods which can remove this bias. One is a generalization of the Vmax method, and the other is a maximum-likelihood approach. We illustrate their effectiveness by studying the size-luminosity relation in a mock catalogue, although both methods can be applied to other scaling relations as well. We show that if one simply uses photometric redshifts one obtains a biased relation; our methods correct for this bias and recover the true relation.
A study examining the bias of albumin and albumin/creatinine ratio measurements in urine.
Jacobson, Beryl E; Seccombe, David W; Katayev, Alex; Levin, Adeera
2015-10-01
The objective of the study was to examine the bias of albumin and albumin/creatinine (ACR) measurements in urine. Pools of normal human urine were augmented with purified human serum albumin to generate a series of 12 samples covering the clinical range of interest for the measurement of ACR. Albumin and creatinine concentrations in these samples were analyzed three times on each of 3 days by 24 accredited laboratories in Canada and the USA. Reference values (RV) for albumin measurements were assigned by a liquid chromatography-tandem mass spectrometry (LC-MS/MS) comparative method and gravimetrically. Ten random urine samples (check samples) were analyzed as singlets and albumin and ACR values reported according to the routine practices of each laboratory. Augmented urine pools were shown to be commutable. Gravimetrically assigned target values were corrected for the presence of endogenous albumin using the LC-MS/MS comparative method. There was excellent agreement between the RVs as assigned by these two methods. All laboratory medians demonstrated a negative bias for the measurement of albumin in urine over the concentration range examined. The magnitude of this bias tended to decrease with increasing albumin concentrations. At baseline, only 10% of the patient ACR values met a performance limit of RV ± 15%. This increased to 84% and 86% following post-analytical correction for albumin and creatinine calibration bias, respectively. International organizations should take a leading role in the standardization of albumin measurements in urine. In the interim, accuracy-based urine quality control samples may be used by clinical laboratories for monitoring the accuracy of their urinary albumin measurements.
NASA Astrophysics Data System (ADS)
Johnstone, Samuel; Hourigan, Jeremy; Gallagher, Christopher
2013-05-01
Heterogeneous concentrations of α-producing nuclides in apatite have been recognized through a variety of methods. The presence of zonation in apatite complicates both traditional α-ejection corrections and diffusive models, both of which operate under the assumption of homogeneous concentrations. In this work we develop a method for measuring radial concentration profiles of 238U and 232Th in apatite by laser ablation ICP-MS depth profiling. We then focus on one application of this method, removing bias introduced by applying inappropriate α-ejection corrections. Formal treatment of laser ablation ICP-MS depth profile calibration for apatite includes construction and calibration of matrix-matched standards and quantification of rates of elemental fractionation. From this we conclude that matrix-matched standards provide more robust monitors of fractionation rate and concentrations than doped silicate glass standards. We apply laser ablation ICP-MS depth profiling to apatites from three unknown populations and small, intact crystals of Durango fluorapatite. Accurate and reproducible Durango apatite dates suggest that prolonged exposure to laser drilling does not impact cooling ages. Intracrystalline concentrations vary by at least a factor of 2 in the majority of the samples analyzed, but concentration variation only exceeds 5x in 5 grains and 10x in 1 out of the 63 grains analyzed. Modeling of synthetic concentration profiles suggests that for concentration variations of 2x and 10x individual homogeneous versus zonation dependent α-ejection corrections could lead to age bias of >5% and >20%, respectively. However, models based on measured concentration profiles only generated biases exceeding 5% in 13 of the 63 cases modeled. Application of zonation dependent α-ejection corrections did not significantly reduce the age dispersion present in any of the populations studied. This suggests that factors beyond homogeneous α-ejection corrections are the dominant source of overdispersion in apatite (U-Th)/He cooling ages.
NASA Astrophysics Data System (ADS)
Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.
2018-03-01
Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
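A sketch of the point-by-point regression correction described above, using scikit-learn's gradient boosting in place of the proprietary TreeNet implementation; the feature set, synthetic arrays, and coefficients are placeholders standing in for surveyed training data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Placeholder predictors per lidar point (waveform features, tidal-datum
# elevation, distance from shoreline, DEM derivatives) and the lidar
# elevation error measured at ground-truth (e.g., RTK-GPS) points.
X_train = rng.normal(size=(500, 4))
y_train = 0.1 * X_train[:, 0] - 0.05 * X_train[:, 2] + rng.normal(0, 0.02, 500)

model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                  max_depth=3)
model.fit(X_train, y_train)

# Location-specific, point-by-point correction of every lidar return.
X_all = rng.normal(size=(10, 4))
lidar_z = rng.normal(1.0, 0.1, 10)
corrected_z = lidar_z - model.predict(X_all)
```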
NASA Astrophysics Data System (ADS)
Beria, H.; Nanda, T., Sr.; Chatterjee, C.
2015-12-01
High resolution satellite precipitation products such as the Tropical Rainfall Measuring Mission (TRMM), Climate Forecast System Reanalysis (CFSR), European Centre for Medium-Range Weather Forecasts (ECMWF), etc., offer a promising alternative for flood forecasting in data-scarce regions. At the current state of the art, these products cannot be used in raw form for flood forecasting, even at small lead times. In the current study, these precipitation products are bias corrected using statistical techniques, such as additive and multiplicative bias corrections and wavelet multi-resolution analysis (MRA), with the India Meteorological Department (IMD) gridded precipitation product, obtained from gauge-based rainfall estimates. Neural network based rainfall-runoff modeling using these bias corrected products provides encouraging results for flood forecasting up to 48 hours of lead time. We will present various statistical and graphical interpretations of catchment response to high rainfall events using both the raw and bias corrected precipitation products at different lead times.
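The additive and multiplicative corrections mentioned above reduce, in their simplest form, to matching the satellite series' mean to the gauge mean; a minimal sketch follows (paired series assumed, no seasonal stratification).

```python
import numpy as np

def additive_correction(sat, obs):
    """Shift the satellite series by the mean gauge-satellite difference."""
    sat, obs = np.asarray(sat, float), np.asarray(obs, float)
    return sat + (obs.mean() - sat.mean())

def multiplicative_correction(sat, obs):
    """Scale the satellite series so its mean matches the gauge mean."""
    sat, obs = np.asarray(sat, float), np.asarray(obs, float)
    return sat * (obs.mean() / sat.mean())
```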
Fennema-Notestine, Christine; Ozyurt, I. Burak; Clark, Camellia P.; Morris, Shaunna; Bischoff-Grethe, Amanda; Bondi, Mark W.; Jernigan, Terry L.; Fischl, Bruce; Segonne, Florent; Shattuck, David W.; Leahy, Richard M.; Rex, David E.; Toga, Arthur W.; Zou, Kelly H.; BIRN, Morphometry; Brown, Gregory G.
2008-01-01
Performance of automated methods to isolate brain from nonbrain tissues in magnetic resonance (MR) structural images may be influenced by MR signal inhomogeneities, type of MR image set, regional anatomy, and age and diagnosis of subjects studied. The present study compared the performance of four methods: Brain Extraction Tool (BET; Smith [2002]: Hum Brain Mapp 17:143–155); 3dIntracranial (Ward [1999] Milwaukee: Biophysics Research Institute, Medical College of Wisconsin; in AFNI); a Hybrid Watershed algorithm (HWA, Segonne et al. [2004] Neuroimage 22:1060–1075; in FreeSurfer); and Brain Surface Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41–54; Shattuck et al. [2001] Neuroimage 13:856–876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed, Alzheimer’s, young and elderly control). To provide a criterion for outcome assessment, two experts manually stripped six sagittal sections for each dataset in locations where brain and nonbrain tissue are difficult to distinguish. Methods were compared on Jaccard similarity coefficients, Hausdorff distances, and an Expectation-Maximization algorithm. Methods tended to perform better on contemporary datasets; bias correction did not significantly improve method performance. Mesial sections were most difficult for all methods. Although AD image sets were most difficult to strip, HWA and BSE were more robust across diagnostic groups compared with 3dIntracranial and BET. With respect to specificity, BSE tended to perform best across all groups, whereas HWA was more sensitive than other methods. The results of this study may direct users towards a method appropriate to their T1-weighted datasets and improve the efficiency of processing for large, multisite neuroimaging studies. PMID:15986433
Correcting for Optimistic Prediction in Small Data Sets
Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.
2014-01-01
The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
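A compact sketch of the leave-pair-out cross-validation idea named above: for every (event, non-event) pair, the model is refit without both observations and the held-out pair is scored for concordance. The logistic model and implementation details are illustrative assumptions, not the authors' code; note the quadratic number of refits, which is tolerable only for small data sets like those the paper targets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def leave_pair_out_c(X, y):
    """Leave-pair-out CV estimate of the C statistic."""
    X, y = np.asarray(X, float), np.asarray(y, int)
    events = np.where(y == 1)[0]
    nonevents = np.where(y == 0)[0]
    wins, n = 0.0, 0
    for i in events:
        for j in nonevents:
            keep = np.ones(len(y), bool)
            keep[[i, j]] = False  # hold out one event and one non-event
            m = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
            pi, pj = m.predict_proba(X[[i, j]])[:, 1]
            wins += 1.0 if pi > pj else (0.5 if pi == pj else 0.0)
            n += 1
    return wins / n
```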
Leaché, Adam D.; Banbury, Barbara L.; Felsenstein, Joseph; Nieto-Montes de Oca, Adrián; Stamatakis, Alexandros
2015-01-01
Single nucleotide polymorphisms (SNPs) are useful markers for phylogenetic studies owing in part to their ubiquity throughout the genome and ease of collection. Restriction site associated DNA sequencing (RADseq) methods are becoming increasingly popular for SNP data collection, but an assessment of the best practices for using these data in phylogenetics is lacking. We use computer simulations, and new double digest RADseq (ddRADseq) data for the lizard family Phrynosomatidae, to investigate the accuracy of RAD loci for phylogenetic inference. We compare the two primary ways RAD loci are used during phylogenetic analysis: the analysis of full sequences (i.e., SNPs together with invariant sites) or the analysis of SNPs on their own after excluding invariant sites. We find that using full sequences rather than just SNPs is preferable from the perspectives of branch length and topological accuracy, but not of computational time. We introduce two new acquisition bias corrections for dealing with alignments composed exclusively of SNPs: a conditional likelihood method and a reconstituted DNA approach. The conditional likelihood method conditions on the presence of variable characters only (the number of invariant sites that are unsampled but known to exist is not considered), while the reconstituted DNA approach requires the user to specify the exact number of unsampled invariant sites prior to the analysis. Under simulation, branch length biases increase with the amount of missing data for both acquisition bias correction methods, but branch length accuracy is much improved in the reconstituted DNA approach compared to the conditional likelihood approach. Phylogenetic analyses of the empirical data using concatenation or a coalescent-based species tree approach provide strong support for many of the accepted relationships among phrynosomatid lizards, suggesting that RAD loci contain useful phylogenetic signal across a range of divergence times despite the presence of missing data. Phylogenetic analysis of RAD loci requires careful attention to model assumptions, especially if downstream analyses depend on branch lengths. PMID:26227865
NASA Technical Reports Server (NTRS)
Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard
2013-01-01
Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8° spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreement among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
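The bias correction step in approaches of this kind is commonly an empirical quantile mapping; the abstract does not spell out the exact variant used, so the following is a generic sketch under that assumption.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_out):
    """Empirical quantile mapping: map each model value to the observed
    value at the same quantile of the historical model distribution."""
    model_hist = np.sort(np.asarray(model_hist, float))
    obs_hist = np.asarray(obs_hist, float)
    q = np.searchsorted(model_hist, model_out, side="right") / len(model_hist)
    return np.quantile(obs_hist, np.clip(q, 0.0, 1.0))
```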
Confidence Interval Coverage for Cohen's Effect Size Statistic
ERIC Educational Resources Information Center
Algina, James; Keselman, H. J.; Penfield, Randall D.
2006-01-01
Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and bias-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population…
NASA Astrophysics Data System (ADS)
Rushi, B. R.; Ellenburg, W. L.; Adams, E. C.; Flores, A.; Limaye, A. S.; Valdés-Pineda, R.; Roy, T.; Valdés, J. B.; Mithieu, F.; Omondi, S.
2017-12-01
SERVIR, a joint NASA-USAID initiative, works to build capacity in Earth observation technologies in developing countries for improved environmental decision making in the areas of weather and climate, water and disasters, food security, and land use/land cover. SERVIR partners with leading regional organizations in Eastern and Southern Africa, the Hindu Kush-Himalaya, the Mekong region, and West Africa to achieve its objectives. SERVIR develops hydrological applications to address specific needs articulated by key stakeholders, and daily rainfall estimates are a vital input for these applications. Satellite-derived rainfall is subject to systematic biases which need to be corrected before it can be used for any hydrologic application such as real-time or seasonal forecasting. SERVIR and the SWAAT team at the University of Arizona have co-developed an open-source, user-friendly tool implementing rainfall bias correction approaches for satellite precipitation products (SPPs). The bias correction tools are based on Linear Scaling and Quantile Mapping techniques. A set of SPPs, including PERSIANN-CCS, TMPA-RT, and CMORPH, is bias corrected using Climate Hazards Group InfraRed Precipitation with Station (CHIRPS) data, which incorporates ground-based precipitation observations. The tool also contains a component to improve the monthly mean of CHIRPS using precipitation products from the Global Surface Summary of the Day (GSOD) database developed by the National Climatic Data Center (NCDC). The tool takes its input from the command line, which makes it easy to use and applicable on any operating platform without prior programming skills. This presentation will focus on this bias correction tool for SPPs, including application scenarios.
An advanced method to assess the diet of free-ranging large carnivores based on scats.
Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P; Jago, Mark; Hofer, Heribert
2012-01-01
The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores.
An Advanced Method to Assess the Diet of Free-Ranging Large Carnivores Based on Scats
Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P.; Jago, Mark; Hofer, Heribert
2012-01-01
Background The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Methodology/Principal Findings Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Conclusion/Significance Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores. PMID:22715373
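The asymptotic regression described above (consumed prey mass per scat levelling off at large prey sizes) can be fitted as sketched below; the saturating functional form and the feeding-experiment numbers are illustrative assumptions, since the paper's exact equation is not given in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def mass_per_scat(prey_mass, a, b):
    """Consumed prey mass needed to excrete one scat, rising to an
    asymptote a for large prey body mass."""
    return a * (1.0 - np.exp(-b * prey_mass))

# Illustrative feeding-experiment data (prey body mass in kg).
prey_kg = np.array([1.0, 3.0, 8.0, 15.0, 30.0, 60.0])
scat_kg = np.array([0.4, 0.8, 1.3, 1.6, 1.8, 1.9])
(a, b), _ = curve_fit(mass_per_scat, prey_kg, scat_kg, p0=(2.0, 0.1))
```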
A modified method for MRF segmentation and bias correction of MR image with intensity inhomogeneity.
Xie, Mei; Gao, Jingjing; Zhu, Chongjin; Zhou, Yan
2015-01-01
Markov random field (MRF) models are an effective method for brain tissue classification and have been applied to MR image segmentation for decades. However, they fall short of the expected classification accuracy in MR images with intensity inhomogeneity, because the bias field is not considered in the formulation. In this paper, we propose an interleaved method that joins a modified MRF classification and bias field estimation in an energy minimization framework, whose initial estimation is based on the k-means algorithm in view of prior information on MRI. The proposed method has the salient advantage of overcoming the misclassifications produced by non-interleaved MRF classification on MR images with intensity inhomogeneity. Experimental results on real and synthetic MR images have also demonstrated the effectiveness and advantages of our algorithm in comparison with baseline methods.
Investigating bias in squared regression structure coefficients
Nimon, Kim F.; Zientek, Linda R.; Thompson, Bruce
2015-01-01
The importance of structure coefficients and analogs of regression weights for analysis within the general linear model (GLM) has been well-documented. The purpose of this study was to investigate bias in squared structure coefficients in the context of multiple regression and to determine if a formula that had been shown to correct for bias in squared Pearson correlation coefficients and coefficients of determination could be used to correct for bias in squared regression structure coefficients. Using data from a Monte Carlo simulation, this study found that squared regression structure coefficients corrected with Pratt's formula produced less biased estimates and might be more accurate and stable estimates of population squared regression structure coefficients than estimates with no such corrections. While our findings are in line with prior literature that identified multicollinearity as a predictor of bias in squared regression structure coefficients but not coefficients of determination, the findings from this study are unique in that the level of predictive power, number of predictors, and sample size were also observed to contribute bias in squared regression structure coefficients. PMID:26217273
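For reference, one common statement of Pratt's correction for a squared correlation-type coefficient is sketched below; the formula is quoted from memory of the secondary literature and should be verified against the original source before use.

```python
def pratt_corrected(r2, n):
    """Pratt-type bias correction of a squared (structure) coefficient r2
    from a sample of size n, as commonly quoted; verify before relying
    on it for published results."""
    return 1.0 - ((n - 3.0) * (1.0 - r2) / (n - 2.0)) * (
        1.0 + 2.0 * (1.0 - r2) / (n - 2.3))
```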
NASA Astrophysics Data System (ADS)
Sandage, Allan
1999-12-01
Relative, reduced to absolute, magnitude distributions are obtained for Sb, Sbc, and Sc galaxies in the flux-limited Revised Shapley-Ames Catalog (RSA2) for each van den Bergh luminosity class (L), within each Hubble type (T). The method to isolate bias-free subsets of the total sample is via Spaenhauer diagrams, as in previous papers of this series. The distance-limited type and class-specific luminosity functions are normalized to numbers of galaxies per unit volume (10^5 Mpc^3), rather than being left as relative functions, as in Paper V. The functions are calculated using kinematic absolute magnitudes, based on an arbitrary trial value of H0=50. Gaussian fits to the individual normalized functions are listed for each T and L subclass. As in Paper V, the data can be freed from the T and L dependencies by applying a correction of 0.23T+0.5L to the individual absolute magnitudes. Here, T=3 for Sb, 4 for Sbc, and 5 for Sc galaxies, and the L values range from 1 to 6 as the luminosity class changes from I to III-IV. The total luminosity function, obtained by combining the volume-normalized Sb, Sbc, and Sc individual luminosity functions, each corrected for the T and L dependencies, has an rms dispersion of 0.67 mag, similar to much of the Tully-Fisher parameter space. Absolute calibration of the trial kinematic absolute magnitudes is made using 27 galaxies with known T and L that also have Cepheid distances. This permits the systematic correction to the H0=50 kinematic absolute magnitudes of 0.22+/-0.12 mag, giving H0 = 55+/-3 (internal) km s^-1 Mpc^-1. The Cepheid distances are based on the Madore/Freedman Cepheid period-luminosity (PL) zero point that requires (m-M)0=18.50 for the LMC. Using the modern LMC modulus of (m-M)0=18.58 requires a 4% decrease in H0, giving a final value of H0=53+/-7 (external) by this method. These values of H0, based here on the method of luminosity functions, are in good agreement with (1) H0=55+/-5 by Theureau and coworkers from their bias-corrected Tully-Fisher method of "normalized distances" for field galaxies; (2) H0=56+/-4 from the method through the Virgo Cluster, as corrected to the global kinematic frame (Tammann and coworkers); and (3) H0=58+/-5 from Cepheid-calibrated Type Ia supernovae (Saha and coworkers). Our value also disagrees with the final NASA "Key Project" group value of H0=70+/-7. Analysis of the total flux-limited sample of Sb, Sbc, and Sc galaxies in the RSA2 by the present method, but uncorrected for selection bias, would give an incorrect value of H0=71 using the same Cepheid calibration. The effect of the bias is pernicious at the 30% level; either it must be corrected by the methods in the papers of this series, or the data must be restricted to the distance-limited subset of any sample, as is done here.
Mark L. Messonnier; John C. Bergstrom; Chrisopher M. Cornwell; R. Jeff Teasley; H. Ken Cordell
2000-01-01
Simple nonresponse and selection biases that may occur in survey research such as contingent valuation applications are discussed and tested. Correction mechanisms for these types of biases are demonstrated. Results indicate the importance of testing and correcting for unit and item nonresponse bias in contingent valuation survey data. When sample nonresponse and...
Scene-based nonuniformity correction technique for infrared focal-plane arrays.
Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong
2009-04-20
A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors, which can be separated into three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem and both scene values and detector parameters are unavailable. Second, the estimated scene, along with its corresponding observed data obtained by detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect.
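A minimal sketch of the line-fitting step described above: given a stack of observed frames and the interframe-predicted true scene, a per-detector least-squares fit yields gain and offset, which are then inverted. Array shapes and the stack-based (rather than recursive) update are assumptions.

```python
import numpy as np

def update_gain_bias(obs, scene):
    """Per-pixel least-squares fit obs = gain * scene + bias over a frame
    sequence (axis 0); obs and scene have shape (frames, H, W)."""
    xm, ym = scene.mean(0), obs.mean(0)
    cov = ((scene - xm) * (obs - ym)).mean(0)
    var = np.maximum(((scene - xm) ** 2).mean(0), 1e-12)
    gain = cov / var
    bias = ym - gain * xm
    return gain, bias

def correct_frame(frame, gain, bias):
    """Compensated detector output."""
    return (frame - bias) / gain
```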
An empirical determination of the effects of sea state bias on Seasat altimetry
NASA Technical Reports Server (NTRS)
Born, G. H.; Richards, M. A.; Rosborough, G. W.
1982-01-01
A linear empirical model has been developed for the correction of sea state bias effects, in Seasat altimetry data altitude measurements, that are due to (1) electromagnetic bias caused by the fact that ocean wave troughs reflect the altimeter signal more strongly than the crests, shifting the apparent mean sea level toward the wave troughs, and (2) an independent instrument-related bias resulting from the inability of height corrections applied in the ground processor to compensate for simplifying assumptions made for the processor aboard Seasat. After applying appropriate corrections to the altimetry data, an empirical model for the sea state bias is obtained by differencing significant wave height and height measurements from coincident ground tracks. Height differences are minimized by solving for the coefficient of a linear relationship between height differences and wave height differences that minimize the height differences. In more than 50% of the 36 cases examined, 7% of the value of significant wave height should be subtracted for sea state bias correction.
De Kesel, Pieter M M; Capiau, Sara; Stove, Veronique V; Lambert, Willy E; Stove, Christophe P
2014-10-01
Although dried blood spot (DBS) sampling is increasingly receiving interest as a potential alternative to traditional blood sampling, the impact of hematocrit (Hct) on DBS results is limiting its final breakthrough in routine bioanalysis. To predict the Hct of a given DBS, potassium (K(+)) proved to be a reliable marker. The aim of this study was to evaluate whether application of an algorithm, based upon predicted Hct or K(+) concentrations as such, allowed correction for the Hct bias. Using validated LC-MS/MS methods, caffeine, chosen as a model compound, was determined in whole blood and corresponding DBS samples with a broad Hct range (0.18-0.47). A reference subset (n = 50) was used to generate an algorithm based on K(+) concentrations in DBS. Application of the developed algorithm on an independent test set (n = 50) alleviated the assay bias, especially at lower Hct values. Before correction, differences between DBS and whole blood concentrations ranged from -29.1 to 21.1%. The mean difference, as obtained by Bland-Altman comparison, was -6.6% (95% confidence interval (CI), -9.7 to -3.4%). After application of the algorithm, differences between corrected and whole blood concentrations lay between -19.9 and 13.9% with a mean difference of -2.1% (95% CI, -4.5 to 0.3%). The same algorithm was applied to a separate compound, paraxanthine, which was determined in 103 samples (Hct range, 0.17-0.47), yielding similar results. In conclusion, a K(+)-based algorithm allows correction for the Hct bias in the quantitative analysis of caffeine and its metabolite paraxanthine.
Assessment of bias in US waterfowl harvest estimates
Padding, Paul I.; Royle, J. Andrew
2012-01-01
Context. North American waterfowl managers have long suspected that waterfowl harvest estimates derived from national harvest surveys in the USA are biased high. Survey bias can be evaluated by comparing survey results with like estimates from independent sources. Aims. We used band-recovery data to assess the magnitude of apparent bias in duck and goose harvest estimates, using mallards (Anas platyrhynchos) and Canada geese (Branta canadensis) as representatives of ducks and geese, respectively. Methods. We compared the number of reported mallard and Canada goose band recoveries, adjusted for band reporting rates, with the estimated harvests of banded mallards and Canada geese from the national harvest surveys. We used the results of those comparisons to develop correction factors that can be applied to annual duck and goose harvest estimates of the national harvest survey. Key results. National harvest survey estimates of banded mallards harvested annually averaged 1.37 times greater than those calculated from band-recovery data, whereas Canada goose harvest estimates averaged 1.50 or 1.63 times greater than comparable band-recovery estimates, depending on the harvest survey methodology used. Conclusions. Duck harvest estimates produced by the national harvest survey from 1971 to 2010 should be reduced by a factor of 0.73 (95% CI = 0.71–0.75) to correct for apparent bias. Survey-specific correction factors of 0.67 (95% CI = 0.65–0.69) and 0.61 (95% CI = 0.59–0.64) should be applied to the goose harvest estimates for 1971–2001 (duck stamp-based survey) and 1999–2010 (HIP-based survey), respectively. Implications. Although this apparent bias likely has not influenced waterfowl harvest management policy in the USA, it does have negative impacts on some applications of harvest estimates, such as indirect estimation of population size. For those types of analyses, we recommend applying the appropriate correction factor to harvest estimates.
Influence of signal intensity non-uniformity on brain volumetry using an atlas-based method.
Goto, Masami; Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni
2012-01-01
Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.
NASA Astrophysics Data System (ADS)
Hudson, P.; Botzen, W. J. W.; Kreibich, H.; Bubeck, P.; Aerts, J. C. J. H.
2014-01-01
The employment of damage mitigation measures by individuals is an important component of integrated flood risk management. In order to promote efficient damage mitigation measures, accurate estimates of their damage mitigation potential are required. That is, to assess the damage mitigation measures' effectiveness correctly from survey data, one needs to control for sources of bias. A biased estimate can occur if risk characteristics differ between individuals who have, or have not, implemented mitigation measures. This study removed this bias by applying an econometric evaluation technique called Propensity Score Matching to a survey of German households along three major rivers that were flooded in 2002, 2005 and 2006. The application of this method detected substantial overestimates of mitigation measures' effectiveness if bias is not controlled for, ranging from nearly €1700 to €15 000 per measure. Bias-corrected effectiveness estimates of several mitigation measures show that these measures are still very effective since they prevent between €6700 and €14 000 of flood damage. This study concludes with four main recommendations regarding how to better apply Propensity Score Matching in future studies, and makes several policy recommendations.
NASA Astrophysics Data System (ADS)
Hudson, P.; Botzen, W. J. W.; Kreibich, H.; Bubeck, P.; Aerts, J. C. J. H.
2014-07-01
The employment of damage mitigation measures (DMMs) by individuals is an important component of integrated flood risk management. In order to promote efficient damage mitigation measures, accurate estimates of their damage mitigation potential are required. That is, for correctly assessing the damage mitigation measures' effectiveness from survey data, one needs to control for sources of bias. A biased estimate can occur if risk characteristics differ between individuals who have, or have not, implemented mitigation measures. This study removed this bias by applying an econometric evaluation technique called propensity score matching (PSM) to a survey of German households along three major rivers that were flooded in 2002, 2005, and 2006. The application of this method detected substantial overestimates of mitigation measures' effectiveness if bias is not controlled for, ranging from nearly EUR 1700 to 15 000 per measure. Bias-corrected effectiveness estimates of several mitigation measures show that these measures are still very effective since they prevent between EUR 6700 and 14 000 of flood damage per flood event. This study concludes with four main recommendations regarding how to better apply propensity score matching in future studies, and makes several policy recommendations.
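A compact sketch of the propensity score matching estimator used in studies like the one above: propensity scores from a logistic regression, 1:1 nearest-neighbour matching on the score, and the mean outcome difference. Variable names and the omission of calipers and balance diagnostics are simplifications, not the authors' procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_effect(X, treated, outcome):
    """Average treated-vs-matched-control outcome difference after 1:1
    nearest-neighbour matching on the estimated propensity score."""
    X = np.asarray(X, float)
    treated = np.asarray(treated, int)
    outcome = np.asarray(outcome, float)
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t = np.where(treated == 1)[0]
    c = np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c, None])
    _, idx = nn.kneighbors(ps[t, None])
    return (outcome[t] - outcome[c[idx.ravel()]]).mean()
```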
Estimating unbiased magnitudes for the announced DPRK nuclear tests, 2006-2016
NASA Astrophysics Data System (ADS)
Peacock, Sheila; Bowers, David
2017-04-01
The seismic disturbances generated by the five (2006-2016) announced nuclear test explosions by the Democratic People's Republic of Korea (DPRK) are of moderate magnitude (body-wave magnitude mb 4-5) by global earthquake standards. An upward bias in the network mean mb of low- to moderate-magnitude events is long established and is caused by the censoring of readings from stations where the signal was below the noise level at the time of the predicted arrival. This sampling bias can be overcome by maximum-likelihood methods using station thresholds at detecting (and non-detecting) stations. Bias in the mean mb can also be introduced by differences in the network of stations recording each explosion; this bias can be reduced by using station corrections. We apply a joint maximum-likelihood (JML) inversion that estimates station corrections and unbiased network mb for the five DPRK explosions recorded by the CTBTO International Monitoring System (IMS) network of seismic stations. The thresholds can be either directly measured from the noise preceding the observed signal or determined by statistical analysis of bulletin amplitudes. The network mb of the first and smallest explosion is reduced significantly relative to the mean mb (to < 4.0 mb) by removal of the censoring bias.
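A bare-bones sketch of a censoring-aware likelihood of this kind: detecting stations contribute a Gaussian density term and non-detecting stations a cumulative term for the amplitude falling below their threshold. The magnitude scatter sigma, the search bounds, and the omission of station corrections are simplifying assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def ml_network_mb(detected_mb, nondet_thresholds, sigma=0.35):
    """Maximum-likelihood network mb with censoring: pdf terms for station
    magnitudes at detecting stations, cdf terms for non-detections."""
    detected_mb = np.asarray(detected_mb, float)
    nondet_thresholds = np.asarray(nondet_thresholds, float)

    def nll(mb):
        ll = norm.logpdf(detected_mb, loc=mb, scale=sigma).sum()
        ll += norm.logcdf(nondet_thresholds, loc=mb, scale=sigma).sum()
        return -ll

    return minimize_scalar(nll, bounds=(3.0, 6.0), method="bounded").x
```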
NASA Astrophysics Data System (ADS)
Wang, W.; Zender, C. S.; van As, D.; Smeets, P.; van den Broeke, M.
2015-12-01
Surface melt and mass loss of the Greenland Ice Sheet may play crucial roles in global climate change due to their positive feedbacks and large freshwater storage. With few other regular meteorological observations available in this extreme environment, measurements from Automatic Weather Stations (AWS) are the primary data source for surface energy budget studies and for validating satellite observations and model simulations. However, station tilt, due to surface melt and compaction, results in considerable biases in the radiation and thus albedo measurements from AWS. In this study, we identify the tilt-induced biases in the climatology of surface radiative flux and albedo, and then correct them based on geometrical principles. Over all the AWS from the Greenland Climate Network (GC-Net), the Kangerlussuaq transect (K-transect) and the Programme for Monitoring of the Greenland Ice Sheet (PROMICE), only ~15% of clear days have the correct solar noon time, with the largest bias being 3 hours. Absolute hourly biases in the magnitude of surface insolation can reach up to 200 W/m2, with daily averages exceeding 100 W/m2. The biases are larger in the accumulation zone due to the systematic tilt at each station, although the variability of tilt angles is larger in the ablation zone. Averaged over the whole Greenland Ice Sheet in the melting season, the absolute bias in insolation is ~23 W/m2, enough to melt 0.51 m snow water equivalent. We estimate the tilt angles and their directions by comparing the simulated insolation at a horizontal surface with the insolation observed by these tilted AWS under clear-sky conditions. Our correction reduces the RMSE against satellite measurements and reanalysis by ~30 W/m2 relative to the uncorrected data, with correlation coefficients over 0.95 for both references. The corrected diurnal changes of albedo are smoother, with consistent semi-smiling patterns (see Fig. 1). The seasonal cycles and annual variabilities of albedo are in better agreement with previous studies (see Figs. 2 and 3). The consistent tilt-corrected shortwave radiation dataset derived here will provide better observations and validation for surface energy budget studies on the Greenland Ice Sheet, including albedo variation, surface melt simulations and cloud radiative forcing estimates.
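The geometric core of such a tilt correction can be sketched as follows for the direct beam: the measured flux is rescaled by the ratio of the cosine of the solar zenith angle to the cosine of the angle of incidence on the tilted sensor plane. Treating the full signal as direct beam and taking the angles as given inputs are simplifying assumptions.

```python
import numpy as np

def tilt_corrected_sw(obs_sw, sza, tilt, tilt_azi, solar_azi):
    """Project shortwave measured by a tilted sensor onto the horizontal
    (direct-beam approximation). All angles in radians."""
    # Angle of incidence on a plane tilted by `tilt` toward `tilt_azi`.
    cos_inc = (np.cos(sza) * np.cos(tilt) +
               np.sin(sza) * np.sin(tilt) * np.cos(solar_azi - tilt_azi))
    return obs_sw * np.cos(sza) / np.maximum(cos_inc, 1e-3)
```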
Yao, Ning; Li, Yi; Li, Na; Yang, Daqing; Ayantobo, Olusola Olaitan
2018-10-15
The accuracy of gauge-measured precipitation (Pm) affects drought assessment, since drought severity changes with precipitation bias correction. This research investigates how drought severity changes as a result of bias-corrected precipitation (Pc) using Erinc's index Im and the standardized precipitation evapotranspiration index (SPEI). Daily and monthly Pc values at 552 sites in China were determined using daily Pm, wind speed and air temperature data over 1961-2015. Pc-based Im values were generally larger than Pm-based Im values for most sub-regions in China. The increased Pc and Pc-based Im values indicated wetter climate conditions than previously reported for China. After precipitation bias correction, climate types changed at some sites, e.g., 20 sites from severe-arid to arid and 11 sites from arid to semi-arid. However, the changes in SPEI were less pronounced because the standardized index removes the effect of mean precipitation values. In conclusion, precipitation bias in different sub-regions of China changed the spatial and temporal characteristics of drought assessment.
Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C
2011-09-01
Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction scheme (i.e., within the model) to correct the GFS, following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008]. Analysis increments represent the corrections that new observations make to, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct the model online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements, which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
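In code form, the online scheme described reduces to adding the time-mean analysis increment (per unit time) as a forcing term in the tendency equation; the toy state and tendency below are placeholders, not the GFS interfaces.

```python
import numpy as np

def estimate_bias_forcing(analysis_increments, window_hours=6.0):
    """Time-mean analysis increment divided by the assimilation window,
    interpreted as a model bias tendency (per hour)."""
    return np.mean(analysis_increments, axis=0) / window_hours

def step_with_correction(x, tendency, bias_forcing, dt):
    """One model step with the empirical bias correction added as forcing."""
    return x + dt * (tendency(x) + bias_forcing)
```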
Two-compartment modeling of tissue microcirculation revisited.
Brix, Gunnar; Salehi Ravesh, Mona; Griebel, Jürgen
2017-05-01
Conventional two-compartment modeling of tissue microcirculation is used for tracer kinetic analysis of dynamic contrast-enhanced (DCE) computed tomography or magnetic resonance imaging studies, although it is well known that the underlying assumption of an instantaneous mixing of the administered contrast agent (CA) in capillaries is far from realistic. It was thus the aim of the present study to provide theoretical and computational evidence in favor of a conceptually alternative modeling approach that makes it possible to characterize the bias inherent to compartment modeling and, moreover, to approximately correct for it. Starting from a two-region distributed-parameter model that accounts for spatial gradients in CA concentrations within blood-tissue exchange units, a modified lumped two-compartment exchange model was derived. It has the same analytical structure as the conventional two-compartment model, but indicates that the apparent blood flow identifiable from measured DCE data is substantially overestimated, whereas the three other model parameters (i.e., the permeability-surface area product as well as the volume fractions of the plasma and interstitial distribution spaces) are unbiased. Furthermore, a simple formula was derived to approximately compute a bias-corrected flow from the estimates of the apparent flow and permeability-surface area product obtained by model fitting. To evaluate the accuracy of the proposed modeling and bias correction method, representative noise-free DCE curves were analyzed. They were simulated for 36 microcirculation and four input scenarios by an axially distributed reference model. As analytically proven, the considered two-compartment exchange model is structurally identifiable from tissue residue data. The apparent flow values estimated for the 144 simulated tissue/input scenarios were considerably biased. After bias correction, the deviations between estimated and actual parameter values were (11.2 ± 6.4)% (vs. (105 ± 21)% without correction) for the flow, (3.6 ± 6.1)% for the permeability-surface area product, (5.8 ± 4.9)% for the vascular volume and (2.5 ± 4.1)% for the interstitial volume; individual deviations of more than 20% were the exception and only marginal. Increasing the duration of CA administration had a statistically significant but opposite effect on the accuracy of the estimated flow (declined) and intravascular volume (improved). Physiologically well-defined tissue parameters are structurally identifiable and accurately estimable from DCE data by the conceptually modified two-compartment model in combination with the bias correction. The accuracy of the bias-corrected flow is nearly comparable to that of the three other (theoretically unbiased) model parameters. As compared to conventional two-compartment modeling, this feature constitutes a major advantage for tracer kinetic analysis of both preclinical and clinical DCE imaging studies.
NASA Astrophysics Data System (ADS)
Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.
2012-11-01
Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
Correcting Memory Improves Accuracy of Predicted Task Duration
ERIC Educational Resources Information Center
Roy, Michael M.; Mitten, Scott T.; Christenfeld, Nicholas J. S.
2008-01-01
People are often inaccurate in predicting task duration. The memory bias explanation holds that this error is due to people having incorrect memories of how long previous tasks have taken, and these biased memories cause biased predictions. Therefore, the authors examined the effect on increasing predictive accuracy of correcting memory through…
Bakbergenuly, Ilyas; Morgenthaler, Stephan
2016-01-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group‐level studies or in meta‐analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log‐odds and arcsine transformations of the estimated probability p^, both for single‐group studies and in combining results from several groups or studies in meta‐analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta‐analysis and result in abysmal coverage of the combined effect for large K. We also propose bias‐correction for the arcsine transformation. Our simulations demonstrate that this bias‐correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta‐analyses of prevalence. PMID:27192062
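The overdispersion-induced bias is straightforward to reproduce numerically. The sketch below simulates beta-binomial counts with intracluster correlation ρ and measures the bias of the arcsine transform; it mirrors the effect the authors quantify, though not their exact simulation design.

```python
import numpy as np

rng = np.random.default_rng(1)

def arcsine_bias(p=0.2, n=50, rho=0.05, n_sim=200_000):
    """Monte Carlo estimate of the bias of g(p_hat) = arcsin(sqrt(p_hat))
    under beta-binomial overdispersion with intracluster correlation rho."""
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    p_i = rng.beta(a, b, size=n_sim)     # cluster-level success probabilities
    x = rng.binomial(n, p_i)             # overdispersed counts
    g_hat = np.arcsin(np.sqrt(x / n))
    return g_hat.mean() - np.arcsin(np.sqrt(p))

for rho in (0.01, 0.05, 0.10):
    print(rho, round(arcsine_bias(rho=rho), 4))
```

Consistent with the paper's analysis, the printed biases should grow roughly linearly in ρ for small ρ.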
Supernovae as probes of cosmic parameters: estimating the bias from under-dense lines of sight
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busti, V.C.; Clarkson, C.; Holanda, R.F.L., E-mail: vinicius.busti@uct.ac.za, E-mail: holanda@uepb.edu.br, E-mail: chris.clarkson@uct.ac.za
2013-11-01
Correctly interpreting observations of sources such as type Ia supernovae (SNe Ia) requires knowledge of the power spectrum of matter on AU scales — which is very hard to model accurately. Because under-dense regions account for much of the volume of the universe, light from a typical source probes a mean density significantly below the cosmic mean. The relative sparsity of sources implies that there could be a significant bias when inferring distances of SNe Ia, and consequently a bias in cosmological parameter estimation. While the weak lensing approximation should in principle give the correct prediction for this, linear perturbation theory predicts an effectively infinite variance in the convergence for ultra-narrow beams. We attempt to quantify the effect typically under-dense lines of sight might have on parameter estimation by considering three alternative methods for estimating distances, in addition to the usual weak lensing approximation. We find that in each case this not only increases the errors in the inferred density parameters, but also introduces a bias in the posterior value.
Dry Bias and Variability in Vaisala RS80-H Radiosondes: The ARM Experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, David D.; Lesht, B. M.; Clough, Shepard A.
2003-01-02
Thousands of comparisons between total precipitable water vapor (PWV) obtained from radiosonde (Vaisala RS80-H) profiles and PWV retrieved from a collocated microwave radiometer (MWR) were made at the Atmospheric Radiation Measurement (ARM) Program's Southern Great Plains Cloud and Radiation Testbed (SGP/CART) site in northern Oklahoma from 1994 to 2000. These comparisons show that the RS80-H radiosonde has an approximate 5% dry bias compared to the MWR. This observation is consistent with interpretations of Vaisala RS80 radiosonde data obtained during the Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA/COARE). In addition to the dry bias, analysis of the PWV comparisons, as well as of data obtained from dual-sonde soundings done at the SGP, shows that the calibration of the radiosonde humidity measurements varies considerably, both when the radiosondes come from different calibration batches and when the radiosondes come from the same calibration batch. This variability can result in peak-to-peak differences between radiosondes of greater than 25% in PWV. Because accurate representation of the vertical profile of water vapor is critical for ARM's science objectives, we have developed an empirical method for correcting the radiosonde humidity profiles that is based on a constant scaling factor. By using an independent set of observations and radiative transfer models to test the correction, we show that the constant humidity scaling method appears both to improve the accuracy and reduce the uncertainty of the radiosonde data. We also used the ARM data to examine a different, physically based correction scheme that was developed recently by scientists from Vaisala and the National Center for Atmospheric Research (NCAR). This scheme, which addresses the dry bias problem as well as other calibration-related problems with the RS80-H sensor, results in excellent agreement between the PWV retrieved from the MWR and integrated from the corrected radiosonde. However, because the physically based correction scheme does not address the apparently random calibration variations we observe, it does not reduce the variability either between radiosonde calibration batches or within individual calibration batches.
NASA Astrophysics Data System (ADS)
Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.
2015-12-01
Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme, based on a large regional climate model ensemble generated by the distributed weather@home setup[1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties. The present study constitutes a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
Li, Peng; Redden, David T.
2014-01-01
The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analysis of CRTs with few clusters, even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters needed using the t test and KC correction for CRTs with binary outcomes. The power levels predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach is recommended in CRTs with binary outcomes because of its fewer assumptions and robustness to misspecification of the covariance structure. PMID:25345738
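For readers who want to experiment with small-sample sandwich corrections, here is a hedged sketch in Python. Note that statsmodels ships the Mancl-DeRouen "bias_reduced" sandwich rather than the KC or FG corrections evaluated in the paper, so this only illustrates the general workflow; the cluster simulation is invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulate a small CRT: 10 clusters of 20, binary outcomes correlated
# within clusters via a cluster-level random effect (null treatment effect).
clusters, m = 10, 20
cid = np.repeat(np.arange(clusters), m)
treat = np.repeat(rng.permutation([0, 1] * (clusters // 2)), m)
u = np.repeat(rng.normal(0, 0.5, clusters), m)        # cluster effect
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.0 * treat + u))))

X = sm.add_constant(pd.DataFrame({"treat": treat}))
model = sm.GEE(y, X, groups=cid, family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())

# Standard robust sandwich vs the bias-reduced (Mancl-DeRouen) sandwich;
# the bias-reduced standard error is typically larger with few clusters.
for cov in ("robust", "bias_reduced"):
    res = model.fit(cov_type=cov)
    print(cov, res.bse["treat"])
```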
An entropy correction method for unsteady full potential flows with strong shocks
NASA Technical Reports Server (NTRS)
Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.
1986-01-01
An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.
The Role of Response Bias in Perceptual Learning
2015-01-01
Sensory judgments improve with practice. Such perceptual learning is often thought to reflect an increase in perceptual sensitivity. However, it may also represent a decrease in response bias, with unpracticed observers acting in part on a priori hunches rather than sensory evidence. To examine whether this is the case, 55 observers practiced making a basic auditory judgment (yes/no amplitude-modulation detection or forced-choice frequency/amplitude discrimination) over multiple days. With all tasks, bias was present initially, but decreased with practice. Notably, this was the case even on supposedly “bias-free” 2-alternative forced-choice tasks. In those tasks, observers did not favor the same response throughout (stationary bias), but did favor whichever response had been correct on previous trials (nonstationary bias). Means of correcting for bias are described. When applied, these showed that at least 13% of perceptual learning on a forced-choice task was due to a reduction in bias. In other situations, changes in bias were shown to obscure the true extent of learning, with changes in estimated sensitivity increasing once bias was corrected for. The possible causes of bias and the implications for our understanding of perceptual learning are discussed. PMID:25867609
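Separating sensitivity from bias is standard signal detection theory. The sketch below computes d′ and the criterion c from yes/no counts, using the common log-linear adjustment for extreme rates; this is a textbook recipe, not the paper's specific bias-correction procedure.

```python
from scipy.stats import norm

def dprime_and_criterion(hits, misses, fas, crs):
    """Sensitivity (d') and response bias (criterion c) from a yes/no
    detection task, with a log-linear correction to avoid infinite
    z-scores when hit or false-alarm rates are 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1)   # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1)        # corrected false-alarm rate
    d = norm.ppf(h) - norm.ppf(f)
    c = -0.5 * (norm.ppf(h) + norm.ppf(f))   # c = 0 means no bias
    return d, c

# An unpracticed observer with a conservative bias (few "yes" responses)
print(dprime_and_criterion(hits=60, misses=40, fas=10, crs=90))
```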
Conducting Meta-Analyses Based on p Values
van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.
2016-01-01
Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466
Komiskey, Matthew J.; Stuntebeck, Todd D.; Cox, Amanda L.; Frame, Dennis R.
2013-01-01
The effects of longitudinal slope on the estimation of discharge in a 0.762-meter (m) (depth at flume entrance) H flume were tested under controlled conditions with slopes from −8 to +8 percent and discharges from 1.2 to 323 liters per second. Compared to the stage-discharge rating for a longitudinal flume slope of zero, computed discharges were negatively biased (maximum −31 percent) when the flume was sloped downward from the front (entrance) to the back (exit), and positively biased (maximum 44 percent) when the flume was sloped upward. Biases increased with greater flume slopes and with lower discharges. A linear empirical relation was developed to compute a corrected reference stage for a 0.762-m H flume using measured stage and flume slope. The reference stage was then used to determine a corrected discharge from the stage-discharge rating. A dimensionally homogeneous correction equation also was developed, which could theoretically be used for all standard H-flume sizes. Use of the corrected discharge computation method for a sloped H flume was determined to have errors ranging from −2.2 to 4.6 percent compared to the H-flume measured discharge at a level position. These results emphasize the importance of the measurement of and the correction for flume slope during an edge-of-field study if the most accurate discharge estimates are desired.
Anderson, Samantha F; Maxwell, Scott E
2017-01-01
Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
A toolkit for measurement error correction, with a focus on nutritional epidemiology
Keogh, Ruth H; White, Ian R
2014-01-01
Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations it is not feasible to observe the true exposure, but one or more repeated exposure measurements may be available, for example blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors. PMID:24497385
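Regression calibration, the most widely used method in the toolkit, is short enough to sketch for the classical-error case. The simulation below is our own minimal illustration: a replicate measurement is used to estimate the reliability ratio, and the outcome model is then fitted on the calibrated exposure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Classical measurement error: W = X + U, with a replicate W2 available.
n = 5000
x = rng.normal(0, 1, n)              # true exposure (unobserved)
w1 = x + rng.normal(0, 0.8, n)       # error-prone measurement 1
w2 = x + rng.normal(0, 0.8, n)       # replicate measurement
y = 0.5 * x + rng.normal(0, 1, n)    # outcome model: true slope = 0.5

# Naive analysis: regressing y on w1 attenuates the slope.
naive = sm.OLS(y, sm.add_constant(w1)).fit()

# Regression calibration: replace w1 with E[X | W1], estimated from the
# replicates (classical-error case only; the paper also covers
# systematic, heteroscedastic and differential error).
lam = np.cov(w1, w2)[0, 1] / np.var(w1, ddof=1)   # reliability ratio
x_hat = w1.mean() + lam * (w1 - w1.mean())
rc = sm.OLS(y, sm.add_constant(x_hat)).fit()

print("naive slope:", naive.params[1])   # ~0.5 * lambda (attenuated)
print("RC slope:   ", rc.params[1])      # ~0.5
```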
Härkänen, Tommi; Kaikkonen, Risto; Virtala, Esa; Koskinen, Seppo
2014-11-06
To assess the nonresponse rates in a questionnaire survey with respect to administrative register data, and to correct the bias statistically. The Finnish Regional Health and Well-being Study (ATH) in 2010 was based on a national sample and several regional samples. Missing data analysis was based on socio-demographic register data covering the whole sample. Inverse probability weighting (IPW) and doubly robust (DR) methods were estimated using the logistic regression model, which was selected using the Bayesian information criteria. The crude, weighted and true self-reported turnout in the 2008 municipal election and prevalences of entitlements to specially reimbursed medication, and the crude and weighted body mass index (BMI) means were compared. The IPW method appeared to remove a relatively large proportion of the bias compared to the crude prevalence estimates of the turnout and the entitlements to specially reimbursed medication. Several demographic factors were shown to be associated with missing data, but few interactions were found. Our results suggest that the IPW method can improve the accuracy of results of a population survey, and the model selection provides insight into the structure of missing data. However, health-related missing data mechanisms are beyond the scope of statistical methods, which mainly rely on socio-demographic information to correct the results.
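A minimal IPW sketch in Python, under hypothetical column names (the actual ATH register covariates and outcomes are richer): model response from register covariates available for the whole sample, then weight respondents by the inverse of their fitted response probability.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Register covariates are known for the whole sample; 'responded' flags
# questionnaire return. All columns here are invented stand-ins.
df = pd.DataFrame({
    "age":       rng.integers(18, 90, 1000),
    "female":    rng.integers(0, 2, 1000),
    "responded": rng.integers(0, 2, 1000),
    "voted":     rng.integers(0, 2, 1000),
})

# Step 1: model response probability from register data.
X = sm.add_constant(df[["age", "female"]])
p_resp = sm.Logit(df["responded"], X).fit(disp=0).predict(X)

# Step 2: weight respondents by the inverse of their response probability.
# With real data, response depends on the covariates and the weighted
# estimate shifts toward the full-sample value.
resp = df[df["responded"] == 1].copy()
resp["w"] = 1.0 / p_resp[df["responded"] == 1]

crude = resp["voted"].mean()
ipw = np.average(resp["voted"], weights=resp["w"])
print(f"crude turnout {crude:.3f}, IPW-corrected {ipw:.3f}")
```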
Empirical Validation of a Procedure to Correct Position and Stimulus Biases in Matching-to-Sample
ERIC Educational Resources Information Center
Kangas, Brian D.; Branch, Marc N.
2008-01-01
The development of position and stimulus biases often occurs during initial training on matching-to-sample tasks. Furthermore, without intervention, these biases can be maintained via intermittent reinforcement provided by matching-to-sample contingencies. The present study evaluated the effectiveness of a correction procedure designed to…
To Duc, Khanh
2017-11-18
Receiver operating characteristic (ROC) surface analysis is usually employed to assess the accuracy of a medical diagnostic test when there are three ordered disease statuses (e.g. non-diseased, intermediate, diseased). In practice, verification bias can occur due to missingness of the true disease status and can lead to a distorted conclusion on diagnostic accuracy. In such situations, bias-corrected inference tools are required. This paper introduces an R package, named bcROCsurface, which provides utility functions for verification bias-corrected ROC surface analysis. A Shiny web application for the correction of verification bias in ROC surface estimation has also been developed. bcROCsurface may become an important tool for the statistical evaluation of three-class diagnostic markers in the presence of verification bias. The R package, readme and example data are available on CRAN. The web interface enables users less familiar with R to evaluate the accuracy of diagnostic tests, and can be found at http://khanhtoduc.shinyapps.io/bcROCsurface_shiny/.
On the Limitations of Variational Bias Correction
NASA Technical Reports Server (NTRS)
Moradi, Isaac; Mccarty, Will; Gelaro, Ronald
2018-01-01
Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between background error, forward-operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
NASA Astrophysics Data System (ADS)
Jayasekera, D. L.; Kaluarachchi, J.; Kim, U.
2011-12-01
Rural river basins with sufficient water availability to maintain economic livelihoods can be affected by seasonal fluctuations of precipitation and sometimes by droughts. In addition, climate change impacts can alter future water availability. General Circulation Models (GCMs) provide credible quantitative estimates of future climate conditions, but such estimates are often characterized by bias and coarse spatial resolution, making it necessary to downscale the outputs for use in regional hydrologic models. This study develops a methodology to downscale and project future monthly precipitation in moderate-scale basins where data are limited. A stochastic framework for single-site and multi-site generation of weekly rainfall is developed while preserving the historical temporal and spatial correlation structures. The spatial correlations in the simulated occurrences and amounts are induced using spatially correlated yet serially independent random numbers. This method is applied to generate weekly precipitation data for a 100-year period in the Nam Ngum River Basin (NNRB), which has a land area of 16,780 km2 located in Lao P.D.R. The method is developed and applied using precipitation data from 1961 to 2000 for 10 selected weather stations that represent the basin rainfall characteristics. A bias-correction method, based on fitted theoretical probability distribution transformations, is applied to improve the monthly mean frequency, intensity and amount of raw GCM precipitation predicted at a given weather station using CGCM3.1 and ECHAM5 for the SRES A2 emission scenario. The bias-correction procedure adjusts GCM precipitation to approximate the long-term frequency and intensity distribution observed at a given weather station. The index of agreement and mean absolute error are determined to assess the overall ability and performance of the bias correction method. The generated precipitation series, aggregated at a monthly time step, was perturbed by change factors estimated using the corrected GCM and baseline scenarios for the future periods 2011-2050 and 2051-2090. A network-based hydrologic and water resources model, WEAP, was used to simulate current water allocation and management practices to identify the impacts of climate change in the 20th century. The results of this work are used to identify the multiple challenges faced by stakeholders and planners in water allocation for competing demands in the presence of climate change impacts.
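The perturbation ("change factor") step at the end of this workflow is compact enough to sketch. The code below computes multiplicative monthly change factors from GCM baseline and future runs and applies them to a locally generated series; the distribution-based bias correction of the raw GCM output described above is omitted, and all data are synthetic.

```python
import numpy as np

def monthly_change_factors(gcm_baseline, gcm_future):
    """Multiplicative change factors per calendar month from GCM output.
    Inputs are (year, month) precipitation arrays; a minimal sketch of the
    perturbation step, not the study's full bias-correction procedure."""
    cf = np.zeros(12)
    for m in range(12):
        cf[m] = gcm_future[:, m].mean() / gcm_baseline[:, m].mean()
    return cf

rng = np.random.default_rng(7)
gcm_base = rng.gamma(2.0, 60.0, size=(40, 12))   # baseline run, mm/month
gcm_fut = rng.gamma(2.0, 55.0, size=(40, 12))    # e.g. 2051-2090 run
obs = rng.gamma(2.0, 65.0, size=(40, 12))        # generated local series

cf = monthly_change_factors(gcm_base, gcm_fut)
future_scenario = obs * cf                       # perturbed local series
print(np.round(cf, 2))
```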
Dye bias correction in dual-labeled cDNA microarray gene expression measurements.
Rosenzweig, Barry A; Pine, P Scott; Domon, Olen E; Morris, Suzanne M; Chen, James J; Sistare, Frank D
2004-01-01
A significant limitation to the analytical accuracy and precision of dual-labeled spotted cDNA microarrays is the signal error due to dye bias. Transcript-dependent dye bias may be due to gene-specific differences of incorporation of two distinctly different chemical dyes and the resultant differential hybridization efficiencies of these two chemically different targets for the same probe. Several approaches were used to assess and minimize the effects of dye bias on fluorescent hybridization signals and maximize the experimental design efficiency of a cell culture experiment. Dye bias was measured at the individual transcript level within each batch of simultaneously processed arrays by replicate dual-labeled split-control sample hybridizations and accounted for a significant component of fluorescent signal differences. This transcript-dependent dye bias alone could introduce unacceptably high numbers of both false-positive and false-negative signals. We found that within a given set of concurrently processed hybridizations, the bias is remarkably consistent and therefore measurable and correctable. The additional microarrays and reagents required for paired technical replicate dye-swap corrections commonly performed to control for dye bias could be costly to end users. Incorporating split-control microarrays within a set of concurrently processed hybridizations to specifically measure dye bias can eliminate the need for technical dye swap replicates and reduce microarray and reagent costs while maintaining experimental accuracy and technical precision. These data support a practical and more efficient experimental design to measure and mathematically correct for dye bias. PMID:15033598
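The core of the split-control strategy can be sketched in a few lines: within a batch, the mean log-ratio of each gene across self-self hybridizations estimates that gene's dye bias, which is then subtracted from the experimental log-ratios. This is our simplified reading of the design, not the authors' exact pipeline.

```python
import numpy as np

def dye_bias_correct(log_ratios, selfself_log_ratios):
    """Per-gene dye-bias correction for two-color arrays.

    log_ratios:          genes x arrays, log2(Cy5/Cy3) from experiments
    selfself_log_ratios: genes x arrays, log2(Cy5/Cy3) from split-control
                         (self vs self) hybridizations in the same batch
    For a self-self hybridization the true log-ratio is 0, so the mean
    residual per gene estimates that gene's dye bias within the batch."""
    bias = selfself_log_ratios.mean(axis=1, keepdims=True)
    return log_ratios - bias

rng = np.random.default_rng(8)
true_bias = rng.normal(0, 0.4, size=(1000, 1))           # gene-specific bias
ctrl = true_bias + rng.normal(0, 0.1, size=(1000, 3))    # split controls
expt = true_bias + rng.normal(0, 0.1, size=(1000, 4))    # no true change
print(np.abs(expt).mean(), np.abs(dye_bias_correct(expt, ctrl)).mean())
```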
Experimenter Confirmation Bias and the Correction of Science Misconceptions
NASA Astrophysics Data System (ADS)
Allen, Michael; Coole, Hilary
2012-06-01
This paper describes a randomised educational experiment ( n = 47) that examined two different teaching methods and compared their effectiveness at correcting one science misconception using a sample of trainee primary school teachers. The treatment was designed to promote engagement with the scientific concept by eliciting emotional responses from learners that were triggered by their own confirmation biases. The treatment group showed superior learning gains to control at post-test immediately after the lesson, although benefits had dissipated after 6 weeks. Findings are discussed with reference to the conceptual change paradigm and to the importance of feeling emotion during a learning experience, having implications for the teaching of pedagogies to adults that have been previously shown to be successful with children.
NASA Astrophysics Data System (ADS)
Charrier, Jessica G.; McFall, Alexander S.; Vu, Kennedy K.-T.; Baroi, James; Olea, Catalina; Hasson, Alam; Anastasio, Cort
2016-11-01
The dithiothreitol (DTT) assay is widely used to measure the oxidative potential of particulate matter. Results are typically presented in mass-normalized units (e.g., pmols DTT lost per minute per microgram PM) to allow for comparison among samples. Use of this unit assumes that the mass-normalized DTT response is constant and independent of the mass concentration of PM added to the DTT assay. However, based on previous work that identified non-linear DTT responses for copper and manganese, this basic assumption (that the mass-normalized DTT response is independent of the concentration of PM added to the assay) should not be true for samples where Cu and Mn contribute significantly to the DTT signal. To test this we measured the DTT response at multiple PM concentrations for eight ambient particulate samples collected at two locations in California. The results confirm that for samples with significant contributions from Cu and Mn, the mass-normalized DTT response can strongly depend on the concentration of PM added to the assay, varying by up to an order of magnitude for PM concentrations between 2 and 34 μg mL-1. This mass dependence confounds useful interpretation of DTT assay data in samples with significant contributions from Cu and Mn, requiring additional quality control steps to check for this bias. To minimize this problem, we discuss two methods to correct the mass-normalized DTT result and we apply those methods to our samples. We find that it is possible to correct the mass-normalized DTT result, although the correction methods have some drawbacks and add uncertainty to DTT analyses. More broadly, other DTT-active species might also have non-linear concentration-responses in the assay and cause a bias. In addition, the same problem of Cu- and Mn-mediated bias in mass-normalized DTT results might affect other measures of acellular redox activity in PM and needs to be addressed.
O'Brien, D J; León-Vintró, L; McClean, B
2016-01-01
The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions, and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived from small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived from calculations with a Geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors, except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values, by 3.7% for a 1.1 cm diameter field and by more for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
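The semiempirical factor combines the two data sources in the standard small-field formalism, shown below with invented numbers: the Monte Carlo dose-to-water output ratio divided by the measured detector reading ratio.

```python
def output_correction_factor(dose_ratio_mc, reading_ratio_meas):
    """Semiempirical small-field output correction factor:
    k = [D_w(f_clin)/D_w(f_ref)]_MC / [M(f_clin)/M(f_ref)]_measured,
    i.e. the Monte Carlo dose-to-water output factor divided by the
    detector-measured output ratio (standard k-factor formalism; the
    numbers below are purely illustrative)."""
    return dose_ratio_mc / reading_ratio_meas

# A detector that over-responds in a 1 cm field needs k < 1:
print(output_correction_factor(dose_ratio_mc=0.68, reading_ratio_meas=0.71))
```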
NASA Astrophysics Data System (ADS)
Simpson, I.
2015-12-01
A long-standing bias among global climate models (GCMs) is their incorrect representation of the wintertime circulation of the North Atlantic region. Specifically, models tend to exhibit a North Atlantic jet (and associated storm track) that is too zonal, extending across central Europe, when it should tilt northward toward Scandinavia. GCMs consistently predict substantial changes in the large-scale circulation in this region, consisting of a localized anti-cyclonic circulation centered over the Mediterranean, accompanied by increased aridity there and increased storminess over Northern Europe. Here, we present preliminary results from experiments that are designed to address the question of what impact the climatological circulation biases might have on this predicted future response. Climate change experiments will be compared in two versions of the Community Earth System Model: the first is a free-running version of the model, as typically used in climate prediction; the second is a bias-corrected version of the model in which a seasonally varying cycle of bias-correction tendencies is applied to the wind and temperature fields. These bias-correction tendencies are designed to account for deficiencies in the fast parameterized processes, with the aim of pushing the model toward a more realistic climatology. While these experiments come with the caveat that they assume the bias-correction tendencies will remain constant in time, they allow for an initial assessment, through controlled experiments, of the impact that biases in the climatological circulation can have on future predictions in this region. They will also motivate future work that can make use of the bias-correction tendencies to understand the underlying physical processes responsible for the incorrect tilt of the jet.
An Analysis of the Individual Effects of Sex Bias.
ERIC Educational Resources Information Center
Smith, Richard M.
Most attempts to correct for the presence of biased test items in a measurement instrument have been either to remove the items or to adjust the scores to correct for the bias. Using the Rasch Dichotomous Response Model and the independent ability estimates derived from three sets of items, those which favor females, those which favor males, and…
ERIC Educational Resources Information Center
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim
2014-01-01
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
The Detection and Correction of Bias in Student Ratings of Instruction.
ERIC Educational Resources Information Center
Haladyna, Thomas; Hess, Robert K.
1994-01-01
A Rasch model was used to detect and correct bias in Likert rating scales used to assess student perceptions of college teaching, using a database of ratings. Statistical corrections were significant, supporting the model's potential utility. Recommendations are made for a theoretical rationale and further research on the model. (Author/MSE)
Bryere, Josephine; Pornet, Carole; Dejardin, Olivier; Launay, Ludivine; Guittet, Lydia; Launoy, Guy
2015-04-01
Many international ecological studies that examine the link between social environment and cancer incidence use a deprivation index based on the subjects' address at the time of diagnosis to evaluate socioeconomic status. Past social circumstances are thus ignored, which leads to misclassification bias in the estimates. The objectives of this study were to include the latency delay in such estimations and to observe the effects. We adapted a previous methodology to correct estimates of the influence of socioeconomic environment on cancer incidence by considering the latency delay when measuring socioeconomic status. We implemented this method using French data. We evaluated the misclassification due to social mobility with census data and corrected the relative risks. Accounting for misclassification changed the relative risks, and the corrected values departed further from 1 than the uncorrected ones. For cancers of the lung, colon-rectum, lips-mouth-pharynx, kidney and esophagus in men, the excess incidence in the deprived categories was increased by the correction. By not taking the latency period into account when measuring socioeconomic status, the burden of cancer associated with social inequality may be underestimated. Copyright © 2014 Elsevier Ltd. All rights reserved.
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimate of the analytical performance specifications for the quality required to allow sharing of common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used to define analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results, with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
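A Python rendering of the NORMINV-style calculation may help: scipy's norm.ppf/norm.cdf play the role of Excel's NORMINV/NORMSDIST. The formula below, which treats measured results as Gaussian with an added normalized bias and imprecision and computes the fraction falling beyond a reference limit, is our reading of the described approach rather than a transcription from the paper; the 4.4% target is applied per limit here as an assumption.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

Z = norm.ppf(0.975)  # upper reference limit in between-subject SD units

def fraction_above_upper(bias, imprecision):
    """Fraction of results beyond the upper reference limit when the assay
    adds a normalized bias and imprecision (both as multiples of the
    between-subject SD); an assumed formalization of the method above."""
    s = np.sqrt(1 + imprecision**2)
    return 1 - norm.cdf((Z - bias) / s)

# Maximum admissible bias for a given imprecision at a 4.4% per-limit level:
for imp in (0.0, 0.25, 0.5):
    b = brentq(lambda x: fraction_above_upper(x, imp) - 0.044, 0.0, 2.0)
    print(f"imprecision {imp:.2f} -> max bias {b:.3f}")
```

The loop illustrates the trade-off the paper describes: the larger the imprecision, the smaller the bias that can be tolerated for the same fraction of reference individuals outside the limit.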
NASA Astrophysics Data System (ADS)
Collados-Lara, Antonio-Juan; Pulido-Velazquez, David; Pardo-Iguzquiza, Eulogio
2017-04-01
Assessing impacts of potential future climate change scenarios in precipitation and temperature is essential to design adaptive strategies in water resources systems. The objective of this work is to analyze the possibilities of different statistical downscaling methods to generate future potential scenarios in an Alpine catchment from historical data and the available climate model simulations performed in the frame of the CORDEX EU project. The initial information employed to define these downscaling approaches is the historical climatic data (taken from the Spain02 project for the period 1971-2000, with a spatial resolution of 12.5 km) and the future series provided by climatic models for the horizon period 2071-2100. We have used information coming from nine climate model simulations (obtained from five different regional climate models (RCMs) nested within four different global climate models (GCMs)) from the European CORDEX project. In our application we have focused on the Representative Concentration Pathway (RCP) 8.5 emissions scenario, which is the most unfavorable scenario considered in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). For each RCM we have generated future climate series for the period 2071-2100 by applying two different approaches, bias correction and delta change, and five different transformation techniques (first moment correction, first and second moment correction, regression functions, quantile mapping using distribution-derived transformation and quantile mapping using empirical quantiles) for both of them. Ensembles of the obtained series are proposed to obtain more representative potential future climate scenarios to be employed to study potential impacts. In this work we propose a non-equifeasible combination of the future series, giving more weight to those coming from models (delta-change approaches) or combinations of models and techniques (bias-correction approaches) that provide a better approximation to the basic and drought statistics of the historical data. A multi-objective analysis using basic statistics (mean, standard deviation and asymmetry coefficient) and drought statistics (duration, magnitude and intensity) has been performed to identify which models are better in terms of goodness of fit in reproducing the historical series. The drought statistics have been obtained from the Standardized Precipitation Index (SPI) series using the theory of runs. This analysis allows us to identify the best RCM and the best combination of model and correction technique within the bias-correction method. We have also analyzed the possibilities of using different stochastic weather generators to approximate the basic and drought statistics of the historical series. These analyses have been performed in our case study in a lumped and in a distributed way in order to assess their sensitivity to the spatial scale. The statistics of the future temperature series obtained with the different ensemble options are quite homogeneous, but the precipitation shows a higher sensitivity to the adopted method and spatial scale. The global increments in the mean temperature values are 31.79 %, 31.79 %, 31.03 % and 31.74 % for the distributed bias-correction, distributed delta-change, lumped bias-correction and lumped delta-change ensembles respectively, and in the precipitation they are -25.48 %, -28.49 %, -26.42 % and -27.35 % respectively. Acknowledgments: This research work has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank the Spain02 and CORDEX projects for the data provided for this study and the R package qmap.
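The drought statistics mentioned above can be sketched compactly: fit a gamma distribution to monthly precipitation, transform to SPI, and extract runs below a threshold. Per-calendar-month fitting and zero-precipitation handling, which a production SPI implementation needs, are omitted here for brevity.

```python
import numpy as np
from scipy.stats import gamma, norm

def spi(precip_monthly):
    """Standardized Precipitation Index via a gamma fit (a common SPI
    recipe, simplified: one fit for the whole series)."""
    a, loc, scale = gamma.fit(precip_monthly, floc=0)
    return norm.ppf(gamma.cdf(precip_monthly, a, loc=loc, scale=scale))

def drought_runs(spi_series, threshold=-1.0):
    """Theory of runs: a drought is a maximal run of SPI below the
    threshold; returns (duration, magnitude) per event, where magnitude
    is the summed deficit below the threshold."""
    events, dur, mag = [], 0, 0.0
    for v in spi_series:
        if v < threshold:
            dur, mag = dur + 1, mag + (threshold - v)
        elif dur:
            events.append((dur, mag))
            dur, mag = 0, 0.0
    if dur:
        events.append((dur, mag))
    return events  # intensity = magnitude / duration

rng = np.random.default_rng(9)
s = spi(rng.gamma(2.0, 40.0, 360))   # 30 years of synthetic monthly totals
print(drought_runs(s)[:5])
```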
Mi, Xiaojuan; Hammill, Bradley G; Curtis, Lesley H; Lai, Edward Chia-Cheng; Setoguchi, Soko
2016-11-20
Observational comparative effectiveness and safety studies are often subject to immortal person-time, a period of follow-up during which outcomes cannot occur because of the treatment definition. Common approaches, like excluding immortal time from the analysis or naïvely including immortal time in the analysis, are known to result in biased estimates of treatment effect. Other approaches, such as the Mantel-Byar and landmark methods, have been proposed to handle immortal time. Little is known about the performance of the landmark method in different scenarios. We conducted extensive Monte Carlo simulations to assess the performance of the landmark method compared with other methods in settings that reflect realistic scenarios. We considered four landmark times for the landmark method. We found that the Mantel-Byar method provided unbiased estimates in all scenarios, whereas the exclusion and naïve methods resulted in substantial bias when the hazard of the event was constant or decreased over time. The landmark method performed well in correcting immortal person-time bias in all scenarios when the treatment effect was small, and provided unbiased estimates when there was no treatment effect. The bias associated with the landmark method tended to be small when the treatment rate was higher in the early follow-up period than it was later. These findings were confirmed in a case study of chronic obstructive pulmonary disease. Copyright © 2016 John Wiley & Sons, Ltd.
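A minimal landmark-method sketch using the lifelines package, under assumed column names: keep subjects still at risk at the landmark, classify exposure by treatment initiated before the landmark, and restart the clock there.

```python
from lifelines import CoxPHFitter

def landmark_analysis(df, landmark, duration_col="time", event_col="event",
                      rx_time_col="treat_time"):
    """Landmark method for immortal-time bias (a sketch under assumed
    column names: 'time'/'event' for follow-up, 'treat_time' = time of
    treatment initiation, NaN if never treated).

    1. Keep only subjects still at risk at the landmark time.
    2. Classify exposure by treatment received *before* the landmark
       (NaN <= landmark evaluates False, i.e. untreated).
    3. Start the clock at the landmark."""
    at_risk = df[df[duration_col] > landmark].copy()
    at_risk["treated"] = (at_risk[rx_time_col] <= landmark).astype(int)
    at_risk["t_lm"] = at_risk[duration_col] - landmark
    cph = CoxPHFitter()
    cph.fit(at_risk[["t_lm", event_col, "treated"]],
            duration_col="t_lm", event_col=event_col)
    return cph

# e.g.: hr = landmark_analysis(cohort, landmark=90).hazard_ratios_["treated"]
```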
Malyarenko, Dariya I.; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K.; Ross, Brian D.; Chenevert, Thomas L.
2015-01-01
Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from magnet isocenter. Our previously-described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted both in shift and broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to isocenter. Direction-average DW-bias correctors were computed based on the known gradient design provided by vendor. The computed bias maps were empirically confirmed by coronal DWI measurements for an isotropic gel-flood phantom. Both phantom and renal tissue ADC bias for off-center measurements was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in presence of IVIM perfusion. No significant bias impact was observed for IVIM perfusion fraction. PMID:26811845
Using Twitter to Measure Public Discussion of Diseases: A Case Study
Schwartz, H Andrew; Hill, Shawndra; Merchant, Raina M; Arango, Catalina; Ungar, Lyle
2015-01-01
Background Twitter is increasingly used to estimate disease prevalence, but such measurements can be biased, due to both biased sampling and inherent ambiguity of natural language. Objective We characterized the extent of these biases and how they vary with disease. Methods We correlated self-reported prevalence rates for 22 diseases from Experian’s Simmons National Consumer Study (n=12,305) with the number of times these diseases were mentioned on Twitter during the same period (2012). We also identified and corrected for two types of bias present in Twitter data: (1) demographic variance between US Twitter users and the general US population; and (2) natural language ambiguity, which creates the possibility that mention of a disease name may not actually refer to the disease (eg, “heart attack” on Twitter often does not refer to myocardial infarction). We measured the correlation between disease prevalence and Twitter disease mentions both with and without bias correction. This allowed us to quantify each disease’s overrepresentation or underrepresentation on Twitter, relative to its prevalence. Results Our sample included 80,680,449 tweets. Adjusting disease prevalence to correct for Twitter demographics more than doubles the correlation between Twitter disease mentions and disease prevalence in the general population (from .113 to .258, P<.001). In addition, diseases varied widely in how often mentions of their names on Twitter actually referred to the diseases, from 14.89% (3827/25,704) of instances (for stroke) to 99.92% (5044/5048) of instances (for arthritis). Applying ambiguity correction to our Twitter corpus achieves a correlation between disease mentions and prevalence of .208 (P<.001). Simultaneously applying correction for both demographics and ambiguity more than triples the baseline correlation to .366 (P<.001). Compared with prevalence rates, cancer appeared most overrepresented in Twitter, whereas high cholesterol appeared most underrepresented. Conclusions Twitter is a potentially useful tool to measure public interest in and concerns about different diseases, but when comparing diseases, improvements can be made by adjusting for population demographics and word ambiguity. PMID:26925459
A simple and effective solution to the constrained QM/MM simulations
NASA Astrophysics Data System (ADS)
Takahashi, Hideaki; Kambe, Hiroyuki; Morita, Akihiro
2018-04-01
It is a promising extension of the quantum mechanical/molecular mechanical (QM/MM) approach to incorporate the solvent molecules surrounding the QM solute into the QM region to ensure an adequate description of the electronic polarization of the solute. However, the solvent molecules in the QM region inevitably diffuse into the MM bulk during the QM/MM simulation. In this article, we developed a simple and efficient method, referred to as the "boundary constraint with correction (BCC)," to prevent the diffusion of the solvent water molecules by means of a constraint potential. The point of the BCC method is to compensate for the error in a statistical property due to the bias potential by adding a correction term obtained through a set of QM/MM simulations. The BCC method is designed so that the effect of the bias potential completely vanishes when the QM solvent is identical to the MM solvent. Furthermore, the desirable conditions, that is, the continuities of energy and force and the conservations of energy and momentum, are fulfilled in principle. We applied the QM/MM-BCC method to a hydronium ion (H3O+) in aqueous solution to construct the radial distribution function (RDF) of the solvent around the solute. It was demonstrated that the correction term largely compensated for the error and brought the RDF into good agreement with the result given by an ab initio molecular dynamics simulation.
Schoenecker, Kathryn A.; Lubow, Bruce C.
2016-01-01
Accurately estimating the size of wildlife populations is critical to wildlife management and conservation of species. Raw counts or “minimum counts” are still used as a basis for wildlife management decisions. Uncorrected raw counts are not only negatively biased due to failure to account for undetected animals, but also provide no estimate of precision on which to judge the utility of counts. We applied a hybrid population estimation technique that combined sightability modeling, radio collar-based mark-resight, and simultaneous double count (double-observer) modeling to estimate the population size of elk in a high elevation desert ecosystem. Combining several models maximizes the strengths of each individual model while minimizing their singular weaknesses. We collected data with aerial helicopter surveys of the elk population in the San Luis Valley and adjacent mountains of Colorado, USA, in 2005 and 2007. We present estimates from 7 alternative analyses: 3 based on different methods for obtaining a raw count and 4 based on different statistical models to correct for sighting probability bias. The most reliable of these approaches is a hybrid double-observer sightability model (model MH), which uses detection patterns of 2 independent observers in a helicopter plus telemetry-based detections of radio collared elk groups. Data were fit to customized mark-resight models with individual sighting covariates. Error estimates were obtained by a bootstrapping procedure. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to double-observer modeling. The resulting population estimate corrected for multiple sources of undercount bias that, if left uncorrected, would have underestimated the true population size by as much as 22.9%. Our comparison of these alternative methods demonstrates how various components of our method contribute to improving the final estimate and demonstrates why each is necessary.
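The double-observer core of the hybrid estimator reduces, in its simplest form, to detection probabilities estimated from which observer saw which groups. The sketch below drops the sightability covariates, telemetry component, and bootstrap used in the study.

```python
def double_observer_estimate(seen_1_only, seen_2_only, seen_both):
    """Independent double-observer abundance estimator (the simple
    Lincoln-Petersen-style core of the hybrid approach, without the
    covariates or mark-resight component of the full model)."""
    p1 = seen_both / (seen_both + seen_2_only)   # observer 1 detection prob.
    p2 = seen_both / (seen_both + seen_1_only)   # observer 2 detection prob.
    p_any = 1 - (1 - p1) * (1 - p2)              # prob. either detects a group
    n_seen = seen_1_only + seen_2_only + seen_both
    return n_seen / p_any

# e.g. 40 groups seen only by the front observer, 25 only by the rear,
# 120 by both -> estimate exceeds the 185 groups actually seen
print(double_observer_estimate(40, 25, 120))
```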
Mizukami, Naoki; Clark, Martyn P.; Gutmann, Ethan D.; Mendoza, Pablo A.; Newman, Andrew J.; Nijssen, Bart; Livneh, Ben; Hay, Lauren E.; Arnold, Jeffrey R.; Brekke, Levi D.
2016-01-01
Continental-domain assessments of climate change impacts on water resources typically rely on statistically downscaled climate model outputs to force hydrologic models at a finer spatial resolution. This study examines the effects of four statistical downscaling methods [bias-corrected constructed analog (BCCA), bias-corrected spatial disaggregation applied at daily (BCSDd) and monthly scales (BCSDm), and asynchronous regression (AR)] on retrospective hydrologic simulations using three hydrologic models with their default parameters (the Community Land Model, version 4.0; the Variable Infiltration Capacity model, version 4.1.2; and the Precipitation–Runoff Modeling System, version 3.0.4) over the contiguous United States (CONUS). Biases of hydrologic simulations forced by statistically downscaled climate data relative to the simulation with observation-based gridded data are presented. Each statistical downscaling method produces different meteorological portrayals including precipitation amount, wet-day frequency, and the energy input (i.e., shortwave radiation), and their interplay affects estimations of precipitation partitioning between evapotranspiration and runoff, extreme runoff, and hydrologic states (i.e., snow and soil moisture). The analyses show that BCCA underestimates annual precipitation by as much as −250 mm, leading to unreasonable hydrologic portrayals over the CONUS for all models. Although the other three statistical downscaling methods produce a comparable precipitation bias ranging from −10 to 8 mm across the CONUS, BCSDd severely overestimates the wet-day fraction by up to 0.25, leading to different precipitation partitioning compared to the simulations with other downscaled data. Overall, the choice of downscaling method contributes to less spread in runoff estimates (by a factor of 1.5–3) than the choice of hydrologic model with use of the default parameters if BCCA is excluded.
Estimation of the electromagnetic bias from retracked TOPEX data
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto; Martin, Jan M.
1994-01-01
We examine the electromagnetic (EM) bias by using retracked TOPEX altimeter data. In contrast to previous studies, we use a parameterization of the EM bias which does not make stringent assumptions about the form of the correction or its global behavior. We find that the most effective single parameter correction uses the altimeter-estimated wind speed but that other parameterizations, using a wave age related parameter of significant wave height, may also significantly reduce the repeat pass variance. The different corrections are compared, and their improvement of the TOPEX height variance is quantified.
Can the behavioral sciences self-correct? A social epistemic study.
Romero, Felipe
2016-12-01
Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may converge upon a correct estimate or not depending on the social structure of the community that uses it. Based on this study, I argue that methodological explanations of the "replicability crisis" in psychology are limited and propose an alternative explanation in terms of biases. Finally, I conclude suggesting that scientific self-correction should be understood as an interaction effect between inference methods and social structures. Copyright © 2016 Elsevier Ltd. All rights reserved.
Methodological challenges to bridge the gap between regional climate and hydrology models
NASA Astrophysics Data System (ADS)
Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph; Felder, Guido
2017-04-01
The frequency and severity of floods worldwide, together with their impacts, are expected to increase under climate change scenarios. It is therefore very important to gain insight into the physical mechanisms responsible for such events in order to constrain the associated uncertainties. Model simulations of the climate and hydrological processes are important tools that can provide insight into the underlying physical processes and thus enable an accurate assessment of the risks. Coupled together, they can provide a physically consistent picture that makes it possible to assess the phenomenon in a comprehensive way. However, climate and hydrological models work at different temporal and spatial scales, so there are a number of methodological challenges that need to be carefully addressed. An important issue pertains to the presence of biases in the simulation of precipitation. Climate models in general, and Regional Climate Models (RCMs) in particular, are affected by a number of systematic biases that limit their reliability. In many studies, most prominently assessments of changes due to climate change, such biases are minimised by applying the so-called delta approach, which focuses on changes and disregards absolute values that are more affected by biases. However, this approach is not suitable in this scenario, as the absolute value of precipitation, rather than the change, is fed into the hydrological model. Therefore, the bias has to be removed beforehand, a complex matter for which various methodologies have been proposed. In this study, we apply and discuss the advantages and caveats of two different methodologies that correct the simulated precipitation to minimise differences with respect to an observational dataset: a linear fit (FIT) of the accumulated distributions and Quantile Mapping (QM). The target region is Switzerland, and therefore the observational dataset is provided by MeteoSwiss. The RCM is the Weather Research and Forecasting model (WRF), driven at the boundaries by the Community Earth System Model (CESM). The raw simulation driven by CESM exhibits prominent biases that stand out in the evolution of the annual cycle and demonstrate that the correction of biases is mandatory in this type of study, rather than a minor correction that might be neglected. The simulation spans the period 1976 - 2005, although the application of the correction is carried out on a daily basis. Both methods lead to a corrected precipitation field that respects the temporal evolution of the simulated precipitation while mimicking the distribution of precipitation in the observations. Due to the nature of the two methodologies, there are important differences between the products of the two corrections, which lead to datasets with different properties. FIT is generally more accurate regarding the reproduction of the tails of the distribution, i.e. extreme events, whereas the nature of QM renders it a general-purpose correction whose skill is equally distributed across the full distribution of precipitation, including central values.
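Of the two corrections discussed, quantile mapping is the easier to sketch: each simulated value is replaced by the observed value occupying the same quantile in the reference period. The following minimal empirical implementation is a sketch, not the authors' code; `sim_ref` and `obs_ref` are hypothetical reference-period series:

```python
import numpy as np

def quantile_map(sim_future, sim_ref, obs_ref):
    """Empirical quantile mapping: map each simulated value through the
    simulated reference CDF, then back through the observed quantiles."""
    probs = np.linspace(0.01, 0.99, 99)
    sim_q = np.quantile(sim_ref, probs)
    obs_q = np.quantile(obs_ref, probs)
    # CDF position of each value under the simulated reference climate...
    cdf = np.interp(sim_future, sim_q, probs, left=probs[0], right=probs[-1])
    # ...mapped back through the observed quantile function.
    return np.interp(cdf, probs, obs_q)
```

By construction, the corrected series inherits the observed distribution while preserving the rank order, and hence the temporal evolution, of the simulated values.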
Use of regional climate model output for hydrologic simulations
Hay, L.E.; Clark, M.P.; Wilby, R.L.; Gutowski, W.J.; Leavesley, G.H.; Pan, Z.; Arritt, R.W.; Takle, E.S.
2002-01-01
Daily precipitation and maximum and minimum temperature time series from a regional climate model (RegCM2) configured using the continental United States as a domain and run on a 52-km (approximately) spatial resolution were used as input to a distributed hydrologic model for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; east fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily datasets of precipitation and maximum and minimum temperature were developed from measured data for each basin. These datasets included precipitation and temperature data for all stations (hereafter, All-Sta) located within the area of the RegCM2 output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and All-Sta data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and All-Sta-based simulations of runoff show little skill on a daily basis [Nash-Sutcliffe (NS) values range from 0.05 to 0.37 for RegCM2 and -0.08 to 0.65 for All-Sta]. When the precipitation and temperature biases are corrected in the RegCM2 output and All-Sta data (Bias-RegCM2 and Bias-All, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins (NS values range from 0.41 to 0.66 for RegCM2 and 0.60 to 0.76 for All-Sta). In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09) whereas Bias-All simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that measured data at the coarse resolution of the RegCM2 output can be made appropriate for basin-scale modeling through bias correction (essentially a magnitude correction). However, RegCM2 output, even when bias corrected, does not contain the day-to-day variability present in the All-Sta dataset that is necessary for basin-scale modeling. Future work is warranted to identify the causes for systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
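The Nash-Sutcliffe (NS) skill scores quoted above follow the standard definition, sketched here for reference: a score of 1 is a perfect simulation, and 0 means the simulation is no better than predicting the observed mean.

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """NS efficiency: 1 - SSE / variance of the observations."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```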
NASA Astrophysics Data System (ADS)
Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric
2018-02-01
Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.
Accuracy of CT-based attenuation correction in PET/CT bone imaging
NASA Astrophysics Data System (ADS)
Abella, Monica; Alessio, Adam M.; Mankoff, David A.; MacDonald, Lawrence R.; Vaquero, Juan Jose; Desco, Manuel; Kinahan, Paul E.
2012-05-01
We evaluate the accuracy of scaling CT images for attenuation correction of PET data measured for bone. While the standard tri-linear approach has been well tested for soft tissues, the impact of CT-based attenuation correction on the accuracy of tracer uptake in bone has not been reported in detail. We measured the accuracy of attenuation coefficients of bovine femur segments and patient data using a tri-linear method applied to CT images obtained at different kVp settings. Attenuation values at 511 keV obtained with a 68Ga/68Ge transmission scan were used as a reference standard. The impact of inaccurate attenuation images on PET standardized uptake values (SUVs) was then evaluated using simulated emission images and emission images from five patients with elevated levels of FDG uptake in bone at disease sites. The CT-based linear attenuation images of the bovine femur segments underestimated the true values by 2.9 ± 0.3% for cancellous bone regardless of kVp. For compact bone the underestimation ranged from 1.3% at 140 kVp to 14.1% at 80 kVp. In the patient scans at 140 kVp the underestimation was approximately 2% averaged over all bony regions. The sensitivity analysis indicated that errors in PET SUVs in bone are approximately proportional to errors in the estimated attenuation coefficients for the same regions. The variability in SUV bias also increased approximately linearly with the error in linear attenuation coefficients. These results suggest that bias in bone uptake SUVs of PET tracers ranges from 2.4% to 5.9% when using CT scans at 140 and 120 kVp for attenuation correction. Lower kVp scans have the potential for considerably more error in dense bone. This bias is present in any PET tracer with bone uptake but may be clinically insignificant for many imaging tasks. However, errors from CT-based attenuation correction methods should be carefully evaluated if quantitation of tracer uptake in bone is important.
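The tri-linear (often reduced to bilinear) scaling referred to above converts CT numbers to 511 keV linear attenuation coefficients, with a shallower, kVp-dependent slope for bone-like voxels. A minimal sketch follows; the breakpoint and slopes are illustrative values for roughly 120-140 kVp, not the calibration evaluated in this study:

```python
import numpy as np

def hu_to_mu511(hu, break_hu=0.0, soft_slope=9.6e-5, bone_slope=5.1e-5):
    """Bilinear scaling of CT numbers (HU) to 511 keV attenuation (1/cm).
    Below the break, mu = soft_slope * (HU + 1000), so water (HU = 0)
    maps to ~0.096/cm; above it, an assumed shallower bone slope applies.
    All coefficients here are illustrative approximations."""
    hu = np.asarray(hu, float)
    mu_break = soft_slope * (break_hu + 1000.0)
    return np.where(hu <= break_hu,
                    soft_slope * (hu + 1000.0),
                    mu_break + bone_slope * (hu - break_hu))
```

The abstract's central observation follows directly from this construction: an error in the assumed bone slope propagates almost proportionally into the attenuation coefficient, and hence into the SUV.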
NASA Astrophysics Data System (ADS)
Hagemann, Stefan; Chen, Cui; Haerter, Jan O.; Gerten, Dieter; Heinke, Jens; Piani, Claudio
2010-05-01
Future climate model scenarios depend crucially on their adequate representation of the hydrological cycle. Within the European project "Water and Global Change" (WATCH) special care is taken to couple state-of-the-art climate model output to a suite of hydrological models. This coupling is expected to lead to a better assessment of changes in the hydrological cycle. However, due to the systematic model errors of climate models, their output is often not directly applicable as input for hydrological models. Thus, the methodology of a statistical bias correction has been developed, which can be used for correcting climate model output to produce internally consistent fields that have the same statistical intensity distribution as the observations. As observations, global re-analysed daily data of precipitation and temperature are used that are obtained in the WATCH project. We will apply the bias correction to global climate model data of precipitation and temperature from the GCMs ECHAM5/MPIOM, CNRM-CM3 and LMDZ-4, and intercompare the bias corrected data to the original GCM data and the observations. Then, the original and the bias corrected GCM data will be used to force two global hydrology models: (1) the hydrological model of the Max Planck Institute for Meteorology (MPI-HM) consisting of the Simplified Land surface (SL) scheme and the Hydrological Discharge (HD) model, and (2) the dynamic vegetation model LPJmL operated by the Potsdam Institute for Climate Impact Research. The impact of the bias correction on the projected simulated hydrological changes will be analysed, and the resulting behaviour of the two hydrology models will be compared.
Syfert, Mindy M; Smith, Matthew J; Coomes, David A
2013-01-01
Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and necessity of sampling bias correction within MaxEnt.
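A sampling-bias grid of the kind described can be approximated by gridding and smoothing the target-group records (here, all georeferenced vascular plants), so the grid reflects relative survey effort. A minimal sketch with hypothetical parameter names and values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bias_grid(lons, lats, extent, shape=(100, 100), sigma=2.0, floor=1e-6):
    """Build a MaxEnt-style bias grid from target-group record coordinates:
    per-cell counts, Gaussian-smoothed, scaled to (0, 1]. The grid shape,
    smoothing width, and floor are illustrative choices."""
    counts, _, _ = np.histogram2d(
        lons, lats, bins=shape,
        range=[[extent[0], extent[1]], [extent[2], extent[3]]])
    effort = gaussian_filter(counts, sigma=sigma)
    effort = effort / effort.max()       # scale so the busiest cell is 1
    return np.maximum(effort, floor)     # keep weights strictly positive
```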
Calibrating the ECCO ocean general circulation model using Green's functions
NASA Technical Reports Server (NTRS)
Menemenlis, D.; Fu, L. L.; Lee, T.; Fukumori, I.
2002-01-01
Green's functions provide a simple, yet effective, method to test and calibrate General Circulation Model (GCM) parameterizations, to study and quantify model and data errors, to correct model biases and trends, and to blend estimates from different solutions and data products.
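In practice, the Green's-function approach amounts to a linearized least-squares calibration: each column of the kernel is the model's response to a unit perturbation of one parameter, and the parameter update minimizes the model-data misfit. A minimal sketch under that interpretation (all arrays hypothetical):

```python
import numpy as np

def greens_function_calibration(baseline, perturbed_runs, deltas, data):
    """Column j of G is the model response to perturbing parameter j by
    deltas[j] (a finite-difference 'Green's function'); solve for the
    parameter corrections that best explain the model-data misfit."""
    G = np.column_stack([(run - baseline) / d
                         for run, d in zip(perturbed_runs, deltas)])
    misfit = data - baseline
    eta, *_ = np.linalg.lstsq(G, misfit, rcond=None)
    return eta, baseline + G @ eta   # corrections, calibrated estimate
```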
Force-sensed interface for control and training space robot
NASA Astrophysics Data System (ADS)
Moiseev, O. S.; Sarsadskikh, A. S.; Povalyaev, N. D.; Gorbunov, V. I.; Kulakov, F. M.; Vasilev, V. V.
2018-05-01
A method of positional and force-torque control of robots is proposed. Prototypes of the system and the master handle have been created. Algorithms for bias estimation and gravity compensation of the force-torque sensor, and for force-torque trajectory correction, are described.
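The abstract does not give the algorithm's details, but a common approach to force-torque sensor bias estimation and gravity compensation averages the gravity-compensated readings over several static poses; what remains is the constant sensor bias. A minimal sketch under that assumption (variable names are illustrative):

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity, m/s^2

def estimate_bias(forces, rotations, tool_mass):
    """Average sensor-frame force readings over static poses after removing
    the tool's weight; `rotations` holds sensor-to-world rotation matrices."""
    residuals = [f - R.T @ (tool_mass * GRAVITY)
                 for f, R in zip(forces, rotations)]
    return np.mean(residuals, axis=0)

def compensate(force, rotation, tool_mass, bias):
    """Remove the bias and the tool's weight expressed in the sensor frame."""
    return force - bias - rotation.T @ (tool_mass * GRAVITY)
```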
Coello, Christopher; Willoch, Frode; Selnes, Per; Gjerstad, Leif; Fladby, Tormod; Skretting, Arne
2013-05-15
A voxel-based algorithm to correct for partial volume effect in PET brain volumes is presented. This method (named LoReAn) is based on MRI-based segmentation of anatomical regions and accurate measurements of the effective point spread function of the PET imaging process. The objective is to correct for the spill-out of activity from high-uptake anatomical structures (e.g. grey matter) into low-uptake anatomical structures (e.g. white matter) in order to quantify physiological uptake in the white matter. The new algorithm is presented and validated against the state-of-the-art region-based geometric transfer matrix (GTM) method with synthetic and clinical data. Using synthetic data, both bias and coefficient of variation were improved in the white matter region using LoReAn compared to GTM. An increased number of anatomical regions does not affect the bias (<5%), and misregistration affects the LoReAn and GTM algorithms equally. The LoReAn algorithm appears to be a simple and promising voxel-based algorithm for studying metabolism in white matter regions. Copyright © 2013 Elsevier Inc. All rights reserved.
Natal dispersal in the cooperatively breeding Acorn Woodpecker
Koenig, Walter D.; Hooge, P.N.; Stanback, M.T.; Haydock, J.
2000-01-01
Dispersal data are inevitably biased toward short-distance events, often highly so. We illustrate this problem using our long-term study of Acorn Woodpeckers (Melanerpes formicivorus) in central coastal California. Estimating the proportion of birds disappearing from the study area and correcting for detectability within the maximum observable distance are the first steps toward achieving a realistic estimate of dispersal distributions. Unfortunately, there is generally no objective way to determine the fates of birds not accounted for by these procedures, much less estimating the distances they may have moved. Estimated mean and root-mean-square dispersal distances range from 0.22-2.90 km for males and 0.53-9.57 km for females depending on what assumptions and corrections are made. Three field methods used to help correct for bias beyond the limits of normal study areas include surveying alternative study sites, expanding the study site (super study sites), and radio-tracking dispersers within a population. All of these methods have their limitations or can only be used in special cases. New technologies may help alleviate this problem in the near future. Until then, we urge caution in interpreting observed dispersal data from all but the most isolated of avian populations.
Müller, Jörg M; Furniss, Tilman
2013-11-30
The often-reported low informant agreement about child psychopathology between multiple informants has led to various suggestions about how to address discrepant ratings. Among the factors discussed as potentially lowering agreement are informant credibility, reliability, and psychopathology; the last of these is the focus of this paper. We tested three different models, namely, the accuracy, the distortion, and an integrated so-called combined model, that conceptualize parental ratings to assess child psychopathology. The data comprise ratings of child psychopathology from multiple informants (mother, therapist and kindergarten teacher) and ratings of maternal psychopathology. The children were patients in a preschool psychiatry unit (N=247). The results from structural equation modeling show that maternal ratings of child psychopathology were biased by maternal psychopathology (distortion model). On this statistical basis, we suggest a method to adjust biased maternal ratings. We illustrate the maternal bias by comparing maternal ratings to expert ratings (combined kindergarten teacher and therapist ratings) and show that the correction equation increases the agreement between maternal and expert ratings. We conclude that this approach may help to reduce misclassification of preschool children as 'clinical' on the basis of biased maternal ratings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Application of Pressure-Based Wall Correction Methods to Two NASA Langley Wind Tunnels
NASA Technical Reports Server (NTRS)
Iyer, V.; Everhart, J. L.
2001-01-01
This paper is a description and status report on the implementation and application of the WICS wall interference method to the National Transonic Facility (NTF) and the 14 x 22-ft subsonic wind tunnel at the NASA Langley Research Center. The method calculates free-air corrections to the measured parameters and aerodynamic coefficients for full span and semispan models when the tunnels are in the solid-wall configuration. From a data quality point of view, these corrections remove predictable bias errors in the measurement due to the presence of the tunnel walls. At the NTF, the method is operational in the off-line and on-line modes, with three tests already computed for wall corrections. At the 14 x 22-ft tunnel, initial implementation has been done based on a test on a full span wing. This facility is currently scheduled for an upgrade to its wall pressure measurement system. With the addition of new wall orifices and other instrumentation upgrades, a significant improvement in the wall correction accuracy is expected.
Somayajula, Srikanth Ayyala; Devred, Emmanuel; Bélanger, Simon; Antoine, David; Vellucci, V; Babin, Marcel
2018-04-20
In this study, we report on the performance of satellite-based photosynthetically available radiation (PAR) algorithms used in published oceanic primary production models. The performance of these algorithms was evaluated using buoy observations under clear and cloudy skies, and for the particular case of low sun angles typically encountered at high latitudes or at moderate latitudes in winter. The PAR models consisted of (i) the standard one from the NASA-Ocean Biology Processing Group (OBPG), (ii) the Gregg and Carder (GC) semi-analytical clear-sky model, and (iii) look-up-tables based on the Santa Barbara DISORT atmospheric radiative transfer (SBDART) model. Various combinations of atmospheric inputs, empirical cloud corrections, and semi-analytical irradiance models yielded a total of 13 (11 + 2 developed in this study) different PAR products, which were compared with in situ measurements collected at high frequency (15 min) at a buoy site in the Mediterranean Sea (the "BOUée pour l'acquiSition d'une Série Optique à Long termE," or "BOUSSOLE" site). An objective ranking method applied to the algorithm results indicated that seven PAR products out of 13 were well in agreement with the in situ measurements. Specifically, the OBPG method showed the best overall performance with a root mean square difference (RMSD) (bias) of 19.7% (6.6%) and 10% (6.3%) followed by the look-up-table method with a RMSD (bias) of 25.5% (6.8%) and 9.6% (2.6%) at daily and monthly scales, respectively. Among the four methods based on clear-sky PAR empirically corrected for cloud cover, the Dobson and Smith method consistently underestimated daily PAR while the Budyko formulation overestimated daily PAR. Empirically cloud-corrected methods using cloud fraction (CF) performed better under quasi-clear skies (CF<0.3), with an RMSD (bias) of 9.7%-14.8% (3.6%-11.3%), than under partially clear to cloudy skies (CF≥0.3).
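The RMSD and bias percentages quoted above can be computed as relative statistics; one plausible convention (normalizing by the mean observation, which may differ from the authors' exact formulation) is sketched below:

```python
import numpy as np

def rmsd_bias_percent(est, obs):
    """Relative RMSD and bias as percentages of the mean observation."""
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    scale = 100.0 / obs.mean()
    rmsd = np.sqrt(np.mean((est - obs) ** 2)) * scale
    bias = np.mean(est - obs) * scale
    return rmsd, bias
```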
NASA Technical Reports Server (NTRS)
Milesi, Cristina; Costa-Cabral, Mariza; Rath, John; Mills, William; Roy, Sujoy; Thrasher, Bridget; Wang, Weile; Chiang, Felicia; Loewenstein, Max; Podolske, James
2014-01-01
Water resource managers planning for the adaptation to future events of extreme precipitation now have access to high-resolution downscaled daily projections derived from statistical bias correction and constructed analogs. We also show that along the Pacific Coast the Northern Oscillation Index (NOI) is a reliable predictor of storm likelihood, and therefore a predictor of seasonal precipitation totals and of the likelihood of extremely intense precipitation. Such time series can be used to project intensity-duration curves into the future or as input into stormwater models. However, few climate projection studies have explored the impact of the type of downscaling method used on the range and uncertainty of predictions for local flood protection studies. Here we present a study of future climate flood risk at NASA Ames Research Center, located in the South Bay Area, by comparing the range of predictions of extreme precipitation events calculated from three sets of time series downscaled from CMIP5 data: 1) the Bias Correction Constructed Analogs method dataset downscaled to a 1/8 degree grid (12 km); 2) the Bias Correction Spatial Disaggregation method downscaled to a 1 km grid; 3) a statistical model of extreme daily precipitation events and projected NOI from CMIP5 models. In addition, predicted years of extreme precipitation are used to estimate the risk of overtopping of the retention pond located on the site through simulations with the EPA SWMM hydrologic model. Preliminary results indicate that the intensity of extreme precipitation events is expected to increase, with the potential to flood the NASA Ames retention pond. The results from these estimations will assist flood protection managers in planning infrastructure adaptations.
Variance analysis of forecasted streamflow maxima in a wet temperate climate
NASA Astrophysics Data System (ADS)
Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.
2018-05-01
Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers strictly adhere to rules from extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
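For the return periods quoted (2-, 20- and 100-year events), the annual-maxima route fits a generalized extreme value (GEV) distribution and reads return levels off its upper quantiles; the T-year event is the (1 - 1/T) quantile. A minimal sketch using scipy (not the study's code):

```python
import numpy as np
from scipy.stats import genextreme

def gev_return_levels(annual_maxima, return_periods=(2, 20, 100)):
    """Fit a GEV to annual streamflow maxima and return the T-year levels."""
    c, loc, scale = genextreme.fit(annual_maxima)
    return {T: genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
            for T in return_periods}
```

Refitting across an ensemble of bias-corrected projections and examining the spread of these quantiles is one direct way to see the fitting variance grow with return period, as the abstract reports.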
NASA Astrophysics Data System (ADS)
Eum, H. I.; Cannon, A. J.
2015-12-01
Climate models are a key provider to investigate impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch in spatial resolution between GCMs and regional applications, in particular for a region characterized by complex terrain such as the Korean peninsula. Therefore, a downscaling procedure is essential to assess regional impacts of climate change. Numerous statistical downscaling methods have been used, mainly due to their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. Using a split-sampling scheme, all methods are calibrated with observational station data for the 19 years from 1973 to 1991 and tested on the recent 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distribution, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR, and all methods lead to large improvements in representing all performance metrics. According to the seasonal performance metrics evaluated with TOPSIS, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is recognized as near-perfect climate data in climate studies. Therefore, the ranking of this study may change when various GCMs are downscaled and evaluated. Nevertheless, it may be informative for end-users (i.e. modelers or water resources managers) to understand and select more suitable downscaling methods corresponding to priorities in regional applications.
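TOPSIS itself is compact enough to sketch: alternatives (here, downscaling methods) are scored by closeness to an ideal solution after normalizing and weighting the decision matrix of performance metrics. A minimal implementation, not the authors' code:

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives (rows) over criteria (columns). `benefit[j]` is
    True when larger is better for criterion j. Returns the relative
    closeness to the ideal solution; higher is better."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector normalization
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)
```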
NASA Astrophysics Data System (ADS)
Liu, Zhao; Zheng, Chaorong; Wu, Yue
2018-02-01
A boundary layer profiler (BLP), operated in the Doppler beam swinging mode, was recently installed by the government in a coastal area of China to acquire wind field information in the atmospheric boundary layer for several purposes. Here, the performance of the BLP under strong wind conditions is evaluated. It is found that, even though the quality-controlled BLP data show good agreement with the balloon observations, a systematic bias can always be found in the BLP data. For low wind velocities, the BLP data tend to overestimate the atmospheric wind; however, as wind velocity increases, the BLP data show a tendency of underestimation. In order to remove the effect of poor-quality data on the bias correction, the probability distribution of the differences between the two instruments is examined, and the t location-scale distribution is found to be the most suitable probability model among those compared. After outliers with a large discrepancy, lying outside the 95% confidence interval of the t location-scale distribution, are discarded, the systematic bias can be successfully corrected using a first-order polynomial correction function. The bias correction methodology used in this study can serve as a reference for correcting other wind-profiling radars, and it lays a solid basis for further analysis of the wind profiles.
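The correction pipeline described, fitting a t location-scale model to the instrument differences, filtering by its 95% interval, then removing a first-order polynomial bias, can be sketched as follows (variable names are illustrative; scipy's three-parameter t fit is exactly a t location-scale fit):

```python
import numpy as np
from scipy import stats

def correct_blp(blp, balloon):
    """Correct BLP winds against balloon observations: t location-scale
    fit to the differences, 95%-interval outlier rejection, then a
    first-order polynomial bias removed as a function of wind velocity."""
    diff = blp - balloon
    df, loc, scale = stats.t.fit(diff)                  # t location-scale fit
    lo, hi = stats.t.interval(0.95, df, loc=loc, scale=scale)
    keep = (diff >= lo) & (diff <= hi)                  # drop large outliers
    coeffs = np.polyfit(blp[keep], diff[keep], deg=1)   # bias vs. velocity
    return blp - np.polyval(coeffs, blp)                # corrected winds
```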
Janmaat, Cynthia J; van Diepen, Merel; Krediet, Raymond T; Hemmelder, Marc H; Dekker, Friedo W
2017-01-01
Purpose Current clinical guidelines recommend initiating dialysis in the presence of symptoms or signs attributable to kidney failure, often with a glomerular filtration rate (GFR) of 5–10 mL/min/1.73 m2. Little evidence exists about the optimal kidney function to start dialysis. Thus far, most observational studies have been limited by lead-time bias. Only a few studies have accounted for lead-time bias, and showed contradictory results. We examined the effect of GFR at dialysis initiation on survival in chronic kidney disease patients, and the role of lead-time bias therein. We used both kidney function based on 24-hour urine collection (measured GFR [mGFR]) and estimated GFR (eGFR). Materials and methods A total of 1,143 patients with eGFR data at dialysis initiation and 852 patients with mGFR data were included from the NECOSAD cohort. Cox regression was used to adjust for potential confounders. To examine the effect of lead-time bias, survival was counted from the time of dialysis initiation or from a common starting point (GFR 20 mL/min/1.73 m2), using linear interpolation models. Results Without lead-time correction, no difference between early and late starters was present based on eGFR (hazard ratio [HR] 1.03, 95% confidence interval [CI] 0.81–1.3). However, after lead-time correction, early initiation showed a survival disadvantage (HR between 1.1 [95% CI 0.82–1.48] and 1.33 [95% CI 1.05–1.68]). Based on mGFR, the potential survival benefit for early starters without lead-time correction (HR 0.8, 95% CI 0.62–1.03) completely disappeared after lead-time correction (HR between 0.94 [95% CI 0.65–1.34] and 1.21 [95% CI 0.95–1.56]). Dialysis start time differed about a year between early and late initiation. Conclusion Lead-time bias is not only a methodological problem but also has clinical impact when assessing the optimal kidney function to start dialysis. Therefore, lead-time bias is extremely important to correct for. Taking account of lead-time bias, this controlled study showed that early dialysis initiation (eGFR >7.9, mGFR >6.6 mL/min/1.73 m2) was not associated with an improvement in survival. Based on kidney function, this study suggests that in some patients, dialysis could be started even later than an eGFR <5.7 and mGFR <4.3 mL/min/1.73 m2. PMID:28442934
Cognitive determinants of affective forecasting errors
Hoerger, Michael; Quirk, Stuart W.; Lucas, Richard E.; Carr, Thomas H.
2011-01-01
Often to the detriment of human decision making, people are prone to an impact bias when making affective forecasts, overestimating the emotional consequences of future events. The cognitive processes underlying the impact bias, and methods for correcting it, have been debated and warrant further exploration. In the present investigation, we examined both individual differences and contextual variables associated with cognitive processing in affective forecasting for an election. Results showed that the perceived importance of the event and working memory capacity were both associated with an increased impact bias for some participants, whereas retrieval interference had no relationship with bias. Additionally, an experimental manipulation effectively reduced biased forecasts, particularly among participants who were most distracted thinking about peripheral life events. These findings have direct theoretical implications for understanding the impact bias, highlight the importance of individual differences in affective forecasting, and have ramifications for future decision making research. The possible functional role of the impact bias is discussed within the context of evolutionary psychology. PMID:21912580
Tang, Jian; Jiang, Xiaoliang
2017-01-01
Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function carry a multiplicative factor that estimates the bias field in the transformed domain, and the bias field prior is fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved jointly. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
NURD: an implementation of a new method to estimate isoform expression from non-uniform RNA-seq data
2013-01-01
Background RNA-Seq technology has been used widely in transcriptome studies, and one of the most important applications is to estimate the expression level of genes and their alternative splicing isoforms. Several algorithms have been published to estimate expression based on different models. Recently Wu et al. published a method that can accurately estimate isoform-level expression by considering position-related sequencing biases using nonparametric models. The method has advantages in handling different read distributions, but there has not been an efficient program implementing this algorithm. Results We developed an efficient implementation of the algorithm in the program NURD. It uses a binary interval search algorithm. The program can correct both the global tendency of sequencing bias in the data and local sequencing bias specific to each gene. The correction makes the isoform expression estimation more reliable under various read distributions. The implementation is computationally efficient in both memory cost and running time and can be readily scaled up for huge datasets. Conclusion NURD is an efficient and reliable tool for estimating isoform expression levels. Given the read mapping results and a gene annotation file, NURD outputs the expression estimates. The package is freely available for academic use at http://bioinfo.au.tsinghua.edu.cn/software/NURD/. PMID:23837734
Postprocessing for Air Quality Predictions
NASA Astrophysics Data System (ADS)
Delle Monache, L.
2017-12-01
In recent years, air quality (AQ) forecasting has made significant progress towards better predictions with the goal of protecting the public from harmful pollutants. This progress is the result of improvements in weather and chemical transport models, their coupling, and more accurate emission inventories (e.g., with the development of new algorithms to account in near real-time for fires). Nevertheless, AQ predictions are still affected at times by significant biases which stem from limitations in both weather and chemistry transport models. Those are the result of numerical approximations and the poor representation (and understanding) of important physical and chemical processes. Moreover, although the quality of emission inventories has been significantly improved, they are still one of the main sources of uncertainty in AQ predictions. For operational real-time AQ forecasting, a significant portion of these biases can be reduced with the implementation of postprocessing methods. We will review some of the techniques that have been proposed to reduce both systematic and random errors of AQ predictions, and improve the correlation between predictions and observations of ground-level ozone and surface particulate matter less than 2.5 µm in diameter (PM2.5). These methods, which can be applied to both deterministic and probabilistic predictions, include simple bias-correction techniques, corrections inspired by the Kalman filter, regression methods, and the more recently developed analog-based algorithms. These approaches will be compared and contrasted, and the strengths and weaknesses of each will be discussed.
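Among the techniques listed, the Kalman-filter-inspired correction is the easiest to sketch: a scalar filter tracks the running systematic error of the forecast and subtracts it from the next prediction. A minimal version, with an assumed process-to-observation error-variance ratio (a tuning choice, not a value from this talk):

```python
import numpy as np

def kf_bias_correction(forecasts, observations, ratio=0.01):
    """Recursive (Kalman-filter-like) estimate of the systematic forecast
    error, updated each step and subtracted from the next raw forecast."""
    bias, p = 0.0, 1.0
    corrected = np.empty(len(forecasts), dtype=float)
    for t, (f, o) in enumerate(zip(forecasts, observations)):
        corrected[t] = f - bias            # apply the current bias estimate
        p = p + ratio                      # predict the error variance
        k = p / (p + 1.0)                  # Kalman gain
        bias = bias + k * ((f - o) - bias) # update with today's error
        p = (1.0 - k) * p
    return corrected
```

Because the gain adapts over time, the estimate responds to slowly drifting biases, which is the main advantage over a fixed mean-bias subtraction.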
A bias-corrected estimator in multiple imputation for missing data.
Tomita, Hiroaki; Fujisawa, Hironori; Henmi, Masayuki
2018-05-29
Multiple imputation (MI) is one of the most popular methods to deal with missing data, and its use has been rapidly increasing in medical studies. Although MI is rather appealing in practice since it is possible to use ordinary statistical methods for a complete data set once the missing values are fully imputed, the method of imputation is still problematic. If the missing values are imputed from some parametric model, the validity of imputation is not necessarily ensured, and the final estimate for a parameter of interest can be biased unless the parametric model is correctly specified. Nonparametric methods have also been proposed for MI, but it is not straightforward to produce imputation values from nonparametrically estimated distributions. In this paper, we propose a new method for MI to obtain a consistent (or asymptotically unbiased) final estimate even if the imputation model is misspecified. The key idea is to use an imputation model from which the imputation values are easily produced and to make a proper correction in the likelihood function after the imputation by using the density ratio between the imputation model and the true conditional density function for the missing variable as a weight. Although the conditional density must be nonparametrically estimated, it is not used for the imputation. The performance of our method is evaluated by both theory and simulation studies. A real data analysis is also conducted to illustrate our method by using the Duke Cardiac Catheterization Coronary Artery Disease Diagnostic Dataset. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Menard, Richard; Ortland, David; Einaudi, Franco (Technical Monitor)
2001-01-01
A new approach to the analysis of systematic and random observation errors is presented in which the error statistics are obtained using forecast data rather than observations from a different instrument type. The analysis is carried out at an intermediate retrieval level, instead of the more typical state variable space. This method is applied to measurements made by the High Resolution Doppler Imager (HRDI) on board the Upper Atmosphere Research Satellite (UARS). HRDI, a limb sounder, is the only satellite instrument measuring winds in the stratosphere, and the only instrument of any kind making global wind measurements in the upper atmosphere. HRDI measures Doppler shifts in two different O2 absorption bands (gamma and B), and the retrieved products are the tangent-point line-of-sight wind component (level 2 retrieval) and u, v winds (level 3 retrieval). This analysis is carried out on a level 1.9 retrieval, in which the contributions from different points along the line of sight have not been removed. Biases are calculated from O-F (observed minus forecast) LOS wind components and are separated into a measurement parameter space consisting of 16 different values. The bias dependence on these parameters (plus an altitude dependence) is used to create a bias correction scheme carried out on the level 1.9 retrieval. The random error component is analyzed by separating the gamma and B band observations and locating observation pairs where both bands are very nearly looking at the same location at the same time. It is shown that the two observation streams are uncorrelated and that this allows the forecast error variance to be estimated. The bias correction is found to cut the effective observation error variance in half.
NASA Astrophysics Data System (ADS)
Zhang, Guoguang; Yu, Zitian; Wang, Junmin
2017-03-01
Yaw rate is a crucial signal for the motion control systems of ground vehicles, yet it may be contaminated by sensor bias. In order to correct the contaminated yaw rate signal and estimate the sensor bias, a robust gain-scheduling observer is proposed in this paper. First, a two-degree-of-freedom (2DOF) vehicle lateral and yaw dynamic model is presented, and a Luenberger-like observer is proposed. To make the observer more applicable to real vehicle driving operations, a 2DOF vehicle model with uncertainties on the coefficients of tire cornering stiffness is employed. Further, a gain-scheduling approach and a robustness enhancement are introduced, leading to a robust gain-scheduling observer. A sensor bias detection mechanism is also designed. Case studies are conducted using an electric ground vehicle to assess the performance of signal correction and sensor bias estimation under different scenarios.
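The paper's robust gain-scheduling design is not reproduced in the abstract, but the core idea, estimating a constant sensor bias with a Luenberger-like observer on an augmented state, can be sketched on a simplified first-order yaw model (an assumed stand-in for the 2DOF model; all gains are hand-tuned illustrations):

```python
import numpy as np

# Assumed yaw dynamics r' = -a*r + b*delta, discretized with step dt;
# the sensor reads y = r + bias, with the bias modeled as constant.
a, b, dt = 2.0, 5.0, 0.01
A = np.array([[1 - a * dt, 0.0],     # state x = [yaw rate, sensor bias]
              [0.0,        1.0]])
B = np.array([b * dt, 0.0])
C = np.array([1.0, 1.0])             # measurement mixes rate and bias
L = np.array([0.4, 0.2])             # hand-tuned Luenberger gain (assumption)

x_true = np.array([0.0, 0.3])        # true sensor bias: 0.3 rad/s
x_hat = np.zeros(2)
for k in range(2000):
    delta = 0.05 * np.sin(0.5 * k * dt)         # steering excitation
    y = C @ x_true                              # contaminated measurement
    x_hat = A @ x_hat + B * delta + L * (y - C @ x_hat)
    x_true = A @ x_true + B * delta
print(f"estimated bias: {x_hat[1]:.3f}")        # converges near 0.3
```

Persistent excitation from the steering input is what makes the rate and the bias separable; the corrected signal is then simply the measurement minus the estimated bias.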
Correction of aeroheating-induced intensity nonuniformity in infrared images
NASA Astrophysics Data System (ADS)
Liu, Li; Yan, Luxin; Zhao, Hui; Dai, Xiaobing; Zhang, Tianxu
2016-05-01
Aeroheating-induced intensity nonuniformity severely degrades the effective performance of an infrared (IR) imaging system in high-speed flight. In this paper, we propose a new approach to the correction of intensity nonuniformity in IR images. The basic assumption is that the low-frequency intensity bias is additive and smoothly varying, so that it can be modeled as a bivariate polynomial and estimated by using an isotropic total variation (TV) model. A half-quadratic penalty method is applied to the isotropic form of the TV discretization, and an alternating minimization algorithm is adopted for solving the optimization model. Experimental results on simulated and real aerothermal images show that the proposed correction method can effectively improve IR image quality.
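The additive, smoothly varying bias assumption can be illustrated with a plain least-squares bivariate-polynomial fit; the paper's TV-regularized estimator is more sophisticated, so treat this as a baseline sketch only:

```python
import numpy as np

def remove_polynomial_bias(image, degree=2):
    """Fit a bivariate polynomial (all terms x^i * y^j with i + j <= degree)
    to the image by least squares and subtract it, keeping the mean level;
    a simple stand-in for the TV-based bias estimate described above."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x.ravel() / w                  # normalize coordinates to [0, 1)
    y = y.ravel() / h
    terms = [x ** i * y ** j
             for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    A = np.column_stack(terms)
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    bias = (A @ coeffs).reshape(h, w)
    return image - bias + bias.mean()  # preserve the overall intensity level
```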
NASA Astrophysics Data System (ADS)
Yang, T.; Lee, C.
2017-12-01
The biases in Global Circulation Models (GCMs) are crucial for understanding future climate changes. Currently, most bias correction methodologies suffer from the assumption that model bias is stationary. This paper provides a non-stationary bias correction model, termed the Residual-based Bagging Tree (RBT) model, to reduce simulation biases and to quantify the contributions of single models. Specifically, the proposed model estimates the residuals between individual models and observations, and takes the differences between observations and the ensemble mean into consideration during the model training process. A case study is conducted for 10 major river basins in Mainland China during different seasons. Results show that the proposed model is capable of providing accurate and stable predictions while incorporating non-stationarity into the modeling framework. Significant reductions in both bias and root mean squared error are achieved with the proposed RBT model, especially for the central and western parts of China. The proposed RBT model consistently performs better in reducing biases than the raw ensemble mean, the ensemble mean with simple additive bias correction, and the single best model for different seasons. Furthermore, the contribution of each single GCM to reducing the overall bias is quantified; the single-model importance varies between 3.1% and 7.2%. For different future scenarios (RCP 2.6, RCP 4.5, and RCP 8.5), the results from the RBT model suggest temperature increases of 1.44 °C, 2.59 °C, and 4.71 °C by the end of the century, respectively, when compared to the average temperature during 1970 - 1999.
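In the spirit of the RBT model (though not the authors' implementation), bagged regression trees can be trained on the residual between observations and the multi-model ensemble mean, then used to correct new ensemble output. A minimal scikit-learn sketch; `gcm_matrix` is a hypothetical array with one column per GCM:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

def fit_residual_bagging(gcm_matrix, observations):
    """Train bagged regression trees (the sklearn default base estimator)
    on the residual between observations and the ensemble mean."""
    ens_mean = gcm_matrix.mean(axis=1)
    residual = observations - ens_mean
    model = BaggingRegressor(n_estimators=200, random_state=0)
    model.fit(gcm_matrix, residual)
    return model

def correct(model, gcm_matrix):
    """Bias-corrected prediction: ensemble mean plus predicted residual."""
    return gcm_matrix.mean(axis=1) + model.predict(gcm_matrix)
```

Because the trees see the individual GCM columns as features, permutation-style importance measures give one route to the per-model contributions the abstract quantifies.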
Ferrazzi, Giulio; Kuklisova Murgasova, Maria; Arichi, Tomoki; Malamateniou, Christina; Fox, Matthew J; Makropoulos, Antonios; Allsop, Joanna; Rutherford, Mary; Malik, Shaihan; Aljabar, Paul; Hajnal, Joseph V
2014-11-01
There is growing interest in exploring fetal functional brain development, particularly with Resting State fMRI. However, during a typical fMRI acquisition, the womb moves due to maternal respiration and the fetus may perform large-scale and unpredictable movements. Conventional fMRI processing pipelines, which assume that brain movements are infrequent or at least small, are not suitable. Previous published studies have tackled this problem by adopting conventional methods and discarding as much as 40% or more of the acquired data. In this work, we developed and tested a processing framework for fetal Resting State fMRI, capable of correcting gross motion. The method comprises bias field and spin history corrections in the scanner frame of reference, combined with slice to volume registration and scattered data interpolation to place all data into a consistent anatomical space. The aim is to recover an ordered set of samples suitable for further analysis using standard tools such as Group Independent Component Analysis (Group ICA). We have tested the approach using simulations and in vivo data acquired at 1.5 T. After full motion correction, Group ICA performed on a population of 8 fetuses extracted 20 networks, 6 of which were identified as matching those previously observed in preterm babies. Copyright © 2014 Elsevier Inc. All rights reserved.
Zhou, Yongxin; Bai, Jing
2007-01-01
A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the following FC segmentation. Original techniques are proposed to estimate the necessary initial parameters of the FC segmentation. Further, the result of the FC segmentation is utilized to initialize the subsequent PABIC algorithm. Finally, we re-apply the FC technique on the PABIC-corrected MRI to get the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as its limitations. Being a fully automatic method, it is expected to find wide applications, such as three-dimensional visualization, radiation therapy planning, and medical database construction.
MR coil sensitivity inhomogeneity correction for plaque characterization in carotid arteries
NASA Astrophysics Data System (ADS)
Salvado, Olivier; Hillenbrand, Claudia; Suri, Jasjit; Wilson, David L.
2004-05-01
We are involved in a comprehensive program to characterize atherosclerotic disease using multiple MR images having different contrast mechanisms (T1W, T2W, PDW, magnetization transfer, etc.) of human carotid and animal model arteries. We use specially designed intravascular and surface array coils that give high signal-to-noise but suffer from sensitivity inhomogeneity. With carotid surface coils, challenges include: (1) a steep bias field with an 80% change; (2) presence of nearby muscular structures lacking high frequency information to distinguish bias from anatomical features; (3) many confounding zero-valued voxels subject to fat suppression, blood flow cancellation, or air, which are not subject to coil sensitivity; and (4) substantial noise. Bias was corrected using a modification of the adaptive fuzzy c-means method reported by Pham et al. (IEEE TMI, 18:738-752), whereby a bias field modeled as a mechanical membrane was iteratively improved until cluster means no longer changed. Because our images were noisy, we added a noise reduction filtering step between iterations and used about 5 classes. In a digital phantom having a bias field measured from our MR system, variations across an area comparable to a carotid artery were reduced from 50% to <5% with processing. Human carotid images were qualitatively improved and large regions of skeletal muscle were relatively flat. Other commonly applied techniques failed to segment the images or introduced strong edge artifacts. Current evaluations include comparisons to bias as measured by a body coil in human MR images.
Berry, Christopher M; Zhao, Peng
2015-01-01
Predictive bias studies have generally suggested that cognitive ability test scores overpredict job performance of African Americans, meaning these tests are not predictively biased against African Americans. However, at least 2 issues call into question existing over-/underprediction evidence: (a) a bias identified by Aguinis, Culpepper, and Pierce (2010) in the intercept test typically used to assess over-/underprediction and (b) a focus on the level of observed validity instead of operational validity. The present study developed and utilized a method of assessing over-/underprediction that draws on the math of subgroup regression intercept differences, does not rely on the biased intercept test, allows for analysis at the level of operational validity, and can use meta-analytic estimates as input values. Therefore, existing meta-analytic estimates of key parameters, corrected for relevant statistical artifacts, were used to determine whether African American job performance remains overpredicted at the level of operational validity. African American job performance was typically overpredicted by cognitive ability tests across levels of job complexity and across conditions wherein African American and White regression slopes did and did not differ. Because the present study does not rely on the biased intercept test and because appropriate statistical artifact corrections were carried out, the present study's results are not affected by the 2 issues mentioned above. The present study represents strong evidence that cognitive ability tests generally overpredict job performance of African Americans. (c) 2015 APA, all rights reserved.
Measuring Variability in the Presence of Noise
NASA Astrophysics Data System (ADS)
Welsh, W. F.
Quantitative measurement of a variable signal in the presence of noise requires very careful attention to subtle effects which can easily bias the measurements. This is not limited to the low count-rate regime, nor is the bias error necessarily small. In this talk I will mention some of the dangers in applying standard techniques which are appropriate for high signal-to-noise data but fail in cases where the S/N is low. I will discuss methods for correcting the bias in these cases, both for periodic and non-periodic variability, and will introduce the concept of the "filtered de-biased RMS". I will also illustrate some common abuses of power spectrum interpretation. All of these points will be illustrated with examples from recent work on CV and AGN variability.
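A common form of the "de-biased RMS" (this talk's filtered variant is not specified in the abstract) subtracts the mean measurement variance from the observed scatter, so pure noise yields approximately zero rather than a spurious variability signal. A minimal sketch under that reading:

```python
import numpy as np

def debiased_rms(flux, flux_err):
    """Excess (de-biased) RMS: observed variance minus the mean measurement
    variance, clipped at zero where noise dominates the scatter."""
    excess = np.var(flux, ddof=1) - np.mean(np.asarray(flux_err) ** 2)
    return np.sqrt(max(excess, 0.0))
```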
Isotope pattern deconvolution as a tool to study iron metabolism in plants.
Rodríguez-Castrillón, José Angel; Moldovan, Mariella; García Alonso, J Ignacio; Lucena, Juan José; García-Tomé, Maria Luisa; Hernández-Apaolaza, Lourdes
2008-01-01
Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample.
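The deconvolution step itself is a small least-squares problem: express the measured pattern as a mixture of the natural and tracer patterns and fit the mixing fractions. A minimal sketch using the natural Fe abundances and a hypothetical spike pattern; the simultaneous internal mass-bias correction of the full method is omitted here.

```python
import numpy as np

# Natural Fe isotope abundances (54, 56, 57, 58); the enriched-tracer pattern
# below is hypothetical -- a real spike certificate would be used in practice.
NATURAL = np.array([0.05845, 0.91754, 0.02119, 0.00282])
SPIKE_57 = np.array([0.0005, 0.0020, 0.9950, 0.0025])   # assumed ~99.5% 57Fe

def deconvolve(measured):
    """Least-squares isotope pattern deconvolution.

    Models the measured pattern as a linear mixture of the natural and tracer
    patterns; the ratio of the fitted fractions is the tracer/tracee ratio.
    """
    A = np.column_stack([NATURAL, SPIKE_57])
    fractions, *_ = np.linalg.lstsq(A, measured, rcond=None)
    x_nat, x_spk = fractions
    return x_spk / x_nat   # tracer/tracee molar ratio

# A sample that is 10% spike-derived iron by moles:
mixture = 0.9 * NATURAL + 0.1 * SPIKE_57
print(deconvolve(mixture))   # ~0.111 (= 0.1/0.9)
```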
On the importance of avoiding shortcuts in applying cognitive models to hierarchical data.
Boehm, Udo; Marsman, Maarten; Matzke, Dora; Wagenmakers, Eric-Jan
2018-06-12
Psychological experiments often yield data that are hierarchically structured. A number of popular shortcut strategies in cognitive modeling do not properly accommodate this structure and can result in biased conclusions. To gauge the severity of these biases, we conducted a simulation study for a two-group experiment. We first considered a modeling strategy that ignores the hierarchical data structure. In line with theoretical results, our simulations showed that Bayesian and frequentist methods that rely on this strategy are biased towards the null hypothesis. Secondly, we considered a modeling strategy that takes a two-step approach by first obtaining participant-level estimates from a hierarchical cognitive model and subsequently using these estimates in a follow-up statistical test. Methods that rely on this strategy are biased towards the alternative hypothesis. Only hierarchical models of the multilevel data lead to correct conclusions. Our results are particularly relevant for the use of hierarchical Bayesian parameter estimates in cognitive modeling.
Attenuation correction for the large non-human primate brain imaging using microPET.
Naidoo-Variawa, S; Lehnert, W; Kassiou, M; Banati, R; Meikle, S R
2010-04-21
Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (Papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a (57)Co transmission point source with a 4% energy window. The optimal energy window for a (68)Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for (57)Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [(18)F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single pass (57)Co (4% energy window) or (68)Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.
Attenuation correction for the large non-human primate brain imaging using microPET
NASA Astrophysics Data System (ADS)
Naidoo-Variawa, S.; Lehnert, W.; Kassiou, M.; Banati, R.; Meikle, S. R.
2010-04-01
Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (Papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a 57Co transmission point source with a 4% energy window. The optimal energy window for a 68Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for 57Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [18F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single pass 57Co (4% energy window) or 68Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.
Electrostatic focal spot correction for x-ray tubes operating in strong magnetic fields.
Lillaney, Prasheel; Shin, Mihye; Hinshaw, Waldo; Fahrig, Rebecca
2014-11-01
A close proximity hybrid x-ray/magnetic resonance (XMR) imaging system offers several critical advantages over current XMR system installations that have large separation distances (∼5 m) between the imaging fields of view. The two imaging systems can be placed in close proximity to each other if an x-ray tube can be designed to be immune to the magnetic fringe fields outside of the MR bore. One of the major obstacles to robust x-ray tube design is correcting for the effects of the MR fringe field on the x-ray tube focal spot. Any fringe field component orthogonal to the x-ray tube electric field leads to electron drift altering the path of the electron trajectories. The method proposed in this study to correct for the electron drift utilizes an external electric field in the direction of the drift. The electric field is created using two electrodes that are positioned adjacent to the cathode. These electrodes are biased with positive and negative potential differences relative to the cathode. The design of the focusing cup assembly is constrained primarily by the strength of the MR fringe field and high voltage standoff distances between the anode, cathode, and the bias electrodes. From these constraints, a focusing cup design suitable for the close proximity XMR system geometry is derived, and a finite element model of this focusing cup geometry is simulated to demonstrate efficacy. A Monte Carlo simulation is performed to determine any effects of the modified focusing cup design on the output x-ray energy spectrum. An orthogonal fringe field magnitude of 65 mT can be compensated for using bias voltages of +15 and -20 kV. These bias voltages are not sufficient to completely correct for larger orthogonal field magnitudes. Using active shielding coils in combination with the bias electrodes provides complete correction at an orthogonal field magnitude of 88.1 mT. Introducing small fields (<10 mT) parallel to the x-ray tube electric field in addition to the orthogonal field does not affect the electrostatic correction technique. However, rotation of the x-ray tube by 30° toward the MR bore increases the parallel magnetic field magnitude (∼72 mT). The presence of this larger parallel field along with the orthogonal field leads to incomplete correction. Monte Carlo simulations demonstrate that the mean energy of the x-ray spectrum is not noticeably affected by the electrostatic correction, but the output flux is reduced by 7.5%. The maximum orthogonal magnetic field magnitude that can be compensated for using the proposed design is 65 mT. Larger orthogonal field magnitudes cannot be completely compensated for because a pure electrostatic approach is limited by the dielectric strength of the vacuum inside the x-ray tube insert. The electrostatic approach also suffers from limitations when there are strong magnetic fields in both the orthogonal and parallel directions because the electrons prefer to stay aligned with the parallel magnetic field. These challenging field conditions can be addressed by using a hybrid correction approach that utilizes both active shielding coils and biasing electrodes.
NASA Astrophysics Data System (ADS)
Saitoh, N.; Hatta, H.; Imasu, R.; Shiomi, K.; Kuze, A.; Niwa, Y.; Machida, T.; Sawa, Y.; Matsueda, H.
2016-12-01
Thermal and Near Infrared Sensor for Carbon Observation (TANSO)-Fourier Transform Spectrometer (FTS) on board the Greenhouse Gases Observing Satellite (GOSAT) has been observing carbon dioxide (CO2) concentrations in several atmospheric layers in the thermal infrared (TIR) band since its launch on 23 January 2009. We have compared TANSO-FTS TIR Version 1 (V1) CO2 data from 2010 to 2012 and CO2 data obtained by the Continuous CO2 Measuring Equipment (CME) installed on several JAL aircraft in the framework of the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project to evaluate bias in the TIR CO2 data in the lower and middle troposphere. Here, we have regarded the CME data obtained during the ascent and descent flights over several airports as part of CO2 vertical profiles there. The comparisons showed that the TIR V1 CO2 data had a negative bias against the CME CO2 data; the magnitude of the bias varied depending on season and latitude. We have estimated bias correction values for the TIR V1 lower and middle tropospheric CO2 data in each latitude band from 40°S to 60°N in each season on the basis of the comparisons with the CME CO2 profiles in limited areas over airports, applied the bias correction values to the TIR V1 CO2 data, and evaluated the quality of the bias-corrected TIR CO2 data globally through comparisons with CO2 data taken from the Nonhydrostatic Icosahedral Atmospheric Model (NICAM)-based Transport Model (TM). The bias-corrected TIR CO2 data showed a better agreement with the NICAM-TM CO2 than the original TIR data, which suggests that the bias correction values estimated in the limited areas are basically applicable to global TIR CO2 data. We have compared XCO2 data calculated from both the original and bias-corrected TIR CO2 data with TANSO-FTS SWIR and NICAM-TM XCO2 data; both the TIR XCO2 data agreed with SWIR and NICAM-TM XCO2 data within 1% except over the Sahara desert and strong source and sink regions.
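The latitude-band-by-season correction can be pictured as a simple lookup-and-subtract on the retrieved concentrations. The sketch below illustrates the mechanics only; the table values, band edges, and season labels are placeholders, not the paper's estimates.

```python
import numpy as np
import pandas as pd

# Hypothetical bias estimates (ppm) per season x latitude band, standing in
# for the values derived from the CONTRAIL CME comparisons over airports.
bias = pd.DataFrame(
    {"DJF": [-1.2, -0.8, -0.5], "JJA": [-0.4, -0.9, -1.1]},
    index=pd.IntervalIndex.from_breaks([-40, 0, 30, 60]),  # latitude bands
)

def correct(co2_ppm, lat, season):
    """Remove the band/season bias estimate from a retrieved CO2 value."""
    row = bias.loc[lat]        # IntervalIndex lookup: the band containing lat
    return co2_ppm - row[season]

print(correct(392.0, lat=35.5, season="DJF"))   # 392.0 - (-0.5) = 392.5
```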
Counteracting estimation bias and social influence to improve the wisdom of crowds.
Kao, Albert B; Berdahl, Andrew M; Hartnett, Andrew T; Lutz, Matthew J; Bak-Coleman, Joseph B; Ioannou, Christos C; Giam, Xingli; Couzin, Iain D
2018-04-01
Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds. © 2018 The Author(s).
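One way to picture a "corrected mean" is to calibrate the crowd's systematic bias on items with known answers and then invert that bias at the new item. The sketch below illustrates this idea under a log-linear bias assumption; it is not the authors' exact aggregation measure.

```python
import numpy as np

def calibrated_estimate(estimates, calib_true, calib_est):
    """Map crowd estimates through an empirical bias model before averaging.

    Fits log(estimate) ~ a + b*log(truth) on calibration items, then inverts
    that mapping at the crowd's mean log estimate, countering systematic
    under/overestimation.
    """
    b, a = np.polyfit(np.log(calib_true), np.log(calib_est), 1)
    mean_log = np.mean(np.log(estimates))
    return np.exp((mean_log - a) / b)   # invert the fitted bias curve

rng = np.random.default_rng(0)
truth = 368
# Simulated crowd: individuals underestimate (exponent 0.95) with lognormal noise
crowd = np.exp(0.95 * np.log(truth) + rng.normal(0, 0.3, 300))
calib_x = np.array([50, 100, 200, 400, 800])
calib_y = np.exp(0.95 * np.log(calib_x))   # calibration items with the same bias
print(np.mean(crowd), calibrated_estimate(crowd, calib_x, calib_y))  # biased vs ~368
```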
Problems and Limitations of Satellite Image Orientation for Determination of Height Models
NASA Astrophysics Data System (ADS)
Jacobsen, K.
2017-05-01
The usual satellite image orientation is based on bias corrected rational polynomial coefficients (RPC). The RPC describe the direct sensor orientation of the satellite images. The locations of the projection centres today are determined without problems, but an accuracy limit is caused by the attitudes. Very high resolution satellites today are very agile, able to change the pointed area over 200 km within 10 to 11 seconds. The corresponding fast attitude acceleration of the satellite may cause a jitter which cannot be expressed by the third order RPC, even if it is recorded by the gyros. Only a correction of the image geometry may help, but usually this will not be done. The first indication of jitter problems is given by systematic errors of the y-parallaxes (py) for the intersection of corresponding points during the computation of ground coordinates. These y-parallaxes have a limited influence on the ground coordinates, but similar problems can be expected for the x-parallaxes, which directly determine the object height. Systematic y-parallaxes are shown for Ziyuan-3 (ZY3), WorldView-2 (WV2), Pleiades, Cartosat-1, IKONOS and GeoEye. Some of them show clear jitter effects; in addition, linear trends of py can be seen. Linear trends in py and tilts of computed height models may be caused by the limited accuracy of the attitude registration, but also by bias correction with affinity transformation. The bias correction is based on ground control points (GCPs). The accuracy of the GCPs usually does not impose limitations, but the identification of the GCPs in the images may be difficult. A 2-dimensional bias-corrected RPC orientation by affinity transformation may cause tilts of the generated height models, but due to large affine image deformations some satellites, such as Cartosat-1, have to be handled with bias correction by affinity transformation. Instead of a 2-dimensional RPC orientation, a 3-dimensional orientation is also possible, which respects the object height more than the 2-dimensional orientation does. The 3-dimensional orientation showed advantages for orientation based on a limited number of GCPs, but in the case of a poor GCP distribution it may also cause negative effects. For some of the satellites used, the bias correction by affinity transformation showed advantages, but for others the bias correction by shift led to a better levelling of the generated height models, even if the root mean square (RMS) differences at the GCPs were larger than for bias correction by affinity transformation. The generated height models can be analyzed and corrected with reference height models. For the data sets used, accurate reference height models are available, but an analysis and correction with the freely available SRTM digital surface model (DSM) or ALOS World 3D (AW3D30) is also possible and leads to similar results. The comparison of the generated height models with the reference DSM shows some height undulations, but the major accuracy influence is caused by tilts of the height models. Some height model undulations reach up to 50% of the ground sampling distance (GSD); this is not negligible, but it is hardly visible in the standard deviations of the height. In any case, an improvement of the generated height models is possible with reference height models. If such corrections are applied, they compensate for possible negative effects of the type of bias correction or of 2-dimensional versus 3-dimensional orientation.
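Bias correction by affinity transformation amounts to fitting a 6-parameter affine mapping between RPC-predicted and measured GCP image coordinates (a shift correction keeps only the mean offset). A minimal sketch with synthetic coordinates:

```python
import numpy as np

def fit_bias_affine(rpc_xy, gcp_xy):
    """Estimate a 6-parameter affine bias correction in image space.

    rpc_xy : image coordinates predicted by the RPCs at the GCP ground positions
    gcp_xy : image coordinates actually measured for those GCPs
    Returns the 2x3 affine matrix minimizing the residuals in a least-squares
    sense. A shift-only correction would instead use the mean residual.
    """
    n = len(rpc_xy)
    A = np.hstack([rpc_xy, np.ones((n, 1))])        # regressors [x, y, 1]
    params, *_ = np.linalg.lstsq(A, gcp_xy, rcond=None)
    return params.T                                 # rows: x' and y' coefficients

def apply_bias(affine, xy):
    return np.hstack([xy, np.ones((len(xy), 1))]) @ affine.T

# Synthetic check: a small shear plus shift is recovered from 5 GCPs
rpc = np.array([[100., 200.], [4000., 250.], [2000., 3000.],
                [150., 3500.], [3800., 3600.]])
true_aff = np.array([[1.0, 1e-4, 2.5], [-5e-5, 1.0, -1.8]])
meas = apply_bias(true_aff, rpc)
print(np.allclose(fit_bias_affine(rpc, meas), true_aff))   # True
```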
An experimental verification of laser-velocimeter sampling bias and its correction
NASA Technical Reports Server (NTRS)
Johnson, D. A.; Modarress, D.; Owen, F. K.
1982-01-01
The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.
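A common interpretation of the sampling statistics is to weight each realization by the inverse of its velocity magnitude, since faster fluid carries proportionally more seed particles through the probe volume per unit time. A sketch of that style of correction (in the spirit of McLaughlin and Tiederman, with the arrival-rate bias simulated directly):

```python
import numpy as np

def bias_corrected_stats(u, v):
    """Velocity statistics weighted by 1/|V| to counter LV sampling bias.

    The arithmetic mean of individual realizations is biased high because
    fast fluid is sampled more often; inverse-magnitude weighting corrects
    the first moment, and the same weights apply to higher moments.
    """
    w = 1.0 / np.maximum(np.hypot(u, v), 1e-9)   # inverse-magnitude weights
    u_mean = np.sum(w * u) / np.sum(w)
    u_var = np.sum(w * (u - u_mean) ** 2) / np.sum(w)
    return u_mean, np.sqrt(u_var)

rng = np.random.default_rng(3)
u = rng.normal(10.0, 4.0, 20000)       # strong turbulence -> visible bias
v = rng.normal(0.0, 4.0, 20000)
rate = np.hypot(u, v)                  # faster fluid is sampled more often
idx = rng.choice(len(u), 20000, p=rate / rate.sum())
print(np.mean(u[idx]), bias_corrected_stats(u[idx], v[idx])[0])  # biased vs ~10
```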
Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference
Shringarpure, Suyash; Xing, Eric P.
2014-01-01
Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that of a geographical/ethnic population, only a small number of individuals are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models for population structure and also propose a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351
ERIC Educational Resources Information Center
Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung
2014-01-01
The simultaneous item bias test (SIBTEST) method regression procedure and the differential item functioning (DIF)-free-then-DIF strategy are applied to the logistic regression (LR) method simultaneously in this study. These procedures are used to adjust the effects of matching true score on observed score and to better control the Type I error…
Projecting climate change impacts on hydrology: the potential role of daily GCM output
NASA Astrophysics Data System (ADS)
Maurer, E. P.; Hidalgo, H. G.; Das, T.; Dettinger, M. D.; Cayan, D.
2008-12-01
A primary challenge facing resource managers in accommodating climate change is determining the range and uncertainty in regional and local climate projections. This is especially important for assessing changes in extreme events, which will drive many of the more severe impacts of a changed climate. Since global climate models (GCMs) produce output at a spatial scale incompatible with local impact assessment, different techniques have evolved to downscale GCM output so locally important climate features are expressed in the projections. We compared skill and hydrologic projections using two statistical downscaling methods and a distributed hydrology model. The downscaling methods are the constructed analogues (CA) and the bias correction and spatial downscaling (BCSD). CA uses daily GCM output, and can thus capture GCM projections for changing extreme event occurrence, while BCSD uses monthly output and statistically generates historical daily sequences. We evaluate the hydrologic impacts projected using downscaled climate (from the NCEP/NCAR reanalysis as a surrogate GCM) for the late 20th century with both methods, comparing skill in projecting soil moisture, snow pack, and streamflow at key locations in the Western United States. We include an assessment of a new method for correcting for GCM biases in a hybrid method combining the most important characteristics of both methods.
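The bias-correction step of BCSD is essentially empirical quantile mapping between model and observed climatologies. A minimal sketch (in practice applied per calendar month and grid cell):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping, the bias-correction step of BCSD.

    Each future model value is assigned its quantile within the historical
    model distribution, then replaced by the observed value at that quantile.
    Values beyond the historical range are clamped by numpy's edge behaviour.
    """
    q = np.linspace(0, 1, 101)
    mq = np.quantile(model_hist, q)     # model climatology quantiles
    oq = np.quantile(obs_hist, q)       # observed climatology quantiles
    pos = np.interp(model_fut, mq, q)   # quantile of each future value
    return np.interp(pos, q, oq)        # mapped onto the observed distribution

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 3.0, 5000)                # "observed" precipitation
model = 1.4 * rng.gamma(2.0, 3.0, 5000) + 1.0  # model output: wet-biased
corrected = quantile_map(model, obs, model)
print(model.mean(), obs.mean(), corrected.mean())   # bias removed in the mean
```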
Gabrani-Juma, Hanif; Clarkin, Owen J; Pourmoghaddas, Amir; Driscoll, Brandon; Wells, R Glenn; deKemp, Robert A; Klein, Ran
2017-01-01
Simple and robust techniques are lacking to assess performance of flow quantification using dynamic imaging. We therefore developed a method to qualify flow quantification technologies using a physical compartment exchange phantom and image analysis tool. We validate and demonstrate utility of this method using dynamic PET and SPECT. Dynamic image sequences were acquired on two PET/CT and a cardiac dedicated SPECT (with and without attenuation and scatter corrections) systems. A two-compartment exchange model was fit to image-derived time-activity curves to quantify flow rates. Flowmeter-measured flow rates (20-300 mL/min) were set prior to imaging and were used as reference truth to which image-derived flow rates were compared. Both PET cameras had excellent agreement with truth ([Formula: see text]). High-end PET had no significant bias (p > 0.05) while lower-end PET had minimal slope bias (wash-in and wash-out slopes were 1.02 and 1.01) but no significant reduction in precision relative to high-end PET (<15% vs. <14% limits of agreement, p > 0.3). SPECT (without scatter and attenuation corrections) slope biases were noted (0.85 and 1.32) and attributed to camera saturation in early time frames. Analysis of wash-out rates from non-saturated, late time frames resulted in excellent agreement with truth ([Formula: see text], slope = 0.97). Attenuation and scatter corrections did not significantly impact SPECT performance. The proposed phantom, software and quality assurance paradigm can be used to qualify imaging instrumentation and protocols for quantification of kinetic rate parameters using dynamic imaging.
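A minimal sketch of the model-fitting step: simulate a time-activity curve from a two-compartment exchange model with an assumed input function, then recover the exchange rates by nonlinear least squares. All shapes and rate values here are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 120, 121)            # s, frame mid-times
dt = t[1] - t[0]
c_in = np.exp(-t / 15.0) * (t / 15.0)   # assumed input (inflow) curve shape

def two_compartment(t, k1, k2):
    """Tissue curve of a two-compartment exchange model: C = k1*(c_in (x) exp(-k2 t))."""
    kernel = np.exp(-k2 * t)
    return k1 * np.convolve(c_in, kernel)[: len(t)] * dt

# Simulate a noisy image-derived time-activity curve, then recover the rates
rng = np.random.default_rng(5)
tac = two_compartment(t, 0.8, 0.05) + rng.normal(0, 0.02, len(t))
(k1, k2), _ = curve_fit(two_compartment, t, tac, p0=(0.5, 0.1))
print(k1, k2)    # ~0.8 and ~0.05 (wash-in and wash-out rates)
```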
NASA Astrophysics Data System (ADS)
Teuho, J.; Johansson, J.; Linden, J.; Saunavaara, V.; Tolvanen, T.; Teräs, M.
2014-01-01
Selection of reconstruction parameters has an effect on image quantification in PET, with an additional contribution from the scanner-specific attenuation correction method. For achieving comparable results in inter- and intra-center comparisons, any existing quantitative differences should be identified and compensated for. In this study, a comparison between PET, PET/CT and PET/MR is performed using an anatomical brain phantom, to identify and measure the amount of bias caused by differences in reconstruction and attenuation correction methods, especially in PET/MR. Differences were estimated by using visual, qualitative and quantitative analysis. The qualitative analysis consisted of a line profile analysis for measuring the reproduction of anatomical structures and the contribution of the number of iterations to image contrast. The quantitative analysis consisted of measurement and comparison of 10 anatomical VOIs, where the HRRT was considered as the reference. All scanners reproduced the main anatomical structures of the phantom adequately, although the image contrast on the PET/MR was inferior when using a default clinical brain protocol. Image contrast was improved by increasing the number of iterations from 2 to 5 while using 33 subsets. Furthermore, a PET/MR-specific bias was detected, which resulted in underestimation of the activity values in anatomical structures closest to the skull, due to the MR-derived attenuation map ignoring bone. Thus, further improvements for the PET/MR reconstruction and attenuation correction could be achieved by optimization of RAMLA-specific reconstruction parameters and inclusion of bone in the attenuation template.
Comparing multilayer brain networks between groups: Introducing graph metrics and recommendations.
Mandke, Kanad; Meier, Jil; Brookes, Matthew J; O'Dea, Reuben D; Van Mieghem, Piet; Stam, Cornelis J; Hillebrand, Arjan; Tewarie, Prejaas
2018-02-01
There is an increasing awareness of the advantages of multi-modal neuroimaging. Networks obtained from different modalities are usually treated in isolation, which is however contradictory to accumulating evidence that these networks show non-trivial interdependencies. Even networks obtained from a single modality, such as frequency-band specific functional networks measured from magnetoencephalography (MEG) are often treated independently. Here, we discuss how a multilayer network framework allows for integration of multiple networks into a single network description and how graph metrics can be applied to quantify multilayer network organisation for group comparison. We analyse how well-known biases for single layer networks, such as effects of group differences in link density and/or average connectivity, influence multilayer networks, and we compare four schemes that aim to correct for such biases: the minimum spanning tree (MST), effective graph resistance cost minimisation, efficiency cost optimisation (ECO) and a normalisation scheme based on singular value decomposition (SVD). These schemes can be applied to the layers independently or to the multilayer network as a whole. For correction applied to whole multilayer networks, only the SVD showed sufficient bias correction. For correction applied to individual layers, three schemes (ECO, MST, SVD) could correct for biases. By using generative models as well as empirical MEG and functional magnetic resonance imaging (fMRI) data, we further demonstrated that all schemes were sensitive to identify network topology when the original networks were perturbed. In conclusion, uncorrected multilayer network analysis leads to biases. These biases may differ between centres and studies and could consequently lead to unreproducible results in a similar manner as for single layer networks. We therefore recommend using correction schemes prior to multilayer network analysis for group comparisons. Copyright © 2017 Elsevier Inc. All rights reserved.
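As an example of one of the four schemes, the MST reduces each weighted network to a fixed-density backbone of n-1 links, removing density and average-connectivity differences between groups. A sketch using scipy, applied per layer (the whole-multilayer variant would operate on the supra-adjacency matrix):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_backbone(weights):
    """Minimum spanning tree backbone of a weighted connectivity matrix.

    csgraph minimizes total edge cost, so strong connections are inverted
    first; the result keeps n-1 links connecting all n nodes, fixing link
    density regardless of the original connectivity level.
    """
    cost = np.zeros_like(weights, dtype=float)
    nz = weights > 0
    cost[nz] = 1.0 / weights[nz]            # strong link -> low cost
    tree = minimum_spanning_tree(cost).toarray()
    return ((tree + tree.T) > 0).astype(float)   # symmetrized backbone

rng = np.random.default_rng(11)
w = rng.random((30, 30))
w = (w + w.T) / 2
np.fill_diagonal(w, 0)
print(mst_backbone(w).sum() / 2)   # 29 links for 30 nodes, whatever the density
```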
NASA Astrophysics Data System (ADS)
Zhu, Q.; Xu, Y. P.; Hsu, K. L.
2017-12-01
A new satellite-based precipitation dataset, Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR), with a long-term time series dating back to 1983, can be a valuable dataset for climate studies. This study investigates the feasibility of using PERSIANN-CDR as a reference dataset for climate studies. Sixteen CMIP5 models are evaluated over the Xiang River basin, southern China, by comparing their performance on precipitation projection and streamflow simulation, particularly on extreme precipitation and streamflow events. The results show PERSIANN-CDR is a valuable dataset for climate studies, even for extreme precipitation events. The precipitation estimates and their extreme events from the CMIP5 models improve significantly relative to rain gauge observations after bias correction with the PERSIANN-CDR precipitation estimates. Of the streamflows simulated with raw and bias-corrected precipitation estimates from the 16 CMIP5 models, 10 out of 16 improve after bias correction. The impact of bias correction on extreme streamflow events is less stable: for only eight of the 16 models can it be clearly claimed that they improve after bias correction. Concerning the performance of the raw CMIP5 models on precipitation, IPSL-CM5A-MR excels among the CMIP5 models, while MRI-CGCM3 outperforms on extreme events owing to its better performance on six extreme precipitation metrics. Case studies also show that raw CCSM4, CESM1-CAM5 and MRI-CGCM3 outperform the other models on streamflow simulation, while MIROC5-ESM-CHEM, MIROC5-ESM and IPSL-CM5A-MR behave better than the other models after bias correction.
Knapp, M
1999-01-01
Spielman and Ewens recently proposed a method for testing a marker for linkage with a disease, which combines data from families with and without information on parental genotypes. For some families without parental-genotype information, it may be possible to reconstruct missing parental genotypes from the genotypes of their offspring. The treatment of such a reconstructed family as if parental genotypes have been typed, however, can introduce bias. In the present study, a new method is presented that employs parental-genotype reconstruction and corrects for the biases resulting from reconstruction. The results of an application of this method to a real data set and of a simulation study suggest that this approach may increase the power to detect linkage. PMID:10053021
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, A. T.; Cannon, A. J.
2015-06-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e., correlation tests) and distributional properties (i.e., tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3 day peak flow and 7 day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational datasets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational dataset. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7 day low flow events, regardless of reanalysis or observational dataset. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis datasets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical datasets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, Arelia T.; Cannon, Alex J.
2016-04-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis data sets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical data sets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.
Permutation importance: a corrected feature importance measure.
Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas
2010-05-15
In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and RandomForest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R. CONTACT: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de. Supplementary data are available at Bioinformatics online.
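A minimal sketch of the PIMP procedure: permute the outcome, retrain, and compare observed importances against the resulting null distribution. This illustrates the idea in Python with scikit-learn, not the authors' R implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pimp_p_values(X, y, n_perm=100, seed=0):
    """Permutation importance (PIMP): p-values for RF feature importances.

    Repeatedly permuting the outcome breaks any X-y association; retraining
    on each permutation yields a per-feature null distribution of importances
    against which the observed importances are compared.
    """
    rng = np.random.default_rng(seed)
    rf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    observed = rf.feature_importances_
    null = np.empty((n_perm, X.shape[1]))
    for i in range(n_perm):
        y_perm = rng.permutation(y)
        null[i] = RandomForestClassifier(
            n_estimators=200, random_state=seed).fit(X, y_perm).feature_importances_
    return (1 + (null >= observed).sum(axis=0)) / (1 + n_perm)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 300) > 0).astype(int)
print(pimp_p_values(X, y, n_perm=50))   # low p for features 0-1, high elsewhere
```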
Transport through correlated systems with density functional theory
NASA Astrophysics Data System (ADS)
Kurth, S.; Stefanucci, G.
2017-10-01
We present recent advances in density functional theory (DFT) for applications in the field of quantum transport, with particular emphasis on transport through strongly correlated systems. We review the foundations of the popular Landauer-Büttiker (LB) + DFT approach. This formalism, when using approximations to the exchange-correlation (xc) potential with steps at integer occupation, correctly captures the Kondo plateau in the zero-bias conductance at zero temperature but completely fails to capture the transition to the Coulomb blockade (CB) regime as the temperature increases. To overcome the limitations of LB + DFT, the quantum transport problem is treated from a time-dependent (TD) perspective using TDDFT, an exact framework to deal with nonequilibrium situations. The steady-state limit of TDDFT shows that in addition to an xc potential in the junction, there also exists an xc correction to the applied bias. Open-shell molecules in the CB regime provide the most striking examples of the importance of the xc bias correction. Using the Anderson model as guidance, we estimate these corrections in the limit of zero bias. For the general case we put forward a steady-state DFT which is based on the one-to-one correspondence between two pairs of basic variables: the steady density on, and the steady current across, the junction, and the local potential on, and the bias across, the junction. Like TDDFT, this framework also leads to both an xc potential in the junction and an xc correction to the bias. Unlike TDDFT, these potentials are independent of history. We highlight the universal features of both the xc potential and the xc bias corrections for junctions in the CB regime and provide an accurate parametrization for the Anderson model at arbitrary temperatures and interaction strengths, thus providing a unified DFT description for both the Kondo and CB regimes and the transition between them.
A New Source Biasing Approach in ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevill, Aaron M; Mosher, Scott W
2012-01-01
The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF-format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF-format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source biasing method, including the application of w̄.
NASA Astrophysics Data System (ADS)
Monico, J. F. G.; De Oliveira, P. S., Jr.; Morel, L.; Fund, F.; Durand, S.; Durand, F.
2017-12-01
Mitigation of ionospheric effects on GNSS (Global Navigation Satellite System) signals is very challenging, especially for GNSS positioning applications based on the SSR (State Space Representation) concept, which requires knowledge of spatially correlated errors at a considerable accuracy level (centimeter). The presence of satellite and receiver hardware biases in GNSS measurements hampers the proper estimation of ionospheric corrections, reducing their physical meaning. This problem can lead to ionospheric corrections that are biased by several meters and often present negative values, which is physically impossible. In this contribution, we discuss a strategy to obtain SSR ionospheric corrections based on GNSS measurements from CORS (Continuous Operation Reference Stations) networks with minimal presence of hardware biases and consequently physical meaning. Preliminary results are presented on the generation and application of such corrections for simulated users located in the Brazilian region under a high level of ionospheric activity.
SU-E-I-07: An Improved Technique for Scatter Correction in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, S; Wang, Y; Lue, K
2014-06-01
Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by insufficient tail information and therefore improve the accuracy of scatter estimation.
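Once the SF has been predicted from the average attenuation coefficient, the scaling itself is a one-line normalization. A toy sketch of that step (sinogram shapes and values are made up):

```python
import numpy as np

def scale_sss(sss_sino, emission_sino, sf_pred):
    """Scale an SSS scatter estimate to a predicted scatter fraction.

    Instead of fitting the SSS shape to scatter-only tails, the estimate is
    scaled so scatter accounts for sf_pred of the total measured counts:
        sum(scatter) = sf_pred * sum(emission).
    """
    scale = sf_pred * emission_sino.sum() / sss_sino.sum()
    return scale * sss_sino

# Shape comes from SSS, amplitude from the predicted scatter fraction
sss = np.exp(-np.linspace(-2, 2, 200) ** 2)    # smooth scatter shape
emission = 1000 * np.ones(200)
scatter = scale_sss(sss, emission, sf_pred=0.35)
print(scatter.sum() / emission.sum())          # 0.35
```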
Backfitting in Smoothing Spline Anova, with Application to Historical Global Temperature Data
NASA Astrophysics Data System (ADS)
Luo, Zhen
In attempting to estimate the temperature history of the earth from surface observations, various biases can arise. An important source of bias is the incompleteness of sampling over both time and space. A few methods have been proposed to deal with this problem. Although they can correct some biases resulting from incomplete sampling, they ignore other significant biases. In this dissertation, a smoothing spline ANOVA approach, a multivariate function estimation method, is proposed to deal simultaneously with various biases resulting from incomplete sampling. A further advantage of this method is that the various components of the estimated temperature history can be obtained with a limited amount of stored information. The method can also be used for detecting erroneous observations in the data base. The method is illustrated through an example of modeling winter surface air temperature as a function of year and location. Extensions to more complicated models are discussed. The linear system associated with the smoothing spline ANOVA estimates is too large to be solved by full matrix decomposition methods. A computational procedure combining the backfitting (Gauss-Seidel) algorithm and the iterative imputation algorithm is proposed. This procedure takes advantage of the tensor product structure in the data to make the computation feasible in an environment of limited memory. Various related issues are discussed, e.g., the computation of confidence intervals and techniques to speed up the convergence of the backfitting algorithm, such as collapsing and successive over-relaxation.
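The backfitting idea is to cycle through the additive components, re-smoothing each against the partial residuals of the others until the estimates stabilize. A toy sketch with a running-mean smoother standing in for smoothing splines:

```python
import numpy as np

def smooth(x, y, width=15):
    """Crude running-mean smoother standing in for a smoothing spline."""
    order = np.argsort(x)
    ys = np.convolve(y[order], np.ones(width) / width, mode="same")
    out = np.empty_like(y)
    out[order] = ys
    return out

def backfit(y, x1, x2, n_iter=20):
    """Backfitting (Gauss-Seidel) for an additive model y = f1(x1) + f2(x2) + e.

    Each component is re-estimated by smoothing the partial residuals that
    remove the current estimates of all other components.
    """
    f1 = np.zeros_like(y)
    f2 = np.zeros_like(y)
    for _ in range(n_iter):
        f1 = smooth(x1, y - y.mean() - f2)   # update f1 on partial residuals
        f1 -= f1.mean()                      # centre for identifiability
        f2 = smooth(x2, y - y.mean() - f1)
        f2 -= f2.mean()
    return f1, f2

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(-2, 2, (2, 400))
y = np.sin(2 * x1) + 0.5 * x2**2 + rng.normal(0, 0.2, 400)
f1, f2 = backfit(y, x1, x2)
print(np.corrcoef(f1, np.sin(2 * x1))[0, 1])   # close to 1
```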
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing-data mechanism, this study suggests a two-stage method to deal with the two mechanisms simultaneously, combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent variable values were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR in the multiple imputation, and a multi-factor analysis model was established. Based on 1 000 resampling iterations, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, the two-stage analysis was conducted. Finally, it was found that the influencing factors on annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication and the acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively deal with non-response bias and selection bias in the dependent variable of survey data.
Kepes, Sven; Bushman, Brad J; Anderson, Craig A
2017-07-01
A large meta-analysis by Anderson et al. (2010) found that violent video games increased aggressive thoughts, angry feelings, physiological arousal, and aggressive behavior and decreased empathic feelings and helping behavior. Hilgard, Engelhardt, and Rouder (2017) reanalyzed the data of Anderson et al. (2010) using newer publication bias methods (i.e., precision-effect test, precision-effect estimate with standard error, p-uniform, p-curve). Based on their reanalysis, Hilgard, Engelhardt, and Rouder concluded that experimental studies examining the effect of violent video games on aggressive affect and aggressive behavior may be contaminated by publication bias, and that these effects are very small when corrected for publication bias. However, the newer methods Hilgard, Engelhardt, and Rouder used may not be the most appropriate. Because publication bias is a potential problem in any scientific domain, we used a comprehensive sensitivity analysis battery to examine the influence of publication bias and outliers on the experimental effects reported by Anderson et al. We used best meta-analytic practices and the triangulation approach to locate the likely position of the true mean effect size estimates. Using this methodological approach, we found that the combined adverse effects of outliers and publication bias were less severe than what Hilgard, Engelhardt, and Rouder found for publication bias alone. Moreover, the obtained mean effects using recommended methods and practices were not very small in size. The results of the methods used by Hilgard, Engelhardt, and Rouder tended not to converge well with the results of the methods we used, indicating potentially poor performance. We therefore conclude that violent video game effects should remain a societal concern. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
An Uncertainty Data Set for Passive Microwave Satellite Observations of Warm Cloud Liquid Water Path
NASA Astrophysics Data System (ADS)
Greenwald, Thomas J.; Bennartz, Ralf; Lebsock, Matthew; Teixeira, João.
2018-04-01
The first extended comprehensive data set of the retrieval uncertainties in passive microwave observations of cloud liquid water path (CLWP) for warm oceanic clouds has been created for practical use in climate applications. Four major sources of systematic errors were considered over the 9-year record of the Advanced Microwave Scanning Radiometer-EOS (AMSR-E): clear-sky bias, cloud-rain partition (CRP) bias, cloud-fraction-dependent bias, and cloud temperature bias. Errors were estimated using a unique merged AMSR-E/Moderate Resolution Imaging Spectroradiometer Level 2 data set as well as observations from the Cloud-Aerosol Lidar with Orthogonal Polarization and the CloudSat Cloud Profiling Radar. To quantify the CRP bias more accurately, a new parameterization was developed to improve the inference of CLWP in warm rain. The cloud-fraction-dependent bias was found to be a combination of the CRP bias, an in-cloud bias, and an adjacent precipitation bias. Globally, the mean net bias was 0.012 kg/m2, dominated by the CRP and in-cloud biases, but with considerable regional and seasonal variation. Good qualitative agreement between a bias-corrected AMSR-E CLWP climatology and ship observations in the Northeast Pacific suggests that the bias estimates are reasonable. However, a possible underestimation of the net bias in certain conditions may be due in part to the crude method used in classifying precipitation, underscoring the need for an independent method of detecting rain in warm clouds. This study demonstrates the importance of combining visible-infrared imager data and passive microwave CLWP observations for estimating uncertainties and improving the accuracy of these observations.
Impact of chlorophyll bias on the tropical Pacific mean climate in an earth system model
NASA Astrophysics Data System (ADS)
Lim, Hyung-Gyu; Park, Jong-Yeon; Kug, Jong-Seong
2017-12-01
Climate modeling groups nowadays develop earth system models (ESMs) by incorporating biogeochemical processes in their climate models. The ESMs, however, often show substantial bias in simulated marine biogeochemistry which can potentially introduce an undesirable bias in physical ocean fields through biogeophysical interactions. This study examines how and how much the chlorophyll bias in a state-of-the-art ESM affects the mean and seasonal cycle of tropical Pacific sea-surface temperature (SST). The ESM used in the present study shows a sizeable positive bias in the simulated tropical chlorophyll. We found that the correction of the chlorophyll bias can reduce the ESM's intrinsic cold SST mean bias in the equatorial Pacific. The biologically-induced cold SST bias is strongly affected by seasonally-dependent air-sea coupling strength. In addition, the correction of chlorophyll bias can improve the annual cycle of SST by up to 25%. This result suggests a possible modeling approach in understanding the two-way interactions between physical and chlorophyll biases by biogeophysical effects.
Partial volume correction and image analysis methods for intersubject comparison of FDG-PET studies
NASA Astrophysics Data System (ADS)
Yang, Jun
2000-12-01
Partial volume effect is an artifact mainly due to the limited resolution of the imaging sensor. It creates bias in the measured activity in small structures and around tissue boundaries. In brain FDG-PET studies, especially in Alzheimer's disease studies where there is serious gray matter atrophy, accurate estimation of the cerebral metabolic rate of glucose is even more problematic due to the large partial volume effect. In this dissertation, we developed a framework enabling inter-subject comparison of partial volume corrected brain FDG-PET studies. The framework is composed of the following image processing steps: (1) MRI segmentation, (2) MR-PET registration, (3) MR-based PVE correction, (4) MR 3D inter-subject elastic mapping. Through simulation studies, we showed that the newly developed partial volume correction methods, either pixel based or ROI based, performed better than previous methods. By applying this framework to a real Alzheimer's disease study, we demonstrated that the partial volume corrected glucose rates vary significantly among the control, at-risk and disease patient groups, and that this framework is a promising tool for assisting early identification of Alzheimer's patients.
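ROI-based partial volume correction is often formulated as a geometric transfer matrix (GTM) inversion; the following 1D toy sketch shows that general approach (not necessarily the dissertation's exact estimator), with an assumed Gaussian PSF.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gtm_correct(pet, rois, fwhm_vox):
    """Geometric-transfer-matrix partial volume correction (ROI level).

    Each ROI mask is blurred with the scanner PSF; entry (i, j) of the GTM is
    the mean of blurred mask j inside ROI i.  Solving GTM @ true = observed
    recovers activity concentrations free of spill-over between regions.
    """
    sigma = fwhm_vox / 2.355
    blurred = [gaussian_filter(m.astype(float), sigma) for m in rois]
    gtm = np.array([[b[m].mean() for b in blurred] for m in rois])
    observed = np.array([pet[m].mean() for m in rois])
    return np.linalg.solve(gtm, observed)

# 1D toy: a hot "gray matter" strip in a cold background, blurred by the PSF
truth = np.ones(100)
truth[40:60] = 4.0
pet = gaussian_filter(truth, 5 / 2.355)
rois = [np.zeros(100, bool), np.zeros(100, bool)]
rois[0][40:60] = True
rois[1][:40] = True
rois[1][60:] = True
print(pet[rois[0]].mean(), gtm_correct(pet, rois, 5))   # biased mean vs ~[4, 1]
```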
Torres, Sergio N; Pezoa, Jorge E; Hayat, Majeed M
2003-10-10
What is to our knowledge a new scene-based algorithm for nonuniformity correction in infrared focal-plane array sensors has been developed. The technique is based on the inverse covariance form of the Kalman filter (KF), which has been reported previously and used in estimating the gain and bias of each detector in the array from scene data. The gain and the bias of each detector in the focal-plane array are assumed constant within a given sequence of frames, corresponding to a certain time and operational conditions, but they are allowed to randomly drift from one sequence to another following a discrete-time Gauss-Markov process. The inverse covariance form filter estimates the gain and the bias of each detector in the focal-plane array and optimally updates them as they drift in time. The estimation is performed with considerably higher computational efficiency than the equivalent KF. The ability of the algorithm in compensating for fixed-pattern noise in infrared imagery and in reducing the computational complexity is demonstrated by use of both simulated and real data.
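A scalar flavor of this per-detector estimation can be sketched with recursive least squares, which is closely related to the inverse-covariance (information) form used above; the forgetting factor plays the role of the Gauss-Markov drift. The sketch assumes the per-frame true irradiance is available, which in the scene-based setting would itself be estimated from the imagery.

```python
import numpy as np

def rls_gain_bias(readings, irradiance, lam=0.99):
    """Recursive least squares for one detector's gain a and bias b: y = a*x + b.

    P is the parameter covariance, updated each frame; the forgetting factor
    lam lets the estimates track slow gain/bias drift between frame blocks.
    """
    theta = np.array([1.0, 0.0])        # initial gain/bias guess
    P = np.eye(2) * 1e3                 # large initial uncertainty
    for y, x in zip(readings, irradiance):
        h = np.array([x, 1.0])          # regressor for this frame
        k = P @ h / (lam + h @ P @ h)   # gain vector
        theta = theta + k * (y - h @ theta)
        P = (P - np.outer(k, h) @ P) / lam
    return theta                        # [gain, bias]

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 500)                      # scene irradiance per frame
y = 1.3 * x + 0.2 + rng.normal(0, 0.01, 500)    # detector with gain, bias, noise
print(rls_gain_bias(y, x))                      # ~[1.3, 0.2]
```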
A Comparison of Three Tests of Mediation
ERIC Educational Resources Information Center
Warbasse, Rosalia E.
2009-01-01
A simulation study was conducted to evaluate the performance of three tests of mediation: the bias-corrected and accelerated bootstrap (Efron & Tibshirani, 1993), the asymmetric confidence limits test (MacKinnon, 2008), and a multiple regression approach described by Kenny, Kashy, and Bolger (1998). The evolution of these methods is reviewed and…
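As an illustration of the first of these tests, SciPy ships a BCa bootstrap (scipy.stats.bootstrap, SciPy ≥ 1.7) that can be applied to a product-of-coefficients indirect effect. The data and the indirect_effect statistic below are synthetic stand-ins, not the study's simulation design.

```python
import numpy as np
from scipy.stats import bootstrap

def indirect_effect(x, m, y):
    """a*b indirect effect: a from regressing m on x, b from regressing y on m and x."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(0)
x = rng.normal(size=200)
m = 0.5 * x + rng.normal(size=200)    # synthetic mediation chain x -> m -> y
y = 0.4 * m + rng.normal(size=200)

res = bootstrap((x, m, y), indirect_effect, paired=True, vectorized=False,
                n_resamples=2000, method='BCa', random_state=rng)
print(res.confidence_interval)        # BCa interval for the indirect effect
```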
Analysis of Developmental Data: Comparison Among Alternative Methods
ERIC Educational Resources Information Center
Wilson, Ronald S.
1975-01-01
To examine the ability of the correction factor epsilon to counteract statistical bias in univariate analysis, an analysis of variance (adjusted by epsilon) and a multivariate analysis of variance were performed on the same data. The results indicated that univariate analysis is a fully protected design when used with epsilon. (JMB)
Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...
When do we care about political neutrality? The hypocritical nature of reaction to political bias
Sulitzeanu-Kenan, Raanan
2018-01-01
Claims and accusations of political bias are common in many countries. The essence of such claims is a denunciation of alleged violations of political neutrality in the context of media coverage, legal and bureaucratic decisions, academic teaching, etc. Yet the acts and messages that give rise to such claims are also embedded within a context of intergroup competition. Thus, in evaluating the seriousness of, and the need for taking corrective action in reaction to, a purported politically biased act, people may consider both the alleged normative violation and the political implications of the act/message for the evaluator's ingroup. The question thus arises whether partisans react similarly to ingroup-aiding and ingroup-harming actions or messages which they perceive as politically biased. In three separate studies, conducted in two countries, we show that political considerations strongly affect partisans' reactions to actions and messages that they perceive as politically biased. Namely, ingroup-harming biased messages/acts are considered more serious and more likely to warrant corrective action than ingroup-aiding biased messages/acts. We conclude by discussing the implications of these findings for the implementation of measures intended to correct and prevent biases, and for the nature of conflict and competition between rival political groups. PMID:29723271
When do we care about political neutrality? The hypocritical nature of reaction to political bias.
Yair, Omer; Sulitzeanu-Kenan, Raanan
2018-01-01
Claims and accusations of political bias are common in many countries. The essence of such claims is a denunciation of alleged violations of political neutrality in the context of media coverage, legal and bureaucratic decisions, academic teaching, etc. Yet the acts and messages that give rise to such claims are also embedded within a context of intergroup competition. Thus, in evaluating the seriousness of, and the need for taking corrective action in reaction to, a purported politically biased act, people may consider both the alleged normative violation and the political implications of the act/message for the evaluator's ingroup. The question thus arises whether partisans react similarly to ingroup-aiding and ingroup-harming actions or messages which they perceive as politically biased. In three separate studies, conducted in two countries, we show that political considerations strongly affect partisans' reactions to actions and messages that they perceive as politically biased. Namely, ingroup-harming biased messages/acts are considered more serious and more likely to warrant corrective action than ingroup-aiding biased messages/acts. We conclude by discussing the implications of these findings for the implementation of measures intended to correct and prevent biases, and for the nature of conflict and competition between rival political groups.
Application of distance correction to ChemCam laser-induced breakdown spectroscopy measurements
Mezzacappa, A.; Melikechi, N.; Cousin, A.; ...
2016-04-04
Laser-induced breakdown spectroscopy (LIBS) provides chemical information from atomic, ionic, and molecular emissions from which geochemical composition can be deciphered. Analysis of LIBS spectra in cases where targets are observed at different distances, as is the case for the ChemCam instrument on the Mars rover Curiosity, which performs analyses at distances between 2 and 7.4 m, is not a simple task. Previously, we showed that spectral distance correction based on a proxy spectroscopic standard created from first-shot dust observations on Mars targets ameliorates the distance bias in multivariate-based elemental-composition predictions of laboratory data. In this work, we correct an expanded set of neutral and ionic spectral emissions for distance bias in the ChemCam data set. By using and testing different selection criteria to generate multiple proxy standards, we find a correction that minimizes the difference in spectral intensity measured at two different distances and increases spectral reproducibility. When the quantitative performance of distance correction is assessed, there is improvement for SiO2, Al2O3, CaO, FeOT, Na2O and K2O, that is, for most of the major rock-forming elements, and for the total major-element weight percent predicted. For MgO, however, the method does not provide improvements, while for TiO2 it yields inconsistent results. Additionally, we observed that many emission lines do not behave consistently with distance, as evidenced by laboratory analogue measurements and ChemCam data. This limits the effectiveness of the method.
File                   Description                               Alias                  Format
prepbufr                                                                                BUFR
biascr.$CDUMP.$CDATE   Time-dependent sat bias correction file   abias                  text
satang.$CDUMP.$CDATE   Angle-dependent sat bias correction       satang                 text
sfcanl.$CDUMP.$CDATE   Surface analysis                          sfcanl                 binary
tcvitl.$CDUMP.$CDATE   Tropical Storm Vitals                     syndata.tcvitals.tm00  text
adpsfc.$CDUMP.$CDATE   Surface land
NASA Astrophysics Data System (ADS)
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto; Marrocu, Marino
2017-03-01
Distribution mapping has been identified as the most efficient approach to bias-correct climate model rainfall while reproducing its statistics at spatial and temporal resolutions suitable for running hydrologic models. Yet its implementation based on empirical distributions derived from control samples (referred to as nonparametric distribution mapping) makes the method's performance sensitive to sample length variations, the presence of outliers, and the spatial resolution of climate model results, and may lead to biases, especially in extreme rainfall estimation. To address these shortcomings, we propose a methodology for simultaneous bias correction and high-resolution downscaling of climate model rainfall products that uses: (a) a two-component theoretical distribution model (i.e., a generalized Pareto (GP) model for rainfall intensities above a specified threshold u*, and an exponential model for lower rain rates), and (b) proper interpolation of the corresponding distribution parameters on a user-defined high-resolution grid, using kriging for uncertain data. We assess the performance of the suggested parametric approach relative to the nonparametric one, using daily rain gauge measurements from a dense network on the island of Sardinia (Italy) and rainfall data from four GCM/RCM model chains of the ENSEMBLES project. The obtained results shed light on the competitive advantages of the parametric approach, which proves more accurate and considerably less sensitive to the characteristics of the calibration period, independent of the GCM/RCM combination used. This is especially the case for extreme rainfall estimation, where the GP assumption allows for more accurate and robust estimates, also beyond the range of the available data.
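A minimal sketch of the proposed two-component mapping, assuming positive rainfall intensities, a chosen threshold u*, and simple moment/ML fits; function names are illustrative, and the kriging of parameters onto the high-resolution grid is omitted.

```python
import numpy as np
from scipy import stats

def fit_mixture(x, u):
    """Two-component fit to positive rainfall x: truncated exponential below
    threshold u, generalized Pareto for excesses above u."""
    lo, hi = x[(x > 0) & (x <= u)], x[x > u]
    p_u = lo.size / (lo.size + hi.size)          # weight of the lower component
    lam = 1.0 / lo.mean()                        # exponential rate (moment estimate)
    c, _, scale = stats.genpareto.fit(hi - u, floc=0.0)
    return dict(u=u, p_u=p_u, lam=lam, c=c, scale=scale)

def cdf(x, f):
    x = np.asarray(x, dtype=float)
    below = f['p_u'] * stats.expon.cdf(x, scale=1/f['lam']) \
                     / stats.expon.cdf(f['u'], scale=1/f['lam'])
    above = f['p_u'] + (1 - f['p_u']) * stats.genpareto.cdf(x - f['u'], f['c'],
                                                            scale=f['scale'])
    return np.where(x <= f['u'], below, above)

def ppf(q, f):
    q = np.asarray(q, dtype=float)
    lo = stats.expon.ppf(q / f['p_u'] * stats.expon.cdf(f['u'], scale=1/f['lam']),
                         scale=1/f['lam'])
    hi = f['u'] + stats.genpareto.ppf((q - f['p_u']) / (1 - f['p_u']), f['c'],
                                      scale=f['scale'])
    return np.where(q <= f['p_u'], lo, hi)

def bias_correct(x_model, fit_model, fit_obs):
    """Distribution mapping: push model values through F_model, pull back
    through F_obs^-1 (zeros are left to a separate wet-day correction)."""
    return ppf(cdf(x_model, fit_model), fit_obs)
```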
Bias-correction of PERSIANN-CDR Extreme Precipitation Estimates Over the United States
NASA Astrophysics Data System (ADS)
Faridzad, M.; Yang, T.; Hsu, K. L.; Sorooshian, S.
2017-12-01
Ground-based precipitation measurements can be sparse or even nonexistent over remote regions, which makes extreme event analysis difficult. PERSIANN-CDR (CDR), with 30+ years of daily rainfall information, provides an opportunity to study precipitation in regions where ground measurements are limited. In this study, the use of CDR annual extreme precipitation for frequency analysis of extreme events over poorly gauged and ungauged basins is explored. The adjustment of CDR is implemented in two steps: (1) calculate the CDR bias correction factor at the available gauge locations, based on a linear regression of gauge against CDR annual maximum precipitation; and (2) extend the bias correction factor to locations where gauges are not available. The correction factors are estimated at gauge sites over various catchments, elevation zones, and climate regions, and the results are generalized to ungauged sites based on regional and climatic similarity. Case studies were conducted on 20 basins with diverse climates and altitudes in the Eastern and Western US. Cross-validation reveals that bias correction factors estimated on limited calibration data can be extended to regions with similar characteristics. The adjusted CDR estimates also consistently outperform gauge interpolation on validation sites. This suggests that bias-adjusted CDR has potential for frequency analysis of extreme events, especially in regions with limited gauge observations.
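A hedged sketch of step (1), assuming the "linear regression" is a zero-intercept fit of gauge on CDR annual maxima that yields a multiplicative factor (the abstract does not spell out the exact regression form); all numbers in the usage example are invented.

```python
import numpy as np

def cdr_correction_factor(gauge_annual_max, cdr_annual_max):
    """Multiplicative bias-correction factor from a zero-intercept linear
    regression of gauge annual maxima on collocated CDR annual maxima."""
    g = np.asarray(gauge_annual_max, float)
    c = np.asarray(cdr_annual_max, float)
    return (c @ g) / (c @ c)        # least-squares slope through the origin

# Hypothetical use: transfer the factor fitted at gauged sites to an
# ungauged site judged climatically similar, then adjust its CDR maxima.
factor = cdr_correction_factor([88.0, 102.0, 95.0], [70.0, 81.0, 77.0])
corrected = factor * np.array([65.0, 90.0, 74.0])   # ungauged-site CDR maxima
```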
Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo
2017-01-01
We present quad-constellation (namely, GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly established, and the equivalence of TGD and DCB correction models is demonstrated both in theory and in practice. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and of DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation SPP performs better after DCB/TGD correction: for GPS-only b1-based SPP, the positioning accuracies improve by 25.0%, 30.6% and 26.7% in the N, E, and U components, respectively, after differential code bias correction, while GPS/GLONASS/BDS b1-based SPP improves by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies improve by 2.0%, 2.0% and 0.4% in the N, E, and U components, respectively, after correction of the Galileo satellite DCBs. The accuracy of Galileo-only b1-based SPP improves by about 48.6%, 34.7% and 40.6% with DCB correction in the N, E, and U components, respectively. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation SPP, the accuracy of single-frequency solutions is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the combination of the 2nd and 3rd frequencies: for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after correction with DCB parameters. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction. PMID:28300787
Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo
2017-03-16
We present quad-constellation (namely, GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly established, and the equivalence of TGD and DCB correction models is demonstrated both in theory and in practice. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and of DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation SPP performs better after DCB/TGD correction: for GPS-only b1-based SPP, the positioning accuracies improve by 25.0%, 30.6% and 26.7% in the N, E, and U components, respectively, after differential code bias correction, while GPS/GLONASS/BDS b1-based SPP improves by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies improve by 2.0%, 2.0% and 0.4% in the N, E, and U components, respectively, after correction of the Galileo satellite DCBs. The accuracy of Galileo-only b1-based SPP improves by about 48.6%, 34.7% and 40.6% with DCB correction in the N, E, and U components, respectively. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation SPP, the accuracy of single-frequency solutions is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the combination of the 2nd and 3rd frequencies: for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after correction with DCB parameters. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction.
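For the GPS leg of such models, the broadcast-clock convention of IS-GPS-200 gives a compact illustration; the quad-constellation and DCB generalizations in the paper follow the same pattern. A minimal sketch:

```python
C = 299_792_458.0                      # speed of light, m/s
F1, F2 = 1575.42e6, 1227.60e6          # GPS L1/L2 carrier frequencies, Hz
GAMMA = (F1 / F2) ** 2                 # (f1/f2)^2 scaling for the L2 user

def gps_clock_tgd(dtsv_if, tgd, band="L1"):
    """Apply the broadcast TGD to the satellite clock offset. Per IS-GPS-200,
    the broadcast clock refers to the ionosphere-free combination, so a
    single-frequency user subtracts TGD (L1) or GAMMA * TGD (L2)."""
    return dtsv_if - (tgd if band == "L1" else GAMMA * tgd)

# The corrected clock then enters the pseudorange equation as c * dt_sv;
# dual-frequency ionosphere-free users skip the TGD term entirely.
```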
Correction of sampling bias in a cross-sectional study of post-surgical complications.
Fluss, Ronen; Mandel, Micha; Freedman, Laurence S; Weiss, Inbal Salz; Zohar, Anat Ekka; Haklai, Ziona; Gordon, Ethel-Sherry; Simchen, Elisheva
2013-06-30
Cross-sectional designs are often used to monitor the proportion of infections and other post-surgical complications acquired in hospitals. However, conventional methods for estimating incidence proportions when applied to cross-sectional data may provide estimators that are highly biased, as cross-sectional designs tend to include a high proportion of patients with prolonged hospitalization. One common solution is to use sampling weights in the analysis, which adjust for the sampling bias inherent in a cross-sectional design. The current paper describes in detail a method to build weights for a national survey of post-surgical complications conducted in Israel. We use the weights to estimate the probability of surgical site infections following colon resection, and validate the results of the weighted analysis by comparing them with those obtained from a parallel study with a historically prospective design. Copyright © 2012 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isselhardt, B. H.; Prussin, S. G.; Savina, M. R.
2016-01-01
Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process to discriminate between uranium atoms and potential isobars without the aid of chemical purification and separation. The use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of the U-235/U-238 ratio to decrease laser-induced isotopic fractionation. In application, isotope standards are used to identify and correct bias in measured isotope ratios, but understanding laser-induced bias from first principles can improve the precision and accuracy of experimental measurements. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variations in laser parameters on the measured isotope ratio. The model uses atomic data and empirical descriptions of laser performance to estimate the laser-induced bias expected in experimental measurements of the U-235/U-238 ratio. Empirical corrections are also included to account for ionization processes that are difficult to calculate from first principles with the available atomic data. Development of this model has highlighted several important considerations for properly interpreting experimental results.
Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M
2018-02-01
Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. In practice, however, some covariates might be associated with both lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, like the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach where dependent censoring is ignored.
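The estimator itself is compact once the censoring model has produced, for each subject, the probability of remaining uncensored up to their observed time. The paper supplies an R implementation; the sketch below is a language-neutral NumPy version, with cens_surv_at standing in for whatever censoring model is fitted (e.g., a Kaplan-Meier on the flipped indicator, stratified by covariates).

```python
import numpy as np

def ipcw_km(time, event, cens_surv_at):
    """IPC-weighted Kaplan-Meier. Each subject is weighted by the inverse of
    the estimated probability of being uncensored at its observed time.

    time         : array of observed times
    event        : 1 = event of interest, 0 = censored
    cens_surv_at : vectorized callable t -> estimated censoring survival K_c(t)
    """
    w = 1.0 / np.maximum(cens_surv_at(time), 1e-8)   # guard tiny probabilities
    surv, s = [], 1.0
    for t in np.unique(time[event == 1]):
        at_risk = w[time >= t].sum()                 # weighted risk set
        d = w[(time == t) & (event == 1)].sum()      # weighted events at t
        s *= 1.0 - d / at_risk
        surv.append((t, s))
    return surv                                       # [(event time, S(t)), ...]
```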
Isselhardt, B. H.; Prussin, S. G.; Savina, M. R.; ...
2015-12-07
Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process to discriminate between uranium atoms and potential isobars without the aid of chemical purification and separation. The use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of the 235U/238U ratio to decrease laser-induced isotopic fractionation. In application, isotope standards are used to identify and correct bias in measured isotope ratios, but understanding laser-induced bias from first principles can improve the precision and accuracy of experimental measurements. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variations in laser parameters on the measured isotope ratio. The model uses atomic data and empirical descriptions of laser performance to estimate the laser-induced bias expected in experimental measurements of the 235U/238U ratio. Empirical corrections are also included to account for ionization processes that are difficult to calculate from first principles with the available atomic data. As a result, development of this model has highlighted several important considerations for properly interpreting experimental results.
An Uncertainty Data Set for Passive Microwave Satellite Observations of Warm Cloud Liquid Water Path
Bennartz, Ralf; Lebsock, Matthew; Teixeira, João
2018-01-01
The first extended comprehensive data set of the retrieval uncertainties in passive microwave observations of cloud liquid water path (CLWP) for warm oceanic clouds has been created for practical use in climate applications. Four major sources of systematic errors were considered over the 9-year record of the Advanced Microwave Scanning Radiometer-EOS (AMSR-E): clear-sky bias, cloud-rain partition (CRP) bias, cloud-fraction-dependent bias, and cloud temperature bias. Errors were estimated using a unique merged AMSR-E/Moderate Resolution Imaging Spectroradiometer Level 2 data set as well as observations from the Cloud-Aerosol Lidar with Orthogonal Polarization and the CloudSat Cloud Profiling Radar. To quantify the CRP bias more accurately, a new parameterization was developed to improve the inference of CLWP in warm rain. The cloud-fraction-dependent bias was found to be a combination of the CRP bias, an in-cloud bias, and an adjacent precipitation bias. Globally, the mean net bias was 0.012 kg/m², dominated by the CRP and in-cloud biases, but with considerable regional and seasonal variation. Good qualitative agreement between a bias-corrected AMSR-E CLWP climatology and ship observations in the Northeast Pacific suggests that the bias estimates are reasonable. However, a possible underestimation of the net bias in certain conditions may be due in part to the crude method used in classifying precipitation, underscoring the need for an independent method of detecting rain in warm clouds. This study demonstrates the importance of combining visible-infrared imager data and passive microwave CLWP observations for estimating uncertainties and improving the accuracy of these observations. PMID:29938146
Tisdall, M Dylan; Reuter, Martin; Qureshi, Abid; Buckner, Randy L; Fischl, Bruce; van der Kouwe, André J W
2016-02-15
Recent work has demonstrated that subject motion produces systematic biases in the metrics computed by widely used morphometry software packages, even when the motion is too small to produce noticeable image artifacts. In the common situation where the control population exhibits different behaviors in the scanner when compared to the experimental population, these systematic measurement biases may produce significant confounds for between-group analyses, leading to erroneous conclusions about group differences. While previous work has shown that prospective motion correction can improve perceived image quality, here we demonstrate that, in healthy subjects performing a variety of directed motions, the use of the volumetric navigator (vNav) prospective motion correction system significantly reduces the motion-induced bias and variance in morphometry. Copyright © 2015 Elsevier Inc. All rights reserved.
2012-01-01
Background: Dental caries is highly prevalent and a significant public health problem among children throughout the world. Epidemiological data regarding the prevalence of dental caries among Pakistani pre-school children are very limited. The objective of this study is to determine the frequency of dental caries among pre-school children of Saddar Town, Karachi, Pakistan and the factors related to caries. Methods: A cross-sectional study of 1000 preschool children was conducted in Saddar Town, Karachi. Two-stage cluster sampling was used to select the sample. At the first stage, eight clusters were selected randomly from a total of 11 clusters. In the second stage, preschools were identified within the eight selected clusters and children in the 3- to 6-year age group were assessed for dental caries. Results: Caries prevalence was 51%, with a mean dmft score of 2.08 (±2.97), of which decayed teeth constituted 1.95. The mean dmft of males was 2.3 (±3.08) and of females 1.90 (±2.90). The mean dmft of 3-, 4-, 5- and 6-year-olds was 1.65, 2.11, 2.16 and 3.11, respectively. A significant association was found between dental caries and the following variables: age group of 4 years (p-value ≤ 0.029, RR = 1.248, 95% bias-corrected CI 0.029-0.437) and 5 years (p-value ≤ 0.009, RR = 1.545, 95% bias-corrected CI 0.047-0.739), presence of dental plaque (p-value ≤ 0.003, RR = 0.744, 95% bias-corrected CI (−0.433)-(−0.169)), poor oral hygiene (p-value ≤ 0.000, RR = 0.661, 95% bias-corrected CI (−0.532)-(−0.284)), and consumption of non-sweetened milk (p-value ≤ 0.049, RR = 1.232, 95% bias-corrected CI 0.061-0.367). Conclusion: Half of the preschoolers had dental caries, coupled with a high prevalence of unmet dental treatment needs. Associations between caries experience and age of child, consumption of non-sweetened milk, dental plaque and poor oral hygiene were established. PMID:23270546
2008-07-01
EPA emission standards, the EPA has also specified the measurement methods. According to EPA, the most accurate and precise method of determining ... function of particle size and refractive index. If particle size distributions and refractive indices in diesel exhaust strongly depend on the ... to correct the bias of the raw SFTM data and align the data with the values determined by the federal reference method. Thus, to use these methods a
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong
2010-01-01
This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partly yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
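The simplest instance of such a method-of-moments correction, for a single regressor with classical error and no Berkson or autocorrelated components, is the familiar attenuation (reliability-ratio) adjustment. A sketch of that special case, not the paper's full mixed-model machinery:

```python
def mom_deattenuate(beta_naive, var_x_true, var_error):
    """Method-of-moments correction for classical measurement error in a
    single regressor: the naive slope is attenuated by the reliability
    ratio lambda = var(X) / (var(X) + var(U)), so divide it back out."""
    reliability = var_x_true / (var_x_true + var_error)
    return beta_naive / reliability

# e.g., replicate exposure measurements give var_error; then
# var_x_true = var(observed W) - var_error, and the naive slope from
# regressing the health outcome on W is corrected as above.
```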
Chromý, Vratislav; Vinklárková, Bára; Šprongl, Luděk; Bittová, Miroslava
2015-01-01
We found previously that albumin-calibrated total protein in certified reference materials causes unacceptable positive bias in analysis of human sera. The simplest way to cure this defect is the use of human-based serum/plasma standards calibrated by the Kjeldahl method. Such standards, commutative with serum samples, will compensate for bias caused by lipids and bilirubin in most human sera. To find a suitable primary reference procedure for total protein in reference materials, we reviewed Kjeldahl methods adopted by laboratory medicine. We found two methods recommended for total protein in human samples: an indirect analysis based on total Kjeldahl nitrogen corrected for its nonprotein nitrogen and a direct analysis made on isolated protein precipitates. The methods found will be assessed in a subsequent article.
Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.
2011-01-01
Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data, permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participants were compared to non-participants following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of intervention effect between an odds ratio (OR) of 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit from the introduction of weighting methods such as IPW. PMID:20375927
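A minimal IPW sketch of the core mechanics, assuming a binary participation indicator and using a regularized logistic propensity model for brevity; the paper's estimators additionally weight for loss to follow-up and report odds ratios from weighted regressions, which this toy risk-difference version does not.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_effect(X, treated, outcome):
    """Inverse probability weighting: fit a propensity model, weight each
    subject by 1/P(their own exposure | covariates), then compare weighted
    outcome means (a marginal risk difference)."""
    treated = np.asarray(treated)
    y = np.asarray(outcome, float)
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))   # IP weights
    mean1 = np.average(y[treated == 1], weights=w[treated == 1])
    mean0 = np.average(y[treated == 0], weights=w[treated == 0])
    return mean1 - mean0
```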
Matzke, Nicholas J; Irmis, Randall B
2018-01-01
Tip-dating, where fossils are included as dated terminal taxa in Bayesian dating inference, is an increasingly popular method. Data for these studies often come from morphological character matrices originally developed for non-dated, and usually parsimony, analyses. In parsimony, only shared derived characters (synapomorphies) provide grouping information, so many character matrices have an ascertainment bias: they omit autapomorphies (unique derived character states), which are considered uninformative. There has been no study of the effect of this ascertainment bias in tip-dating, but autapomorphies can be informative in model-based inference. We expected that excluding autapomorphies would shorten the morphological branch lengths of terminal branches, and thus bias downwards the time branch lengths inferred in tip-dating. We tested for this effect using a matrix for Carboniferous-Permian eureptiles where all autapomorphies had been deliberately coded. Surprisingly, date estimates are virtually unchanged when autapomorphies are excluded, although we find large changes in morphological rate estimates and small effects on topological and dating confidence. We hypothesized that the puzzling lack of effect on dating was caused by the non-clock nature of the eureptile data. We confirm this explanation by simulating strict clock and non-clock datasets, showing that autapomorphy exclusion biases dating only in the clocklike case. A theoretical solution to ascertainment bias is computing the ascertainment bias correction (Mk-parsinf), but we explore this correction in detail and show that it is computationally impractical for typical datasets with many character states and taxa. We therefore recommend that palaeontologists collect autapomorphies whenever possible when assembling character matrices.
NASA Astrophysics Data System (ADS)
Nunes, Ana
2015-04-01
Extreme meteorological events played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach, centered here on a regional modeling system consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct South America's current climate within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually provide improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions and providing accurate hydrometeorological variables to higher-resolution geomorphological models. Better representation of deep convection at intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and the estimation of terrain stability over complex topography. The reconstruction of past extreme events also helps the development of a system for decision-making regarding natural and social disasters, reducing impacts. Numerical experiments using this regional modeling system successfully modeled severe weather events in Brazil. Comparisons with the NCEP Climate Forecast System Reanalysis outputs were made at resolutions of about 40 and 25 km of the regional climate model.
Fine-tuning satellite-based rainfall estimates
NASA Astrophysics Data System (ADS)
Harsa, Hastuadi; Buono, Agus; Hidayat, Rahmat; Achyar, Jaumil; Noviati, Sri; Kurniawan, Roni; Praja, Alfan S.
2018-05-01
Rainfall datasets are available from various sources, including satellite estimates and ground observations. Ground observation locations are sparsely scattered, so the use of satellite estimates is advantageous: satellites can provide data in places where ground observations are not present. In general, however, satellite estimates contain bias, since they are the product of algorithms that transform sensor responses into rainfall values. Another source of bias may be the limited number of ground observations used by the algorithms as reference in determining rainfall values. This paper describes the application of a bias correction method that modifies the satellite-based dataset by adding a number of ground observation locations that had not previously been used by the algorithm. The bias correction was performed by applying a Quantile Mapping procedure between ground observation data and satellite estimates. Since Quantile Mapping requires the mean and standard deviation of both the reference data and the data being corrected, an Inverse Distance Weighting scheme was first applied to the mean and standard deviation of the observation data to provide a spatial composition of these originally scattered statistics, making it possible to provide a reference data point at the same location as each satellite estimate. The results show that the new dataset represents the rainfall values recorded by the ground observations statistically better than the previous dataset.
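A sketch of the two ingredients as described, assuming the mapping uses only means and standard deviations (a Gaussian-moment form of Quantile Mapping); function names are illustrative.

```python
import numpy as np

def idw(xy_obs, values, xy_target, power=2.0):
    """Inverse Distance Weighting of a station statistic onto a target point."""
    xy_obs = np.asarray(xy_obs, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(xy_obs - np.asarray(xy_target, float), axis=1)
    if np.any(d == 0):                      # target coincides with a station
        return values[np.argmin(d)]
    w = d ** -power
    return np.sum(w * values) / np.sum(w)

def scale_correct(sat_series, mu_sat, sd_sat, mu_obs, sd_obs):
    """Mean/std quantile mapping of a satellite series at one pixel, using
    observation moments interpolated to that pixel (e.g., with idw above)."""
    return mu_obs + (np.asarray(sat_series, float) - mu_sat) * sd_obs / sd_sat
```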
The importance of topographically corrected null models for analyzing ecological point processes.
McDowall, Philip; Lynch, Heather J
2017-07-01
Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; ...
2016-03-03
Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.
Tropospheric GOM at the Pic du Midi Observatory-Correcting Bias in Denuder Based Observations.
Marusczak, Nicolas; Sonke, Jeroen E; Fu, Xuewu; Jiskra, Martin
2017-01-17
Gaseous elemental mercury (GEM, Hg) emissions are transformed to divalent reactive Hg (RM) forms throughout the troposphere and stratosphere. RM is often operationally quantified as the sum of particle-bound Hg (PBM) and gaseous oxidized Hg (GOM). The measurement of GOM and PBM is challenging and under mounting criticism. Here we intercompare six months of automated GOM and PBM measurements using a Tekran (TK) KCl-coated denuder and quartz regenerable particulate filter method (GOM_TK, PBM_TK, and RM_TK) with RM_CEM collected on cation exchange membranes (CEMs) at the high-altitude Pic du Midi Observatory. We find that RM_TK is systematically lower than RM_CEM by a factor of 1.3. We observe a significant relationship between GOM_TK (but not PBM_TK) and Tekran flush_TK blanks, suggesting significant loss (36%) of labile GOM_TK from the denuder or inlet. Adding the flush_TK blank to RM_TK results in good agreement with RM_CEM (slope = 1.01, r² = 0.90), suggesting we can correct bias in RM_TK and GOM_TK. We provide a bias-corrected (*) Pic du Midi data set for 2012-2014 that shows GOM* and RM* levels in dry free-tropospheric air of 198 ± 57 and 229 ± 58 pg m⁻³, which agree well with in-flight observed RM and with model-based GOM and RM estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville
Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, C.; Aldering, G.; Aragon, C.
2015-02-10
We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of the filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.
NASA Astrophysics Data System (ADS)
Babonis, G. S.; Csatho, B. M.; Schenk, A. F.
2016-12-01
We present a new record of Antarctic ice thickness changes, reconstructed from ICESat laser altimetry observations from 2004-2009 at over 100,000 locations across the Antarctic Ice Sheet (AIS). This work generates elevation time series at ICESat ground-track crossover regions on an observation-by-observation basis, with rigorous, quantified error estimates, using the SERAC approach (Schenk and Csatho, 2012). The results include average and annual elevation, volume and mass changes in Antarctica, fully corrected for glacial isostatic adjustment (GIA) and known intercampaign biases, and partitioned into contributions from surficial processes (e.g. firn densification) and ice dynamics. The modular flexibility of the SERAC framework allows for the assimilation of multiple ancillary datasets (e.g. GIA models, intercampaign bias corrections, IBC) in a common framework, to calculate mass changes for several different combinations of GIA models and IBCs and to arrive at a measure of variability from these results. We are able to determine the effect these corrections have on annual and average volume and mass change calculations in Antarctica, and to explore how these differences vary between drainage basins and with elevation. As such, this contribution presents a method that complements, and is consistent with, the 2012 Ice sheet Mass Balance Inter-comparison Exercise (IMBIE) results (Shepherd 2012). Additionally, this work will contribute to the 2016 IMBIE, which seeks to reconcile ice sheet mass changes from different observations, including laser altimetry, using different methodologies and ancillary datasets including GIA models, firn densification models, and intercampaign bias corrections.
Hayes, Alison J; Clarke, Philip M; Lung, Tom Wc
2011-09-25
Many studies have documented the bias in body mass index (BMI) determined from self-reported data on height and weight, but few have examined the change in bias over time. Using data from large, nationally representative population health surveys, we examined the change in bias in height and weight reporting among Australian adults between 1995 and 2008. Our study dataset included 9,635 men and women in 1995 and 9,141 in 2007-2008. We investigated the determinants of the bias and derived correction equations using 2007-2008 data, which can be applied when only self-reported anthropometric data are available. In 1995, self-reported BMI (derived from height and weight) was 1.2 units (men) and 1.4 units (women) lower than measured BMI. In 2007-2008, there was still underreporting, but the amount had declined to 0.6 units (men) and 0.7 units (women) below measured BMI. The major determinants of reporting error in 2007-2008 were age, sex, measured BMI, and education of the respondent. Correction equations for height and weight derived from 2007-2008 data and applied to self-reported data were able to adjust for the bias and were accurate across all age and sex strata. The diminishing reporting bias in BMI in Australia has two implications. First, correction equations derived from 2007-2008 data may not be transferable to earlier self-reported data. Second, predictions of future overweight and obesity in Australia based on trends in self-reported information are likely to be inaccurate, as the change in reporting bias will affect the apparent increase in self-reported obesity prevalence.
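The published correction equations take self-reported height and weight plus demographics as inputs; the sketch below shows only the functional form, with deliberately made-up placeholder coefficients that must be replaced by the values actually estimated from the 2007-2008 survey.

```python
def corrected_bmi(self_height_cm, self_weight_kg, age, male):
    """Illustrative correction-equation form only: linear adjustments to
    self-reported height and weight with age/sex terms. All coefficients
    below are hypothetical placeholders, NOT the published estimates."""
    h = 2.0 + 0.99 * self_height_cm - 0.02 * age + 0.5 * male    # hypothetical
    w = -1.0 + 1.02 * self_weight_kg + 0.01 * age - 0.3 * male   # hypothetical
    return w / (h / 100.0) ** 2                                   # BMI, kg/m^2
```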
NASA Astrophysics Data System (ADS)
Schoenberg, Ronny; von Blanckenburg, Friedhelm
2005-04-01
Multicollector ICP-MS-based stable isotope procedures provide the capability to determine small variations in the metal isotope composition of materials, but they are prone to substantial bias introduced by inadequate sample preparation. Such a "cryptic" bias is not necessarily identifiable from the measured isotope ratios. The analytical protocol for Fe isotope analyses of organic and inorganic materials described here identifies and avoids such pitfalls. In medium-mass-resolution mode of the ThermoFinnigan Neptune MC-ICP-MS, a 1-ppm Fe solution with an uptake rate of 50-70 µL min⁻¹ yielded 3 × 10⁻¹¹ A on 56Fe for the ThermoFinnigan stable introduction system and 1.2-1.8 × 10⁻¹⁰ A for the ESI Apex-Q uptake system. Sensitivity increased a further 3-5-fold when using Finnigan X-cones instead of the standard H-cones. The combination of the ESI Apex-Q apparatus and X-cones allowed the determination of the isotope composition on as little as 50 ng of Fe. Fe isotope compositions were corrected for mass bias both with the standard-sample bracketing (SSB) method and by using the 65Cu/63Cu ratio of added synthetic copper (Cu-doping) as an internal monitor of mass discrimination. Both methods provide identical results on high-purity Fe solutions of either synthetic or natural samples. We prefer the SSB method because of its shorter analysis time and more straightforward correction of instrumental mass bias compared to Cu-doping. Strong error correlations of the data are observed in three-isotope diagrams. Thus, we suggest that the quality assessment in such diagrams should be performed with error ellipses rather than error bars. Reproducibility of δ56Fe, δ57Fe and δ58Fe values of natural samples alone is not a sufficient criterion for accuracy. A set of tests is outlined that identifies cryptic matrix effects and ensures a reproducible level of quality control. Using these criteria and the SSB correction method, we determined the external reproducibilities for δ56Fe, δ57Fe and δ58Fe at the 95% confidence interval, from 318 measurements of 95 natural samples, to be 0.049, 0.071 and 0.28‰, respectively.
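The SSB correction itself is a one-liner: the sample ratio is expressed relative to the mean of the bracketing standard runs. A sketch with illustrative numbers:

```python
def delta_ssb(r_sample, r_std_before, r_std_after):
    """Standard-sample bracketing: report the sample isotope ratio relative
    to the mean of the two bracketing standard analyses, in per mil
    (e.g., delta56Fe from measured 56Fe/54Fe ratios)."""
    r_ref = 0.5 * (r_std_before + r_std_after)
    return (r_sample / r_ref - 1.0) * 1000.0

# delta_ssb(15.703, 15.698, 15.700) -> ~ +0.25 per mil (illustrative numbers)
```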
Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J.
2014-01-01
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured, the validity of mediation analysis can be severely undermined. In this paper we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches, using the method of moments, regression calibration, and SIMEX, are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. PMID:25220625
Correcting systematic bias and instrument measurement drift with mzRefinery
Gibbons, Bryson C.; Chambers, Matthew C.; Monroe, Matthew E.; ...
2015-08-04
Systematic bias in mass measurement adversely affects data quality and negates the advantages of high-precision instruments. We introduce the mzRefinery tool into the ProteoWizard package for calibration of mass spectrometry data files. Using confident peptide spectrum matches, three different calibration methods are explored and the optimal transform function is chosen. After calibration, systematic bias is removed and the mass measurement errors are centered at zero ppm. Because it is part of the ProteoWizard package, mzRefinery can read and write a wide variety of file formats. The mzRefinery tool is part of msConvert, available with the ProteoWizard open source package at http://proteowizard.sourceforge.net/
High-precision Ru isotopic measurements by multi-collector ICP-MS.
Becker, Harry; Dalpe, Claude; Walker, Richard J
2002-06-01
Ruthenium isotopic data for a pure Aldrich ruthenium nitrate solution obtained using a Nu Plasma multicollector inductively coupled plasma-mass spectrometer (MC-ICP-MS) show excellent agreement (better than 1 ε unit = 1 part in 10⁴) with data obtained by other techniques for the mass range between 96 and 101 amu. External precisions are at the 0.5-1.7 ε level (2σ). Higher sensitivity for MC-ICP-MS compared to negative thermal ionization mass spectrometry (N-TIMS) is offset by the uncertainties introduced by relatively large mass discrimination and instabilities in the plasma source-ion extraction region that affect the long-term reproducibility. The large mass bias correction in ICP mass spectrometry demands particular attention to be paid to the choice of normalizing isotopes. Because of its position in the mass spectrum and the large mass bias correction, obtaining precise and accurate abundance data for 104Ru by MC-ICP-MS remains difficult. Internal and external mass bias correction schemes in this mass range may show similar shortcomings if the isotope of interest does not lie within the mass range covered by the masses used for normalization. Analyses of meteorite samples show that if isobaric interferences from Mo are sufficiently large (Ru/Mo < 10⁴), uncertainties in the Mo interference correction propagate through the mass bias correction and yield inaccurate results for Ru isotopic compositions. Second-order linear corrections may be used to correct for these inaccuracies, but such results are generally less precise than N-TIMS data.
Density estimation in wildlife surveys
Bart, Jonathan; Droege, Sam; Geissler, Paul E.; Peterjohn, Bruce G.; Ralph, C. John
2004-01-01
Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2014-01-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For OSEM, image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-fluorodeoxyglucose dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation GTM PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in CMRGlc estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters. PMID:24052021
Assessing the implementation of bias correction in the climate prediction
NASA Astrophysics Data System (ADS)
Nadrah Aqilah Tukimat, Nurul
2018-04-01
Climate change has become an increasingly pressing and erratic issue. The continuing increase of greenhouse gas (GHG) emissions into the atmosphere strongly affects weather variability and global warming, making long-term analysis of climate parameters important. However, the accuracy of climate simulations must always be questioned to ensure the reliability of projection results. Thus, Linear Scaling (LS) was applied as a bias correction (BC) method to treat the gaps between observed and simulated results. Two rainfall stations in Pahang state were selected: Station Lubuk Paku and Station Temerloh. The Statistical Downscaling Model (SDSM) was used to relate local weather to atmospheric predictors in projecting the long-term rainfall trend. The results revealed that LS successfully reduced the error by up to 3% and produced better simulated climate results.
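The Linear Scaling correction itself is a one-liner per month: for rainfall, the ratio of observed to simulated historical monthly means scales the simulated series (an additive shift is the usual choice for temperature). A minimal sketch follows; the monthly grouping and signature are illustrative, not taken from the paper.

```python
import numpy as np

def linear_scaling_precip(sim_hist, obs_hist, sim_future, months_hist, months_future):
    """Linear Scaling bias correction for precipitation (sketch): scale each
    future month by the observed/simulated ratio of historical monthly means."""
    corrected = np.array(sim_future, dtype=float)
    for m in range(1, 13):
        factor = obs_hist[months_hist == m].mean() / sim_hist[months_hist == m].mean()
        corrected[months_future == m] *= factor
    return corrected
```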
Time bias corrections to predictions
NASA Technical Reports Server (NTRS)
Wood, Roger; Gibbs, Philip
1993-01-01
The importance of an accurate knowledge of the time bias corrections to predicted orbits to a satellite laser ranging (SLR) observer, especially for low satellites, is highlighted. Sources of time bias values and the optimum strategy for extrapolation are discussed from the viewpoint of the observer wishing to maximize the chances of getting returns from the next pass. What is said may be seen as a commercial encouraging wider and speedier use of existing data centers for mutually beneficial exchange of time bias data.
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
NASA Astrophysics Data System (ADS)
Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien
2018-04-01
The CNES/DLR
Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S
2018-02-01
Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction brain processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes, being entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need information from an atlas, which is not available in T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures, making segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical Multiple Sclerosis need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
Hua, Wei; Sun, Guoying; Dodd, Caitlin N; Romio, Silvana A; Whitaker, Heather J; Izurieta, Hector S; Black, Steven; Sturkenboom, Miriam C J M; Davis, Robert L; Deceuninck, Genevieve; Andrews, N J
2013-08-01
The assumption that the occurrence of the outcome event must not alter subsequent exposure probability is critical for preserving the validity of the self-controlled case series (SCCS) method. This assumption is violated in scenarios in which the event constitutes a contraindication for exposure. In this simulation study, we compared the performance of the standard SCCS approach and two alternative approaches when the event-independent exposure assumption was violated. Using the 2009 H1N1 and seasonal influenza vaccines and Guillain-Barré syndrome as a model, we simulated a scenario in which an individual may encounter multiple unordered exposures and each exposure may be contraindicated by the occurrence of the outcome event. The degree of contraindication was varied at 0%, 50%, and 100%. The first alternative approach used only cases occurring after exposure, with follow-up time starting from exposure. The second used a pseudo-likelihood method. When the event-independent exposure assumption was satisfied, the standard SCCS approach produced nearly unbiased relative incidence estimates. When this assumption was partially or completely violated, two alternative SCCS approaches could be used. While the post-exposure cases only approach could handle only one exposure, the pseudo-likelihood approach was able to correct bias for both exposures. Violation of the event-independent exposure assumption leads to an overestimation of relative incidence, which can be corrected by alternative SCCS approaches. In multiple exposure situations, the pseudo-likelihood approach is optimal; the post-exposure cases only approach is limited in handling a second exposure and may introduce additional bias, and thus should be used with caution. Copyright © 2013 John Wiley & Sons, Ltd.
North Atlantic climate model bias influence on multiyear predictability
NASA Astrophysics Data System (ADS)
Wu, Y.; Park, T.; Park, W.; Latif, M.
2018-01-01
The influences of North Atlantic biases on multiyear predictability of unforced surface air temperature (SAT) variability are examined in the Kiel Climate Model (KCM). By employing a freshwater flux correction over the North Atlantic to the model, which strongly alleviates both North Atlantic sea surface salinity (SSS) and sea surface temperature (SST) biases, the freshwater flux-corrected integration depicts significantly enhanced multiyear SAT predictability in the North Atlantic sector in comparison to the uncorrected one. The enhanced SAT predictability in the corrected integration is due to a stronger and more variable Atlantic Meridional Overturning Circulation (AMOC) and its enhanced influence on North Atlantic SST. Results obtained from preindustrial control integrations of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) support the findings obtained from the KCM: models with large North Atlantic biases tend to have a weak AMOC influence on SAT and exhibit a smaller SAT predictability over the North Atlantic sector.
Differential sea-state bias: A case study using TOPEX/POSEIDON data
NASA Technical Reports Server (NTRS)
Stewart, Robert H.; Devalla, B.
1994-01-01
We used selected data from the NASA altimeter TOPEX/POSEIDON to calculate differences in range measured by the C and Ku-band altimeters when the satellite overflew 5 to 15 m waves late at night. The range difference is due to free electrons in the ionosphere and to errors in sea-state bias. For the selected data the ionospheric influence on Ku range is less than 2 cm. Any difference in range over short horizontal distances is due only to a small along-track variability of the ionosphere and to errors in calculating the differential sea-state bias. We find that there is a barely detectable error in the bias in the geophysical data records. The wave-induced error in the ionospheric correction is less than 0.2% of significant wave height. The equivalent error in differential range is less than 1% of wave height. Errors in the differential sea-state bias calculations appear to be small even for extreme wave heights that greatly exceed the conditions on which the bias is based. The results also improved our confidence in the sea-state bias correction used for calculating the geophysical data records. Any error in the correction must influence Ku and C-band ranges almost equally.
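The dual-frequency logic behind the differential measurement can be sketched directly. Assuming the standard 1/f² ionospheric range delay and the TOPEX Ku (about 13.575 GHz) and C (about 5.3 GHz) carrier frequencies, the Ku-band delay follows from the Ku/C range difference; any residual error in the differential sea-state bias leaks into this estimate, which is the effect the paper bounds.

```python
F_KU = 13.575e9   # TOPEX Ku-band frequency, Hz
F_C = 5.3e9       # TOPEX C-band frequency, Hz

def iono_delay_ku(range_ku_m, range_c_m):
    """Ku-band ionospheric range delay from the dual-frequency range
    difference, assuming delay = A / f**2 on each channel. Both ranges must
    already include their respective sea-state bias corrections; an error
    there maps directly into this delay estimate."""
    return (range_ku_m - range_c_m) * F_C**2 / (F_C**2 - F_KU**2)
```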
Correcting Measurement Error in Latent Regression Covariates via the MC-SIMEX Method
ERIC Educational Resources Information Center
Rutkowski, Leslie; Zhou, Yan
2015-01-01
Given the importance of large-scale assessments to educational policy conversations, it is critical that subpopulation achievement is estimated reliably and with sufficient precision. Despite this importance, biased subpopulation estimates have been found to occur when variables in the conditioning model side of a latent regression model contain…
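MC-SIMEX, named in the title, follows the simulation-extrapolation recipe: deliberately add extra misclassification at increasing severities, refit the naive model each time, and extrapolate the estimate back to the no-misclassification point. A minimal sketch for a binary covariate with known symmetric flip probability follows; the quadratic extrapolant, the lambda grid, and the `fit_fn` interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mc_simex(x_obs, fit_fn, pi, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200, seed=0):
    """MC-SIMEX sketch: x_obs is a 0/1 covariate observed with symmetric
    misclassification probability pi; fit_fn maps a covariate vector to a
    scalar (naive) estimate."""
    rng = np.random.default_rng(seed)
    lams, ests = [0.0], [fit_fn(x_obs)]
    for lam in lambdas:
        # Extra flip probability so total severity corresponds to Pi**(1+lam).
        p_extra = 0.5 * (1.0 - (1.0 - 2.0 * pi) ** lam)
        sims = [fit_fn(np.where(rng.random(x_obs.shape) < p_extra,
                                1 - x_obs, x_obs))
                for _ in range(n_sim)]
        lams.append(lam)
        ests.append(float(np.mean(sims)))
    # Extrapolate a quadratic in lambda back to lambda = -1 (no error).
    return np.polyval(np.polyfit(lams, ests, deg=2), -1.0)
```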
Sixth Annual Flight Mechanics/Estimation Theory Symposium
NASA Technical Reports Server (NTRS)
Lefferts, E. (Editor)
1981-01-01
Methods of orbital position estimation were reviewed. The problem of accuracy in orbital mechanics is discussed, and various techniques in current use are presented along with suggested improvements. Of special interest is the compensation for bias in satellite-borne instruments due to attitude instabilities. Image processing and correction techniques are reported for geodetic measurements and mapping.
Huang, Chuan; Ackerman, Jerome L.; Petibon, Yoann; Brady, Thomas J.; El Fakhri, Georges; Ouyang, Jinsong
2014-01-01
Purpose: Artifacts caused by head motion present a major challenge in brain positron emission tomography (PET) imaging. The authors investigated the feasibility of using wired active MR microcoils to track head motion and incorporate the measured rigid motion fields into iterative PET reconstruction. Methods: Several wired active MR microcoils and a dedicated MR coil-tracking sequence were developed. The microcoils were attached to the outer surface of an anthropomorphic 18F-filled Hoffman phantom to mimic a brain PET scan. Complex rotation/translation motion of the phantom was induced by a balloon, which was connected to a ventilator. PET list-mode and MR tracking data were acquired simultaneously on a PET-MR scanner. The acquired dynamic PET data were reconstructed iteratively with and without motion correction. Additionally, static phantom data were acquired and used as the gold standard. Results: Motion artifacts in PET images were effectively removed by wired active MR microcoil based motion correction. Motion correction yielded an activity concentration bias ranging from −0.6% to 3.4% as compared to a bias ranging from −25.0% to 16.6% if no motion correction was applied. The contrast recovery values were improved by 37%–156% with motion correction as compared to no motion correction. The image correlation (mean ± standard deviation) between the motion corrected (uncorrected) images of 20 independent noise realizations and static reference was R2 = 0.978 ± 0.007 (0.588 ± 0.010, respectively). Conclusions: Wired active MR microcoil based motion correction significantly improves brain PET quantitative accuracy and image contrast. PMID:24694141
Baruzzini, Matthew L.; Hall, Howard L.; Spencer, Khalil J.; ...
2018-04-22
Investigations of the isotope fractionation behaviors of plutonium and uranium reference standards were conducted employing platinum and rhenium (Pt/Re) porous ion emitter (PIE) sources, a relatively new thermal ionization mass spectrometry (TIMS) ion source strategy. The suitability of commonly employed, empirically developed mass bias correction laws (i.e., the Linear, Power, and Russell's laws) for correcting such isotope ratio data was also determined. Corrected plutonium isotope ratio data, regardless of mass bias correction strategy, were statistically identical to the certificate values; however, the isotope fractionation behavior of plutonium under the adopted experimental conditions was best described by the Power law. Finally, the fractionation behavior of uranium, under the analytical conditions described herein, is also most suitably modeled using the Power law, though Russell's and the Linear law for mass bias correction rendered results that were identical, within uncertainty, to the certificate value.
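The three correction laws compared here have standard closed forms. The sketch below uses the convention R = I(m2)/I(m1), calibrating the per-amu factor (or the Russell exponent beta) on a certified standard ratio and then applying it to samples; sign and normalization conventions vary between laboratories, so treat these definitions as assumptions.

```python
import numpy as np

def mass_bias_factor(r_meas, r_cert, m1, m2, law):
    """Fractionation factor from a certified ratio (sketch). r = I(m2)/I(m1)."""
    dM = m2 - m1
    k = r_cert / r_meas                      # correction factor K = R_true/R_meas
    if law == "linear":
        return (k - 1.0) / dM                # epsilon per amu
    if law == "power":
        return k ** (1.0 / dM) - 1.0         # epsilon per amu
    if law == "russell":
        return np.log(k) / np.log(m2 / m1)   # beta
    raise ValueError(law)

def correct_ratio(r_meas, m1, m2, law, f):
    """Apply the calibrated factor f to a measured sample ratio."""
    dM = m2 - m1
    if law == "linear":
        return r_meas * (1.0 + f * dM)
    if law == "power":
        return r_meas * (1.0 + f) ** dM
    if law == "russell":
        return r_meas * (m2 / m1) ** f
    raise ValueError(law)
```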
Bias atlases for segmentation-based PET attenuation correction using PET-CT and MR.
Ouyang, Jinsong; Chun, Se Young; Petibon, Yoann; Bonab, Ali A; Alpert, Nathaniel; Fakhri, Georges El
2013-10-01
The aim of this study was to obtain voxel-wise PET accuracy and precision using tissue segmentation for attenuation correction. We applied multiple thresholds to the CTs of 23 patients to classify tissues. For six of the 23 patients, MR images were also acquired. The MR fat/in-phase ratio images were used for fat segmentation. Segmented tissue classes were used to create attenuation maps, which were used for attenuation correction in PET reconstruction. PET bias images were then computed using the PET reconstructed with the original CT as the reference. We registered the CTs for all the patients and transformed the corresponding bias images accordingly. We then obtained the mean and standard deviation bias atlases using all the registered bias images. Our CT-based study shows that four-class segmentation (air, lungs, fat, other tissues), which is available on most PET-MR scanners, yields 15.1%, 4.1%, 6.6%, and 12.9% RMSE bias in lungs, fat, non-fat soft tissues, and bones, respectively. Accurate fat identification is achievable using fat/in-phase MR images. Furthermore, we have found that three-class segmentation (air, lungs, other tissues) yields less than 5% standard deviation of bias within the heart, liver, and kidneys. This implies that three-class segmentation can be sufficient to achieve small variation of bias for imaging these three organs. Finally, we have found that inter- and intra-patient lung density variations contribute almost equally to the overall standard deviation of bias within the lungs.
On land-use modeling: A treatise of satellite imagery data and misclassification error
NASA Astrophysics Data System (ADS)
Sandler, Austin M.
Recent availability of satellite-based land-use data sets, including data sets with contiguous spatial coverage over large areas, relatively long temporal coverage, and fine-scale land cover classifications, is providing new opportunities for land-use research. However, care must be used when working with these datasets due to misclassification error, which causes inconsistent parameter estimates in the discrete choice models typically used to model land-use. I therefore adapt the empirical correction methods developed for other contexts (e.g., epidemiology) so that they can be applied to land-use modeling. I then use a Monte Carlo simulation, and an empirical application using actual satellite imagery data from the Northern Great Plains, to compare the results of a traditional model ignoring misclassification to those from models accounting for misclassification. Results from both the simulation and application indicate that ignoring misclassification will lead to biased results. Even seemingly insignificant levels of misclassification error (e.g., 1%) result in biased parameter estimates, which alter marginal effects enough to affect policy inference. At the levels of misclassification typical in current satellite imagery datasets (e.g., as high as 35%), ignoring misclassification can lead to systematically erroneous land-use probabilities and substantially biased marginal effects. The correction methods I propose, however, generate consistent parameter estimates and therefore consistent estimates of marginal effects and predicted land-use probabilities.
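A standard ingredient of such corrections is the confusion (misclassification) matrix estimated from ground-truthed pixels. As a simpler cousin of the discrete-choice correction the dissertation develops, the sketch below inverts a known confusion matrix to recover aggregate land-use shares; the matrix entries and class names are illustrative.

```python
import numpy as np

# M[i, j] = P(classified as j | true class i), e.g. from a confusion matrix
# built on ground-truthed pixels (values below are illustrative).
M = np.array([[0.90, 0.07, 0.03],    # cropland
              [0.10, 0.85, 0.05],    # grassland
              [0.02, 0.08, 0.90]])   # other

observed_shares = np.array([0.42, 0.38, 0.20])
# Observed shares satisfy p_obs = M.T @ p_true; invert to recover the truth.
true_shares = np.linalg.solve(M.T, observed_shares)
print(true_shares)
```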
Bias in Examination Test Banks that Accompany Cost Accounting Texts.
ERIC Educational Resources Information Center
Clute, Ronald C.; McGrail, George R.
1989-01-01
Eight text banks that accompany cost accounting textbooks were evaluated for the presence of bias in the distribution of correct responses. All but one were found to have considerable bias, and three of eight were found to have significant choice bias. (SK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Tao; Tsui, Benjamin M. W.; Li, Xin
Purpose: The radioligand 11C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of 11C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function, but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling, and to test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [11C]KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP-based image reconstruction method. However, the OS-EM based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartment kinetic model, the average difference between the estimated model parameters from the ID-IF and AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from the OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of 11C-KR31173 in experimental animals, and the authors intend to evaluate this method in future human studies.
Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A
2017-02-01
The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata, in correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After data collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples was assessed using variables from administrative databases available for the whole sample and not subject to differential measurement errors. Corrected prevalences were estimated by a reweighting technique, first using the initial survey alone and then the initial and complementary surveys combined, under several assumptions regarding the missing-data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents, with a protocol that aims to maximize the response rate, is useful. The contribution of sociodemographic variables in correcting for nonresponse bias is important, whereas the additional contribution of paradata is questionable. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
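The reweighting technique referred to here is commonly implemented as inverse-probability-of-response weighting on covariates known for the whole sample. A minimal sketch follows; the logistic response model is an illustrative choice, not necessarily the authors'.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nonresponse_weights(X_frame, responded):
    """Inverse-probability-of-response weights (sketch). X_frame holds
    sociodemographic covariates known for the whole sample; responded is a
    0/1 indicator. Respondents get weight 1/p(response | X)."""
    model = LogisticRegression().fit(X_frame, responded)
    p = model.predict_proba(X_frame)[:, 1]
    return np.where(responded == 1, 1.0 / p, 0.0)

# Corrected prevalence: weighted mean over respondents, e.g.
# prev = np.average(y[responded == 1], weights=w[responded == 1])
```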
Determining the bias and variance of a deterministic finger-tracking algorithm.
Morash, Valerie S; van der Velden, Bas H M
2016-06-01
Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78% of one-finger video frames and 97.55% of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σx = 0.16 cm, σy = 0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates.
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
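The telescoping structure described here, a cheap biased base estimator plus coupled correction terms, can be sketched generically. The `sampler` interface below is an assumption standing in for tau-leap paths coupled across accuracy levels; it is not the authors' code.

```python
import numpy as np

def mlmc_estimate(sampler, levels, n_paths):
    """Multi-level Monte Carlo telescoping estimator (sketch).
    sampler(level, n, coupled=False) -> n samples of the quantity of
        interest simulated at `level` accuracy.
    sampler(level, n, coupled=True) -> (fine, coarse) arrays sharing
        randomness between levels `level` and `level - 1`.
    The estimate is E[X_0] plus bias-reduction corrections E[X_l - X_{l-1}]."""
    est = float(np.mean(sampler(0, n_paths[0], coupled=False)))
    for l in range(1, levels):
        fine, coarse = sampler(l, n_paths[l], coupled=True)
        est += float(np.mean(fine - coarse))  # variance kept low by coupling
    return est
```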
Capiau, Sara; Wilk, Leah S; De Kesel, Pieter M M; Aalders, Maurice C G; Stove, Christophe P
2018-02-06
The hematocrit (Hct) effect is one of the most important hurdles currently preventing more widespread implementation of quantitative dried blood spot (DBS) analysis in a routine context. Indeed, the Hct may affect both the accuracy of DBS methods as well as the interpretation of DBS-based results. We previously developed a method to determine the Hct of a DBS based on its hemoglobin content using noncontact diffuse reflectance spectroscopy. Despite the ease with which the analysis can be performed (i.e., mere scanning of the DBS) and the good results that were obtained, the method did require a complicated algorithm to derive the total hemoglobin content from the DBS's reflectance spectrum. As the total hemoglobin was calculated as the sum of oxyhemoglobin, methemoglobin, and hemichrome, the three main hemoglobin derivatives formed in DBS upon aging, the reflectance spectrum needed to be unmixed to determine the quantity of each of these derivatives. We have now simplified the method by using the reflectance at only a single wavelength, located at a quasi-isosbestic point in the reflectance curve. At this wavelength, assuming 1-to-1 stoichiometry of the aging reaction, the reflectance is insensitive to hemoglobin degradation and scales only with the total amount of hemoglobin and, hence, the Hct. This simplified method was successfully validated. At each quality control level, as well as at the limits of quantitation (i.e., 0.20 and 0.67), bias and intra- and interday imprecision were within 10%. Method reproducibility was excellent based on incurred sample reanalysis and surpassed the reproducibility of the original method. Furthermore, the influence of the volume spotted, the measurement location within the spot, as well as storage time and temperature were evaluated, showing no relevant impact of these parameters. Application to 233 patient samples revealed a good correlation between the Hct determined on whole blood and the predicted Hct determined on venous DBS. The bias obtained with Bland and Altman analysis was -0.015 and the limits of agreement were -0.061 and 0.031, indicating that the simplified, noncontact Hct prediction method even outperforms the original method. In addition, using caffeine as a model compound, it was demonstrated that this simplified Hct prediction method can effectively be used to implement a Hct-dependent correction factor to DBS-based results to alleviate the Hct bias.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Elsayed
Purpose: To characterize and correct for radiation-induced background (RIB) observed in the signals from a class of scanning water tanks. Methods: A method was developed to isolate the RIB through detector measurements in the background-free linac console area. Variation of the RIB against a large number of parameters was characterized, and its impact on basic clinical data for photon and electron beams was quantified. Different methods to minimize and/or correct for the RIB were proposed and evaluated. Results: The RIB is due to the presence of the electrometer and connection box in a low background radiation field (by design). The absolute RIB current with a biased detector is up to 2 pA, independent of the detector size, which is 0.6% and 1.5% of the central axis reference signal for a standard and a mini scanning chamber, respectively. The RIB monotonically increases with field size, is three times smaller for detectors that do not require a bias (e.g., diodes), is up to 80% larger for positive (versus negative) polarity, decreases with increasing photon energy, exhibits a single curve versus dose rate at the electrometer location, and is negligible for electron beams. Data after the proposed field-size correction method agree with point measurements from an independent system to within a few tenths of a percent for output factor, head scatter, depth dose at depth, and out-of-field profile dose. Manufacturer recommendations for electrometer placement are insufficient and sometimes incorrect. Conclusions: RIB in scanning water tanks can have a non-negligible effect on dosimetric data.
An Evaluation of Unit and ½ Mass Correction Approaches as a ...
Rare earth elements (REE) and certain alkaline earths can produce M+2 interferences in ICP-MS because they have sufficiently low second ionization energies. Four REEs (150Sm, 150Nd, 156Gd and 156Dy) produce false positives on 75As and 78Se and 132Ba can produce a false positive on 66Zn. Currently, US EPA Method 200.8 does not address these as sources of false positives. Additionally, these M+2 false positives are typically enhanced if collision cell technology is utilized to reduce polyatomic interferences associated with ICP-MS detection. Correction equations can be formulated using either a unit or ½ mass approach. The ½ mass correction approach does not suffer from the bias generated from polyatomic or end user based contamination at the unit mass but is limited by the abundance sensitivity of the adjacent mass. For instance, the use of m/z 78 in a unit mass correction of 156Gd on m/z 78 can be biased by residual 40Ar38Ar and 78Se while the ½ mass approach can use 77.5 or 78.5 and is limited by the abundance sensitivity issues from mass 77 and 78 or 78 and 79, respectively. This presentation will evaluate the use of both unit and ½ mass correction approaches as a means of addressing M+2 false positives within the context of updating US EPA Method 200.8. This evaluation will include the analysis of As and Se standards near the detection limit in the presence of low (2ppb) and high (50ppb) levels of REE with benchmark concentrations estimated using
Classification bias in commercial business lists for retail food stores in the U.S.
2012-01-01
Background: Aspects of the food environment, such as the availability of different types of food stores, have recently emerged as key modifiable factors that may contribute to the increased prevalence of obesity. Given that many of these studies have derived their results from secondary datasets, and that the relationship of food stores with individual weight outcomes has been reported to vary by store type, it is important to understand the extent to which often-used secondary data correctly classify food stores. We evaluated the classification bias of food stores in Dun & Bradstreet (D&B) and InfoUSA commercial business lists. Methods: We performed a full census in 274 randomly selected census tracts in the Chicago metropolitan area and collected detailed store attributes inside stores for classification. Store attributes were compared by classification match status and store type. Systematic classification bias by census tract characteristics was assessed in multivariate regression. Results: D&B had a higher classification match rate than InfoUSA for supermarkets and grocery stores, while InfoUSA was higher for convenience stores. Both lists were more likely to correctly classify large supermarkets, grocery stores, and convenience stores with more cash registers and different types of service counters (supermarkets and grocery stores only). The likelihood of a correct classification match for supermarkets and grocery stores did not vary systematically by tract characteristics, whereas convenience stores were more likely to be misclassified in predominantly Black tracts. Conclusion: Researchers can rely on the classification of supermarkets and grocery stores in commercial datasets, whereas classifications of convenience and specialty food stores are subject to some systematic bias by neighborhood racial/ethnic composition. PMID:22512874
High-resolution dynamical downscaling of the future Alpine climate
NASA Astrophysics Data System (ADS)
Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph
2017-04-01
The Alpine region and Switzerland are a challenging area for simulating and analysing Global Climate Model (GCM) results. This is mostly due to the combination of a very complex topography and the still rather coarse horizontal resolution of current GCMs, in which not all of the many-scale processes that drive the local weather and climate can be resolved. In our study, the Weather Research and Forecasting (WRF) model is used to dynamically downscale a GCM simulation to a resolution as high as 2 km x 2 km. WRF is driven by initial and boundary conditions produced with the Community Earth System Model (CESM) for the recent past (control run) and until 2100 under the RCP8.5 climate scenario (future run). The control run downscaled with WRF covers the period 1976-2005, while the future run investigates a 20-year slice simulated for 2080-2099. We compare the control WRF-CESM simulations to an observational product provided by MeteoSwiss and to an additional WRF simulation driven by the ERA-Interim reanalysis, to estimate the bias introduced by the extra modelling step of our framework. Several bias-correction methods are evaluated, including a quantile mapping technique, to ameliorate the bias in the control WRF-CESM simulation. In the next step of our study these corrections are applied to the future WRF-CESM run. The resulting downscaled and bias-corrected data are analysed for the properties of precipitation and wind speed in the future climate. Our special interest focuses on the absolute quantities simulated for these meteorological variables, as these are used to identify extreme events such as wind storms and situations that can lead to floods.
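Of the bias-correction methods mentioned, quantile mapping is easy to sketch: each simulated future value is passed through the historical simulated CDF and then through the inverse observed CDF. The empirical version below is illustrative; values outside the historical simulated range are clipped by the interpolation.

```python
import numpy as np

def quantile_map(sim_hist, obs_hist, sim_future):
    """Empirical quantile mapping (sketch): replace each future simulated
    value by the observed value at the same quantile of the historical
    simulation."""
    q = np.interp(sim_future, np.sort(sim_hist),
                  np.linspace(0.0, 1.0, len(sim_hist)))  # quantile in model space
    return np.quantile(obs_hist, q)                      # map into observed space
```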
NASA Astrophysics Data System (ADS)
Fakhri, G. El; Kijewski, M. F.; Moore, S. C.
2001-06-01
Estimates of SPECT activity within certain deep brain structures could be useful for clinical tasks such as early prediction of Alzheimer's disease with Tc-99m or Parkinson's disease with I-123; however, such estimates are biased by poor spatial resolution and inaccurate scatter and attenuation corrections. We compared an analytical approach (AA) to more accurate quantitation with a slower iterative approach (IA). Monte Carlo simulated projections of 12 normal and 12 pathologic Tc-99m perfusion studies, as well as 12 normal and 12 pathologic I-123 neurotransmission studies, were generated using a digital brain phantom and corrected for scatter by a multispectral fitting procedure. The AA included attenuation correction by a modified Metz-Fan algorithm and activity estimation by a technique that incorporated Metz filtering to compensate for variable collimator response (VCR); the IA modeled attenuation and VCR in the projector/backprojector of an ordered subsets-expectation maximization (OSEM) algorithm. Bias and standard deviation over the 12 normal and 12 pathologic patients were calculated with respect to the reference values in the corpus callosum, caudate nucleus, and putamen. The IA and AA yielded similar quantitation results in both Tc-99m and I-123 studies in all brain structures considered, in both normal and pathologic patients. The bias with respect to the reference activity distributions was less than 7% for Tc-99m studies, but greater than 30% for I-123 studies, due to partial volume effect in the striata. Our results were validated using I-123 physical acquisitions of an anthropomorphic brain phantom. The AA yielded quantitation accuracy comparable to that obtained with the IA, while requiring much less processing time. However, in most conditions, the IA yielded lower noise for the same bias than did the AA.
Improving RNA-Seq expression estimates by correcting for fragment bias
2011-01-01
The biochemistry of RNA-Seq library preparation results in cDNA fragments that are not uniformly distributed within the transcripts they represent. This non-uniformity must be accounted for when estimating expression levels, and we show how to perform the needed corrections using a likelihood based approach. We find improvements in expression estimates as measured by correlation with independently performed qRT-PCR and show that correction of bias leads to improved replicability of results across libraries and sequencing technologies. PMID:21410973
Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models
NASA Astrophysics Data System (ADS)
Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura
2014-09-01
Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc., are a common problem in environmental models, including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors. Hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem, Thompson et al. (2006) introduced frequency-dependent nudging, where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy-resolving ocean circulation models. Here we add a stability term to the previous form of frequency-dependent nudging, which makes the method more robust for non-linear biological models. We then assess the utility of frequency-dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we only nudge in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency-dependent nudging in comparison to conventional nudging and find significant improvements with the former.
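The contrast with conventional nudging can be sketched in one integrator: instead of relaxing the instantaneous state toward observations (which damps all frequencies), only a low-passed version of the state is relaxed toward the observed climatology. The exponential running mean below stands in for the paper's mean-band filter, and the annual-cycle band would be handled analogously with harmonic filters; all names and the Euler scheme are illustrative assumptions.

```python
import numpy as np

def integrate_freq_nudged(rhs, x0, t, obs_clim, gamma, tau=365.0):
    """Explicit Euler for dx/dt = rhs(x, t) - gamma*(xbar - obs_clim(t)),
    where xbar is an exponentially weighted running mean of the model state
    (timescale tau, same units as t). Only the slowly varying component is
    nudged, leaving high-frequency variability free."""
    dt = t[1] - t[0]
    x = np.empty_like(t)
    x[0] = x0
    xbar = x0
    for i in range(len(t) - 1):
        xbar += (dt / tau) * (x[i] - xbar)          # low-pass of model state
        forcing = -gamma * (xbar - obs_clim(t[i]))  # nudge the mean only
        x[i + 1] = x[i] + dt * (rhs(x[i], t[i]) + forcing)
    return x
```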
Hagen, Nils T.
2008-01-01
Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement. PMID:19107201
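Harmonic counting has a closed form: the i-th of N coauthors receives (1/i)/(1 + 1/2 + ... + 1/N), so each paper distributes exactly one credit in total. A minimal sketch:

```python
def harmonic_credit(n_authors):
    """Harmonic authorship credit: author at rank i of N gets
    (1/i) / (1 + 1/2 + ... + 1/N); credits sum to one per paper."""
    norm = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / i) / norm for i in range(1, n_authors + 1)]

print(harmonic_credit(4))  # [0.48, 0.24, 0.16, 0.12]
```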
Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.
2016-01-01
Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals. PMID:27789915
Deconstructing the smoking-preeclampsia paradox through a counterfactual framework.
Luque-Fernandez, Miguel Angel; Zoega, Helga; Valdimarsdottir, Unnur; Williams, Michelle A
2016-06-01
Although smoking during pregnancy may lead to many adverse outcomes, numerous studies have reported a paradoxical inverse association between maternal cigarette smoking during pregnancy and preeclampsia. Using a counterfactual framework, we aimed to explore the structure of this paradox as a consequence of selection bias. Using a case-control study nested in the Icelandic Birth Registry (1309 women), we show how this selection bias can be explored and corrected for. Cases were defined as any case of pregnancy-induced hypertension or preeclampsia occurring after 20 weeks' gestation, and controls as normotensive mothers who gave birth in the same year. First, we used directed acyclic graphs to illustrate the common bias structure. Second, we used classical logistic regression and mediation analytic methods for dichotomous outcomes to explore the structure of the bias. Lastly, we performed both deterministic and probabilistic sensitivity analyses to estimate the amount of bias due to an uncontrolled confounder and corrected for it. The biased effect of smoking was estimated to reduce the odds of preeclampsia by 28% (OR 0.72, 95% CI 0.52, 0.99) and, after stratification by gestational age at delivery (<37 vs. ≥37 gestation weeks), by 75% (OR 0.25, 95% CI 0.10, 0.68). In a mediation analysis, the natural indirect effect showed an OR > 1, revealing the structure of the paradox. The bias-adjusted estimate of the smoking effect on preeclampsia showed an OR of 1.22 (95% CI 0.41, 6.53). The smoking-preeclampsia paradox thus appears to be an example of (1) selection bias, most likely caused by studying cases prevalent at birth rather than all incident cases from conception in a pregnancy cohort, (2) omission of important confounders associated with both smoking and preeclampsia (preventing the outcome from developing), and (3) control for a collider (gestation weeks at delivery). Future studies need to consider these aspects when studying and interpreting the association between smoking and pregnancy outcomes.
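The selection-bias structure described here can be reproduced in a toy simulation: a truly harmful exposure appears protective once the analysis conditions on a collider-driven selection. All parameters below are illustrative inventions, not estimates from the Icelandic registry data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
smoke = rng.random(n) < 0.20                 # exposure
u = rng.random(n) < 0.15                     # unmeasured cause of preeclampsia
# True effect: smoking mildly increases preeclampsia risk (OR ~ 1.2).
pe = rng.random(n) < 0.01 * np.exp(0.2 * smoke + 2.5 * u)
# Selection into the analysis (e.g., pregnancy observed past 20 weeks):
# smoking and u both reduce the chance of selection -> collider structure.
p_sel = 1.0 / (1.0 + np.exp(-(3.0 - 2.5 * smoke - 2.5 * u)))
sel = rng.random(n) < p_sel

def odds_ratio(x, y):
    a, b = np.sum(x & y), np.sum(x & ~y)
    c, d = np.sum(~x & y), np.sum(~x & ~y)
    return (a * d) / (b * c)

print(odds_ratio(smoke, pe))                # ~1.2: true harmful effect
print(odds_ratio(smoke[sel], pe[sel]))      # <1: spurious "protective" effect
```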
Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Mather, Janice L.; Taylor, Shawn C.
2015-01-01
In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
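The root-sum-square combination step is simple enough to state in code. The component values below are illustrative placeholders, not figures from the paper.

```python
import math

def total_uncertainty(components):
    """Root-sum-square combination of independent uncertainty components
    (resolution, repeatability, hysteresis, drift, calibration standard)."""
    return math.sqrt(sum(u * u for u in components))

# Illustrative component uncertainties in scc/s (invented for the example):
u = dict(resolution=1e-10, repeatability=3e-10, hysteresis=1e-10,
         drift=2e-10, cal_standard=4e-10)
print(total_uncertainty(u.values()))  # ~5.6e-10 scc/s
```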
A re-examination of the effects of biased lineup instructions in eyewitness identification.
Clark, Steven E
2005-10-01
A meta-analytic review of research comparing biased and unbiased instructions in eyewitness identification experiments showed an asymmetry; specifically, that biased instructions led to a large and consistent decrease in accuracy in target-absent lineups, but produced inconsistent results for target-present lineups, with an average effect size near zero (Steblay, 1997). The results for target-present lineups are surprising, and are inconsistent with statistical decision theories (i.e., Green & Swets, 1966). A re-examination of the relevant studies and the meta-analysis of those studies shows clear evidence that correct identification rates do increase with biased lineup instructions, and that biased witnesses make correct identifications at a rate considerably above chance. Implications for theory, as well as police procedure and policy, are discussed.
An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method.
Fu, Jingyang; Li, Guangyun; Wang, Li
2018-04-27
Different from GPS, GLONASS, GALILEO and BeiDou-3, it is confirmed that the code multipath bias (CMB), which originates from the satellite end and can exceed 1 m, is commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. In order to mitigate its adverse effects on absolute precise applications that use the code measurements, we propose in this paper an improved correction model to estimate the CMB. Unlike the traditional model, which considers the correction values to be orbit-type dependent (estimating two sets of values, for IGSO and MEO respectively) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate the corrections for each BDS IGSO and MEO satellite individually on the one hand, and use a denser elevation node separation of 5° to model the CMB variations on the other. Currently, institutions such as IGS-MGEX operate over 120 stations that provide daily BDS observations. These large amounts of data provide adequate support to refine the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that, for satellites on the same orbit type, obvious differences can be found in the CMB at the same node and frequency. Results show that the new correction model can improve the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shorten the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improve the PPP positioning accuracy. With our improved correction model, the usage of WL ambiguities is increased from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR are shortened from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model. In addition, both the traditional and improved CMB models perform better in these aspects than a model that does not account for the CMB correction. PMID:29702559
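The correction geometry can be sketched directly: per-satellite CMB values estimated at 5° elevation nodes, interpolated piecewise-linearly, and subtracted from the code observation. The node values below are invented placeholders, not estimates from MGEX data.

```python
import numpy as np

elev_nodes = np.arange(0.0, 91.0, 5.0)   # 5-degree node separation, as in the paper
# One set of node corrections per BDS IGSO/MEO satellite (meters; illustrative).
node_corrections = 0.4 * np.cos(np.radians(elev_nodes))

def correct_code_observation(pseudorange_m, elevation_deg):
    """Subtract the piecewise-linearly interpolated CMB correction."""
    cmb = np.interp(elevation_deg, elev_nodes, node_corrections)
    return pseudorange_m - cmb
```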
Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction
Jian, Y; Planeta, B; Carson, R E
2016-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain study ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254
Evaluation of bias and variance in low-count OSEM list mode reconstruction
NASA Astrophysics Data System (ADS)
Jian, Y.; Planeta, B.; Carson, R. E.
2015-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) dataset and a NEMA phantom are used in simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found at extremely low count levels (0.2 M NEC). At any given count level, other factors, such as subset number and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of the point spread function and/or other implementation methods in MOLAR.
El Hadri, Hind; Petersen, Elijah J.; Winchester, Michael R.
2016-01-01
The effect of ICP-MS instrument sensitivity drift on the accuracy of nanoparticle (NP) size measurements using single particle (sp)ICP-MS is investigated. Theoretical modeling and experimental measurements of the impact of instrument sensitivity drift are in agreement and indicate that drift can bias the measured size of spherical NPs by up to 25%. Given this substantial bias, a method was developed using an internal standard to correct for the impact of drift; it was shown to accurately correct for a decrease in instrument sensitivity of up to 50% for 30 nm and 60 nm gold nanoparticles. PMID:26894759
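The cube-root relationship between detected particle mass and diameter explains the magnitude of the reported bias: a sensitivity loss scales the pulse area linearly, but the inferred diameter only by the cube root. The sketch below illustrates this relationship and a generic internal-standard rescaling; the function names and workflow details are assumptions, not the authors' exact procedure.

```python
def drift_size_bias(relative_sensitivity):
    """Fractional diameter bias caused by sensitivity drift.

    Diameter scales with the cube root of detected mass, so a relative
    sensitivity of 0.5 (response halved) biases diameter by 0.5**(1/3) - 1.
    """
    return relative_sensitivity ** (1.0 / 3.0) - 1.0

def correct_pulse_areas(raw_areas, internal_std_signal, internal_std_ref):
    """Rescale single-particle pulse areas by the drift measured on an
    internal standard (hypothetical workflow)."""
    drift = internal_std_signal / internal_std_ref
    return [a / drift for a in raw_areas]

print(drift_size_bias(0.5))  # ~ -0.21, i.e. roughly 21% undersizing
```

Under this simple scaling, a 50% sensitivity loss undersizes spheres by about 21%, of the same order as the up-to-25% bias reported above.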
McManus, I C; Elder, Andrew T; Dacre, Jane
2013-07-30
Bias of clinical examiners against some types of candidate, based on characteristics such as sex or ethnicity, would represent a threat to the validity of an examination, since sex and ethnicity are 'construct-irrelevant' characteristics. In this paper we report a novel method for assessing sex and ethnic bias in over 2000 examiners who had taken part in the PACES and nPACES (new PACES) examinations of the MRCP(UK). PACES and nPACES are clinical skills examinations that have two examiners at each station who mark candidates independently. Differences between examiners cannot be due to differences in the performance of a candidate, because that performance is the same for the two examiners, and hence may result from bias or unreliability on the part of the examiners. By comparing each examiner against a 'basket' of all of their co-examiners, it is possible to identify examiners whose behaviour is anomalous. The method assessed hawkishness-doveishness, sex bias, ethnic bias and, as a control condition to assess the statistical method, 'even-number bias' (i.e. treating candidates with odd and even exam numbers differently). Significance levels were Bonferroni-corrected because of the large number of examiners being considered. The results of 26 diets of PACES and six diets of nPACES were examined statistically to assess the extent of hawkishness, as well as sex and ethnicity bias, in individual examiners. The control (even-number) condition suggested that about 5% of examiners were significant at an (uncorrected) 5% level, and therefore that the method worked as expected. As in a previous study (BMC Medical Education, 2006, 6:42), some examiners were hawkish or doveish relative to their peers. No examiners showed significant sex bias, and only a single examiner showed evidence consistent with ethnic bias. A re-analysis of the data considering only one examiner per station, as would be the case for many clinical examinations, showed that analysis with a single examiner runs a serious risk of false-positive identifications, probably due to differences in case-mix and content-specificity. In examinations where there are two independent examiners at a station, our method can assess the extent of bias against candidates with particular characteristics. The method would be far less sensitive in examinations with only a single examiner per station, as examiner variance would be confounded with candidate performance variance. It works well, however, when there is more than one examiner at a station; in the case of the current MRCP(UK) clinical examination, nPACES, it found possible sex bias in no examiners and possible ethnic bias in only one.
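A minimal sketch of the basket-comparison logic follows: for each examiner, the paired difference between their marks and their co-examiners' marks for the same candidates is tested for dependence on a construct-irrelevant characteristic, with a Bonferroni-corrected threshold. This is our simplified reading of the approach, not the authors' exact statistical model.

```python
import numpy as np
from scipy import stats

def examiner_bias(marks, co_marks, is_female, n_examiners, alpha=0.05):
    """Basket comparison for one examiner (illustrative simplification).

    marks / co_marks: this examiner's marks and the co-examiners' marks
    for the same candidates; is_female: one flag per candidate.
    """
    diff = np.asarray(marks, float) - np.asarray(co_marks, float)
    f = np.asarray(is_female, bool)
    hawkishness = diff.mean()  # negative mean => marks below the basket
    # Sex bias: does the examiner-minus-basket difference depend on
    # candidate sex? Bonferroni correction over all examiners tested.
    t_stat, p = stats.ttest_ind(diff[f], diff[~f])
    return hawkishness, p < alpha / n_examiners
```

The same test applied to an arbitrary candidate characteristic (e.g. an even exam number) provides the control condition described above.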
Biased lineup instructions and face identification from video images.
Thompson, W Burt; Johnson, Jaime
2008-01-01
Previous eyewitness memory research has shown that biased lineup instructions reduce identification accuracy, primarily by increasing false-positive identifications in target-absent lineups. Because some attempts at identification do not rely on a witness's memory of the perpetrator but instead involve matching photos to images on surveillance video, the authors investigated the effects of biased instructions on identification accuracy in a matching task. In Experiment 1, biased instructions did not affect the overall accuracy of participants who used video images as an identification aid, but nearly all correct decisions occurred with target-present photo spreads. Both biased and unbiased instructions resulted in high false-positive rates. In Experiment 2, which focused on video-photo matching accuracy with target-absent photo spreads, unbiased instructions led to more correct responses (i.e., fewer false positives). These findings suggest that investigators should not relax precautions against biased instructions when people attempt to match photos to an unfamiliar person recorded on video.
NASA Astrophysics Data System (ADS)
Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.
2009-10-01
Due to inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on segmentation results. Segmentation and shading correction are coupled: we iteratively correct the shading effects based on the segmentation result and refine the segmentation by re-segmenting the shading-corrected image. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We exploit this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It performs well not only under visual inspection but also in numerical evaluation. The proposed method should be useful for further quantitative analysis, especially for comparing protein expression values.
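The coupled segment-then-correct loop can be sketched as follows, assuming a simple two-class threshold segmentation and a Gaussian-smoothed multiplicative gain field; the denoising step for the additive component is omitted, and the smoothing-based gain estimate stands in for the paper's fast intra-class-variance minimization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_shading(img, n_iter=5, sigma=50.0):
    """Iterative multiplicative shading correction (illustrative sketch)."""
    img = img.astype(float)
    corrected = img.copy()
    gain = np.ones_like(img)
    for _ in range(n_iter):
        # (i) crude two-class segmentation of the current estimate:
        cells = corrected > corrected.mean()
        # Piecewise-constant "ideal" image: each pixel set to its class
        # mean, i.e. zero intra-class intensity variation.
        ideal = np.where(cells, corrected[cells].mean(),
                         corrected[~cells].mean())
        # (ii) smooth multiplicative field mapping the ideal image to the
        # observed one; smoothing enforces a slowly varying shading field.
        gain = gaussian_filter(img / np.maximum(ideal, 1e-6), sigma)
        corrected = img / np.maximum(gain, 1e-6)
    return corrected, gain
```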
NASA Astrophysics Data System (ADS)
Kobinata, Hideo; Yamashita, Hiroshi; Nomura, Eiichi; Nakajima, Ken; Kuroki, Yukinori
1998-12-01
A new method for proximity effect correction, suitable for large-field electron-beam (EB) projection lithography at high accelerating voltage, such as SCALPEL and PREVAIL, when a stencil mask is used, is discussed. In this lithography, a large field is exposed with the same dose; thus, the dose modification method used in the variable-shaped beam and cell projection methods cannot be applied. In this study, we report the development of a new proximity effect correction method that uses a pattern-modified stencil mask suitable for high-accelerating-voltage, large-field EB projection lithography. To obtain the mask bias value, we investigated the linewidth reduction due to the proximity effect in the peripheral memory cell area and found that it could be expressed by a simple function, with all correction parameters easily determined from the mask pattern data alone. The proximity effect for the peripheral array pattern could also be corrected by considering the pattern density. The calculated linewidth deviation was 3% or less for a 0.07-µm-L/S memory cell pattern and, simultaneously, 5% or less for a 0.14-µm-line and 0.42-µm-space peripheral array pattern.
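For intuition, a generic density-based bias calculation is sketched below: the backscattered dose is approximated by the pattern density smoothed over the backscatter range, and each feature edge on the stencil mask is pre-biased in proportion. The kernel, constant and function names are illustrative assumptions, not the authors' calibrated simple function.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mask_bias_map(layout, beta_px, k_bias_nm):
    """Per-pixel mask edge bias from local pattern density (sketch).

    layout: binary array (1 = exposed area); beta_px: backscatter range in
    pixels; k_bias_nm: proportionality constant (nm of bias per unit
    density).
    """
    # Backscattered dose ~ pattern density averaged over the backscatter
    # range, approximated here by Gaussian smoothing.
    density = gaussian_filter(layout.astype(float), beta_px)
    # Pre-bias mask edges proportionally to the local backscatter dose.
    return k_bias_nm * density
```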
A method of undifferenced ambiguity resolution for GPS+GLONASS precise point positioning
Yi, Wenting; Song, Weiwei; Lou, Yidong; Shi, Chuang; Yao, Yibin
2016-01-01
Integer ambiguity resolution is critical for achieving positions of high precision and for shortening the convergence time of precise point positioning (PPP). However, GLONASS uses frequency division multiple access signal processing, which gives rise to inter-frequency code biases (IFCBs) that are currently difficult to correct. These biases make the methods proposed for GPS ambiguity fixing unsuitable for GLONASS. To realize undifferenced GLONASS ambiguity fixing, we propose an undifferenced ambiguity resolution method for GPS+GLONASS PPP that incorporates IFCB estimation. Experimental results demonstrate that the success rate of GLONASS ambiguity fixing can reach 75% with the proposed method. Compared with the ambiguity-float solutions, the positioning accuracies of the ambiguity-fixed solutions are increased by 12.2%, 20.9%, and 10.3% for GLONASS-only PPP, and by 13.0%, 35.2%, and 14.1% for GPS+GLONASS PPP, in the North, East and Up directions, respectively. PMID:27222361
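GLONASS FDMA assigns each satellite a frequency channel number k (roughly -7 to +6), and receiver-dependent IFCBs vary with k. A common simplification, sketched below, models the code IFCB as linear in k and estimates the slope from code residuals; the paper estimates IFCBs within the PPP solution itself, so this is an assumption-laden illustration rather than the authors' algorithm.

```python
import numpy as np

def estimate_ifcb_slope(code_residuals_m, channels_k):
    """Least-squares slope of code residuals vs channel number (sketch)."""
    k = np.asarray(channels_k, float)
    r = np.asarray(code_residuals_m, float)
    A = np.vstack([k, np.ones_like(k)]).T   # model: r = slope*k + offset
    slope, _offset = np.linalg.lstsq(A, r, rcond=None)[0]
    return slope

def apply_ifcb(pseudorange_m, channel_k, slope_m_per_channel):
    """Remove the channel-dependent code bias from one pseudorange."""
    return pseudorange_m - slope_m_per_channel * channel_k
```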
A method of undifferenced ambiguity resolution for GPS+GLONASS precise point positioning.
Yi, Wenting; Song, Weiwei; Lou, Yidong; Shi, Chuang; Yao, Yibin
2016-05-25
Integer ambiguity resolution is critical for achieving positions of high precision and for shortening the convergence time of precise point positioning (PPP). However, GLONASS uses frequency division multiple access signal processing, which gives rise to inter-frequency code biases (IFCBs) that are currently difficult to correct. These biases make the methods proposed for GPS ambiguity fixing unsuitable for GLONASS. To realize undifferenced GLONASS ambiguity fixing, we propose an undifferenced ambiguity resolution method for GPS+GLONASS PPP that incorporates IFCB estimation. Experimental results demonstrate that the success rate of GLONASS ambiguity fixing can reach 75% with the proposed method. Compared with the ambiguity-float solutions, the positioning accuracies of the ambiguity-fixed solutions are increased by 12.2%, 20.9%, and 10.3% for GLONASS-only PPP, and by 13.0%, 35.2%, and 14.1% for GPS+GLONASS PPP, in the North, East and Up directions, respectively.