Sample records for bias correction model

  1. Towards process-informed bias correction of climate change simulations

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Shepherd, Theodore G.; Widmann, Martin; Zappa, Giuseppe; Walton, Daniel; Gutiérrez, José M.; Hagemann, Stefan; Richter, Ingo; Soares, Pedro M. M.; Hall, Alex; Mearns, Linda O.

    2017-11-01

    Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.

  2. How does bias correction of RCM precipitation affect modelled runoff?

    NASA Astrophysics Data System (ADS)

    Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Vaze, J.; Evans, J. P.

    2014-09-01

    Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the Weather Research and Forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the tested methods are small in the modelling experiments here (and as reported in the literature), mainly because of the substantial corrections required and inconsistent errors over time (non-stationarity). The errors remaining in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome limitations of the RCM in simulating the precipitation sequence, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.

  3. How does bias correction of regional climate model precipitation affect modelled runoff?

    NASA Astrophysics Data System (ADS)

    Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Wang, B.; Vaze, J.; Evans, J. P.

    2015-02-01

    Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the Weather Research and Forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the methods are small in the modelling experiments here (and as reported in the literature), mainly due to the substantial corrections required and inconsistent errors over time (non-stationarity). The errors in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome limitations of the RCM in simulating the precipitation sequence, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
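
    The quantile-mapping approach that both records above identify as a top performer can be sketched as an empirical quantile lookup: each model value is assigned its quantile in the historical model distribution, and the observed value at that quantile is returned. The data below are synthetic and the implementation is a minimal sketch, not the authors' code:

```python
import numpy as np

def quantile_map(model_hist, obs, model_fut):
    """Map model values onto the observed distribution via their quantiles."""
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, quantiles)
    obs_q = np.quantile(obs, quantiles)
    # Look up each value's quantile in the historical model distribution,
    # then read off the observed value at that quantile.
    return np.interp(model_fut, model_q, obs_q)

rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.8, scale=6.0, size=3650)   # "observed" daily rainfall
model = obs * 1.4 + 1.0                            # model with a wet bias
corrected = quantile_map(model, obs, model)        # distribution matches obs
```

Applying the same mapping to a future model run (the `model_fut` argument) is what transfers the correction to projections, under the stationarity assumption the abstracts question.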

  4. EMC Global Climate And Weather Modeling Branch Personnel

    Science.gov Websites

    Comparison statistics include: NCEP raw and bias-corrected ensemble domain-averaged bias; NCEP raw and bias-corrected ensemble domain-averaged bias reduction (percent); CMC raw and bias-corrected control forecast domain-averaged bias; CMC raw and bias-corrected control forecast domain-averaged bias reduction.

  5. HESS Opinions "Should we apply bias correction to global and regional climate model data?"

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Zehe, E.; Wulfmeyer, V.; Warrach-Sagi, K.; Liebert, J.

    2012-04-01

    Despite considerable progress in recent years, output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem bias correction (BC), i.e. the correction of model output towards observations in a post-processing step for its subsequent application in climate change impact studies, has now become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Bias correction largely neglects feedback mechanisms, and it is unclear whether bias correction methods are time-invariant under climate change conditions. Applying bias correction increases the agreement of Climate Model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this masks rather than reduces uncertainty, and may lead end users and decision makers to premature conclusions. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction, and propose ways to cope with biased output of Circulation Models in the short term and to reduce the bias in the long term. 
The most promising strategy for improved future Global and Regional Circulation Model simulations is the increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating the entire uncertainty range associated with climate change predictions openly and hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological community and end users of climate change impact studies.

  6. Bias correction in the realized stochastic volatility model for daily volatility on the Tokyo Stock Exchange

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2018-06-01

    The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.
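
    The need for a bias correction at high sampling frequencies can be illustrated with a synthetic example: market microstructure noise adds a term to realized variance that grows with the number of intraday returns. This is a toy illustration of the bias itself, not the paper's realized stochastic volatility model, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 23400                      # one trading day at one-second resolution
true_var = 0.0001              # integrated variance of the efficient price
efficient = np.cumsum(rng.normal(0.0, np.sqrt(true_var / n), n))
noise = rng.normal(0.0, 0.0005, n)     # microstructure noise on each quote
observed = efficient + noise           # observed log-price

def realized_variance(log_price, step):
    """Sum of squared returns sampled every `step` observations."""
    r = np.diff(log_price[::step])
    return np.sum(r ** 2)

rv_low = realized_variance(observed, 300)   # ~5-minute sampling, mild bias
rv_high = realized_variance(observed, 1)    # 1-second sampling, large bias
```

At the high frequency the noise contributes roughly `2 * n * noise_var` on top of the integrated variance, which is why a frequency-dependent bias-correction factor (or parameter, as in the record above) is required.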

  7. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    PubMed

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline to account for it. Here we compared the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Overall, however, the systematic sampling of records seems to be the most efficient method for correcting sampling bias and can be recommended in most cases.

  8. Mapping Species Distributions with MAXENT Using a Geographically Biased Sample of Presence Data: A Performance Assessment of Methods for Correcting Sampling Bias

    PubMed Central

    Fourcade, Yoan; Engler, Jan O.; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline to account for it. Here we compared the performance of five methods of bias correction on three datasets of species occurrence: one “virtual” derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Overall, however, the systematic sampling of records seems to be the most efficient method for correcting sampling bias and can be recommended in most cases. PMID:24818607
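
    The best-performing "systematic sampling of records" can be sketched as spatial thinning: presence records are snapped to a regular grid and one record is kept per occupied cell, which evens out sampling effort. The grid size, coordinates, and clustering below are illustrative, not from the paper:

```python
import numpy as np

def systematic_sample(lon, lat, cell_deg=0.5):
    """Keep the first record falling in each cell of a regular lon/lat grid."""
    cells = np.stack([np.floor(lon / cell_deg), np.floor(lat / cell_deg)], axis=1)
    # np.unique on rows returns the index of the first record in each cell
    _, keep = np.unique(cells, axis=0, return_index=True)
    return np.sort(keep)

rng = np.random.default_rng(2)
# Heavily clustered sampling: many redundant records near a "road" at lat 45
lon = np.concatenate([rng.uniform(0, 1, 500), rng.uniform(0, 10, 100)])
lat = np.concatenate([rng.normal(45, 0.1, 500), rng.uniform(40, 50, 100)])
keep = systematic_sample(lon, lat)   # indices of the thinned presence set
```

The thinned indices (`keep`) would then be the records passed to MAXENT in place of the raw, clustered sample.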

  9. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…
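
    The generic bootstrap bias-correction recipe the record refers to estimates the bias as the difference between the mean of bootstrap-resample estimates and the sample estimate, then subtracts it out. The sketch below illustrates the recipe with the (downward-biased) variance MLE rather than the paper's noncentrality parameters:

```python
import numpy as np

def bootstrap_bias_correct(data, estimator, n_boot=2000, seed=0):
    """Return the bootstrap bias-corrected value of estimator(data)."""
    rng = np.random.default_rng(seed)
    theta_hat = estimator(data)
    boot = np.array([
        estimator(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_boot)
    ])
    bias = boot.mean() - theta_hat    # estimated bias of the estimator
    return theta_hat - bias           # equals 2*theta_hat - mean(boot)

rng = np.random.default_rng(42)
sample = rng.normal(0.0, 2.0, size=30)    # true variance = 4
biased = np.var(sample)                   # MLE divides by n, biased low
corrected = bootstrap_bias_correct(sample, np.var)
```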

  10. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by the 6-hr window, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring only a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. Since short-term error growth is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. 
This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
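
    The core of the scheme can be sketched with a toy scalar "model": the time-averaged analysis increment divided by the 6-hr window estimates the bias tendency, which then enters the tendency equation as a forcing term. The real scheme operates on full GFS fields; the numbers here are invented:

```python
import numpy as np

dt = 6.0                       # hours between analyses
true_tendency = 1.0            # "nature" tendency (units/hr)
model_tendency = 0.8           # biased model tendency

increments = []
x_analysis = 0.0
for _ in range(100):
    forecast = x_analysis + model_tendency * dt   # 6-hr model forecast
    truth = x_analysis + true_tendency * dt       # what actually happened
    increments.append(truth - forecast)           # analysis increment
    x_analysis = truth                            # assume a perfect analysis

# Time-averaged increment over the window estimates the bias tendency
bias_correction = np.mean(increments) / dt

# Online correction: the estimate is added as forcing to the tendency
corrected = x_analysis + (model_tendency + bias_correction) * dt
```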

  11. Using a bias aware EnKF to account for unresolved structure in an unsaturated zone model

    NASA Astrophysics Data System (ADS)

    Erdal, D.; Neuweiler, I.; Wollschläger, U.

    2014-01-01

    When predicting flow in the unsaturated zone, any method for modeling the flow will have to define how, and to what level, the subsurface structure is resolved. In this paper, we use the Ensemble Kalman Filter to assimilate local soil water content observations from both a synthetic layered lysimeter and a real field experiment in layered soil in an unsaturated water flow model. We investigate the use of colored noise bias corrections to account for unresolved subsurface layering in a homogeneous model and compare this approach with a fully resolved model. In both models, we use a simplified model parameterization in the Ensemble Kalman Filter. The results show that the use of bias corrections can increase the predictive capability of a simplified homogeneous flow model if the bias corrections are applied to the model states. If correct knowledge of the layering structure is available, the fully resolved model performs best. However, if no, or erroneous, layering is used in the model, the use of a homogeneous model with bias corrections can be the better choice for modeling the behavior of the system.
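
    The bias-aware filter idea can be sketched by augmenting the state vector with a colored-noise (AR(1)) bias term that the ensemble update estimates alongside the state. This is a generic scalar toy, not the authors' unsaturated-zone model; dynamics, coefficients, and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens, n_steps = 50, 200
true_bias = 0.5        # effect of the unresolved layering on the dynamics
obs_err = 0.1

# Augmented ensemble: column 0 = state, column 1 = colored-noise bias
ens = np.zeros((n_ens, 2))
ens[:, 1] = rng.normal(0.0, 0.3, n_ens)   # prior spread on the bias term

x_true = 0.0
for _ in range(n_steps):
    x_true = 0.9 * x_true + true_bias               # "real" biased dynamics
    ens[:, 0] = 0.9 * ens[:, 0] + ens[:, 1]         # homogeneous model + bias
    ens[:, 1] = 0.95 * ens[:, 1] + rng.normal(0.0, 0.02, n_ens)  # AR(1) bias
    y = x_true + rng.normal(0.0, obs_err)           # observe the state only
    H = np.array([1.0, 0.0])
    P = np.cov(ens.T)                               # 2x2 ensemble covariance
    K = P @ H / (H @ P @ H + obs_err ** 2)          # Kalman gain (2-vector)
    for i in range(n_ens):                          # perturbed-obs EnKF update
        ens[i] += K * (y + rng.normal(0.0, obs_err) - ens[i, 0])
```

Because the bias feeds the state forecast, the ensemble develops a state-bias covariance, so observations of the state alone pull the bias estimate toward the unresolved effect.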

  12. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. 
Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.
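
    The inter-variable idea can be sketched with a rank-reordering step (a Schaake-shuffle-like trick, not the authors' JBC method): each variable is first quantile-mapped independently, then the corrected values are reordered to follow the rank dependence of the observations, restoring the observed P-T correlation. All data are synthetic:

```python
import numpy as np

def quantile_map(model, obs):
    q = np.linspace(0.0, 1.0, 101)
    return np.interp(model, np.quantile(model, q), np.quantile(obs, q))

def reorder_to_obs_dependence(p_corr, t_corr, p_obs, t_obs):
    """Place sorted corrected values in the observed joint rank structure."""
    order_p = np.argsort(np.argsort(p_obs))   # ranks of observed P
    order_t = np.argsort(np.argsort(t_obs))   # ranks of observed T
    return np.sort(p_corr)[order_p], np.sort(t_corr)[order_t]

def rank_corr(a, b):
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(7)
n = 2000
z = rng.multivariate_normal([0, 0], [[1, -0.6], [-0.6, 1]], size=n)
p_obs, t_obs = np.exp(z[:, 0]), 15 + 5 * z[:, 1]        # correlated obs
m = rng.multivariate_normal([0, 0], [[1, 0], [0, 1]], size=n)
p_mod, t_mod = 2 * np.exp(m[:, 0]), 18 + 4 * m[:, 1]    # biased, uncorrelated

p_c, t_c = reorder_to_obs_dependence(
    quantile_map(p_mod, p_obs), quantile_map(t_mod, t_obs), p_obs, t_obs)
```

After reordering, the corrected pair reproduces the observed P-T rank correlation exactly, while each marginal distribution keeps its quantile-mapped values.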

  13. Causes of model dry and warm bias over central U.S. and impact on climate projections.

    PubMed

    Lin, Yanluan; Dong, Wenhao; Zhang, Minghua; Xie, Yuanyu; Xue, Wei; Huang, Jianbin; Luo, Yong

    2017-10-12

    Climate models show a conspicuous summer warm and dry bias over the central United States. Using results from 19 climate models in the Coupled Model Intercomparison Project Phase 5 (CMIP5), we report a persistent dependence of warm bias on dry bias, with the precipitation deficit leading the warm bias over this region. The precipitation deficit is associated with the widespread failure of models in capturing strong rainfall events in summer over the central U.S. A robust linear relationship between the projected warming and the present-day warm bias enables us to empirically correct future temperature projections. By the end of the 21st century under the RCP8.5 scenario, the corrections substantially narrow the intermodel spread of the projections and reduce the projected temperature by 2.5 K, resulting mainly from the removal of the warm bias. Instead of a sharp decrease, after this correction the projected precipitation is nearly neutral for all scenarios.
    Climate models repeatedly show a warm and dry bias over the central United States, but the origin of this bias remains unclear. Here the authors attribute this bias to precipitation deficits in the models and, after applying a correction, find that projected precipitation in this region shows no significant changes.
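
    The empirical correction described above can be sketched as an emergent-constraint-style regression: across the model ensemble, projected warming is regressed on present-day warm bias, and the bias-linked component is removed from each model's projection. The "ensemble" numbers below are synthetic, not the CMIP5 values from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
n_models = 19
bias = rng.normal(2.0, 1.0, n_models)    # present-day warm bias per model (K)
# Projected warming depends linearly on the bias, plus model noise
warming = 3.0 + 0.8 * bias + rng.normal(0.0, 0.3, n_models)

# Fit the across-model linear relationship, then remove the bias-linked part
slope, intercept = np.polyfit(bias, warming, 1)
corrected = warming - slope * bias       # corrected projections
```

Removing the fitted component both shifts the ensemble mean toward the zero-bias intercept and narrows the intermodel spread, mirroring the behavior the abstract reports.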

  14. Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island

    NASA Astrophysics Data System (ADS)

    Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.

    2018-04-01

    Rainfall is a climate element that strongly influences the agricultural sector. Rainfall pattern and distribution largely determine the sustainability of agricultural activities. Therefore, information on rainfall is very useful for the agriculture sector and for farmers in anticipating possible extreme events, which often cause failures of agricultural production. This research aims to identify the biases in seasonal rainfall forecast products from ECMWF (European Centre for Medium-Range Weather Forecasts) and to build a transfer function that corrects the distribution biases, yielding a new prediction model based on a quantile mapping approach. We apply this approach to the case of Bali Island and find that correcting the systematic biases of the model gives better results: the corrected prediction model outperforms the raw forecast. In general, the bias correction performs better during the rainy season than during the dry season.

  15. Statistical Downscaling and Bias Correction of Climate Model Outputs for Climate Change Impact Assessment in the U.S. Northeast

    NASA Technical Reports Server (NTRS)

    Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard

    2013-01-01

    Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8° spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreements among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.

  16. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements.

    PubMed

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-10-27

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a code correction model by estimating the BeiDou-satellite-induced biases as piece-wise linear functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm.
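
    The elevation-dependent part of such a code bias model can be sketched as a piece-wise linear function of elevation angle: node values are estimated from binned multipath residuals, then interpolated at arbitrary elevations. The bias shape and noise level below are illustrative, not the paper's estimated corrections:

```python
import numpy as np

rng = np.random.default_rng(11)
elev_nodes = np.arange(0.0, 91.0, 10.0)     # piece-wise node elevations (deg)
elev_obs = np.linspace(0.0, 90.0, 500)      # elevations of the observations
true_bias = 0.8 * np.exp(-elev_obs / 30.0)  # bias shrinking with elevation (m)
residuals = true_bias + rng.normal(0.0, 0.05, 500)   # noisy MP residuals

# Estimate each node value as the mean residual in a +/- 5 deg elevation bin
node_vals = np.array([
    residuals[np.abs(elev_obs - e) <= 5.0].mean() for e in elev_nodes
])

def code_bias_correction(elev):
    """Piece-wise linear correction evaluated at elevation `elev` (deg)."""
    return np.interp(elev, elev_nodes, node_vals)

corrected = residuals - code_bias_correction(elev_obs)
```

In the paper, such node values are estimated per satellite group and per frequency; the sketch keeps a single series for brevity.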

  17. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements

    PubMed Central

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-01-01

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a code correction model by estimating the BeiDou-satellite-induced biases as piece-wise linear functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm. PMID:29076998

  18. Statistical bias correction method applied on CMIP5 datasets over the Indian region during the summer monsoon season for climate change applications

    NASA Astrophysics Data System (ADS)

    Prasanna, V.

    2018-01-01

    This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulation results with respect to observation is discussed in detail. The non-linear statistical bias correction is a suitable bias correction method for climate change data because it is simple and does not add up artificial uncertainties to the impact assessment of climate change scenarios for climate change application studies (agricultural production changes) in the future. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the changing magnitude in future scenarios, varying from one model to the other. Two types of bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved bias correction method, a cumulative distribution function (CDF; Weibull distribution function)-based quantile-mapping algorithm. This study shows that the percentile-based quantile mapping method gives results similar to the CDF (Weibull)-based quantile mapping method, and both the methods are comparable. The bias correction is applied on temperature and precipitation variables for present climate and future projected data to make use of it in a simple statistical model to understand the future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project the changes in the agricultural yield over India from RCP4.5 and RCP8.5 scenarios. 
    The results revealed a better convergence of model projections in the bias corrected data compared to the uncorrected data. The study can be extended to localized regional domains aimed at understanding future changes in agricultural productivity with an agro-economy or a simple statistical model. The statistical model indicated that total food grain yield will increase over the Indian region in the future: by approximately 50 kg/ha for the RCP4.5 scenario from 2001 until the end of 2100, and by approximately 90 kg/ha for the RCP8.5 scenario over the same period. Many studies have used bias correction techniques; this study applies bias correction to future climate scenario data from CMIP5 models and uses the corrected data with crop statistics to estimate future crop yield changes over the Indian region.
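
    The CDF-based variant can be sketched by fitting a Weibull distribution to observed and modelled rainfall and mapping model values through the two fitted CDFs (model CDF to probability, then inverse observed CDF). The wet-day amounts below are synthetic, and the paper's exact fitting choices may differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
obs = stats.weibull_min.rvs(0.9, scale=8.0, size=4000, random_state=rng)
model = stats.weibull_min.rvs(0.9, scale=12.0, size=4000, random_state=rng)

# Fit Weibull distributions with the location fixed at zero
c_obs, _, s_obs = stats.weibull_min.fit(obs, floc=0.0)
c_mod, _, s_mod = stats.weibull_min.fit(model, floc=0.0)

def weibull_qm(x):
    """Map model values through fitted model CDF and inverse observed CDF."""
    p = stats.weibull_min.cdf(x, c_mod, scale=s_mod)
    return stats.weibull_min.ppf(p, c_obs, scale=s_obs)

corrected = weibull_qm(model)
```

A percentile-based (empirical) quantile mapping replaces the two fitted CDFs with empirical quantile tables; the abstract reports that the two variants give comparable results.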

  19. Streamflow Bias Correction for Climate Change Impact Studies: Harmless Correction or Wrecking Ball?

    NASA Astrophysics Data System (ADS)

    Nijssen, B.; Chegwidden, O.

    2017-12-01

    Projections of the hydrologic impacts of climate change rely on a modeling chain that includes estimates of future greenhouse gas emissions, global climate models, and hydrologic models. The resulting streamflow time series are used in turn as input to impact studies. While these flows can sometimes be used directly in these impact studies, many applications require additional post-processing to remove model errors. Water resources models and regulation studies are a prime example of this type of application. These models rely on specific flows and reservoir levels to trigger reservoir releases and diversions and do not function well if the unregulated streamflow inputs are significantly biased in time and/or amount. This post-processing step is typically referred to as bias-correction, even though this step corrects not just the mean but the entire distribution of flows. Various quantile-mapping approaches have been developed that adjust the modeled flows to match a reference distribution for some historic period. Simulations of future flows are then post-processed using this same mapping to remove hydrologic model errors. These streamflow bias-correction methods have received far less scrutiny than the downscaling and bias-correction methods that are used for climate model output, mostly because they are less widely used. However, some of these methods introduce large artifacts in the resulting flow series, in some cases severely distorting the climate change signal that is present in future flows. In this presentation, we discuss our experience with streamflow bias-correction methods as part of a climate change impact study in the Columbia River basin in the Pacific Northwest region of the United States. To support this discussion, we present a novel way to assess whether a streamflow bias-correction method is merely a harmless correction or is more akin to taking a wrecking ball to the climate change signal.

  20. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India for assessing the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall compared to using monthly factors. The methods such as local intensity scaling, modified power transformation and distribution mapping, which adjusted the wet day frequencies, performed better than the methods that did not consider adjustment of wet day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of observed data with NSE values above 0.81 over most parts of India. 
Hydrological simulations forced with the bias corrected rainfall (distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those simulated with the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and consequently the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors can be preferred for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
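
The sliding-window idea can be illustrated with simple multiplicative scaling: a correction factor is derived for each day of the year from a window pooled across all years, instead of one factor per calendar month. A sketch on synthetic data (the paper applies the windowing to five different correction methods, not only linear scaling):

```python
import numpy as np

def daily_scaling_factors(obs, mod, half_window=15):
    """One multiplicative factor per day-of-year, estimated from a
    sliding +/- half_window-day window pooled over all years."""
    n_years, n_days = obs.shape  # arrays shaped (year, day-of-year)
    factors = np.empty(n_days)
    for d in range(n_days):
        # window indices, wrapping around the year boundary
        idx = np.arange(d - half_window, d + half_window + 1) % n_days
        o = obs[:, idx].mean()
        m = mod[:, idx].mean()
        factors[d] = o / m if m > 0 else 1.0
    return factors

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 3.0, (20, 365))
mod = obs * 1.3                       # toy model: 30% too wet everywhere
f = daily_scaling_factors(obs, mod)
corrected = mod * f                   # factor broadcast over years
```

With a ±15-day window each daily factor is estimated from 31 × n_years values, which keeps the sample size comparable to a monthly factor while letting the correction vary smoothly through the year.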

  1. Detection and Attribution of Simulated Climatic Extreme Events and Impacts: High Sensitivity to Bias Correction

    NASA Astrophysics Data System (ADS)

    Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.

    2015-12-01

Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme, based on a large regional climate model ensemble generated by the distributed weather@home setup[1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties. 
The present study constitutes a first attempt to alleviate climate model biases in a physically consistent way, and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/

  2. A Realization of Bias Correction Method in the GMAO Coupled System

    NASA Technical Reports Server (NTRS)

    Chang, Yehui; Koster, Randal; Wang, Hailan; Schubert, Siegfried; Suarez, Max

    2018-01-01

Over the past several decades, a tremendous effort has been made to improve model performance in the simulation of the climate system. The cold or warm sea surface temperature (SST) bias in the tropics is still a problem common to most coupled ocean-atmosphere general circulation models (CGCMs). The precipitation biases in CGCMs are also accompanied by SST and surface wind biases. The deficiencies and biases over the equatorial oceans, through their influence on the Walker circulation, likely contribute to the precipitation biases over land surfaces. In this study, we introduce an approach in CGCM modeling to correct model biases. This approach utilizes the history of the model's short-term forecasting errors and their seasonal dependence to modify the model's tendency term and to minimize its climate drift. The study shows that such an approach removes most of the model's climate biases. A number of other aspects of the model simulation (e.g. extratropical transient activity) are also improved considerably due to the imposed pre-processed initial 3-hour model drift corrections. Because many regional biases in the GEOS-5 CGCM are common amongst other current models, our approaches and findings are applicable to these other models as well.

  3. Impact of a statistical bias correction on the projected simulated hydrological changes obtained from three GCMs and two hydrology models

    NASA Astrophysics Data System (ADS)

    Hagemann, Stefan; Chen, Cui; Haerter, Jan O.; Gerten, Dieter; Heinke, Jens; Piani, Claudio

    2010-05-01

Future climate model scenarios depend crucially on their adequate representation of the hydrological cycle. Within the European project "Water and Global Change" (WATCH), special care is taken to couple state-of-the-art climate model output to a suite of hydrological models. This coupling is expected to lead to a better assessment of changes in the hydrological cycle. However, due to the systematic model errors of climate models, their output is often not directly applicable as input for hydrological models. Thus, a methodology for statistical bias correction has been developed, which can be used for correcting climate model output to produce internally consistent fields that have the same statistical intensity distribution as the observations. As observations, global re-analysed daily data of precipitation and temperature obtained in the WATCH project are used. We will apply the bias correction to global climate model data of precipitation and temperature from the GCMs ECHAM5/MPIOM, CNRM-CM3 and LMDZ-4, and intercompare the bias corrected data with the original GCM data and the observations. Then, the original and the bias corrected GCM data will be used to force two global hydrology models: (1) the hydrological model of the Max Planck Institute for Meteorology (MPI-HM), consisting of the Simplified Land surface (SL) scheme and the Hydrological Discharge (HD) model, and (2) the dynamic vegetation model LPJmL, operated by the Potsdam Institute for Climate Impact Research. The impact of the bias correction on the projected simulated hydrological changes will be analysed, and the resulting behaviour of the two hydrology models will be compared.
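
Matching the "statistical intensity distribution" of daily precipitation is commonly done by fitting gamma distributions to observed and modelled historical intensities and mapping values through the two fitted CDFs. The following is a sketch of that idea, not the WATCH implementation; gamma-distributed wet-day intensities are an assumption of the sketch:

```python
import numpy as np
from scipy import stats

def gamma_mapping(model, model_hist, obs_hist):
    """Parametric intensity-distribution correction: fit gamma
    distributions to observed and modelled historical data, then
    map model values through the two CDFs."""
    a_m, _, s_m = stats.gamma.fit(model_hist, floc=0)  # model fit
    a_o, _, s_o = stats.gamma.fit(obs_hist, floc=0)    # obs fit
    cdf = stats.gamma.cdf(model, a_m, scale=s_m)
    return stats.gamma.ppf(cdf, a_o, scale=s_o)

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 4.0, 4000)   # "observed" daily intensities (toy)
mod = rng.gamma(3.0, 5.0, 4000)   # biased model intensities (toy)
corrected = gamma_mapping(mod, mod, obs)
```

Because the correction is defined by the two fitted distributions rather than by empirical quantiles, it extrapolates smoothly beyond the largest historical value, which is one reason parametric mappings are popular for daily precipitation.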

  4. Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method

    NASA Astrophysics Data System (ADS)

    Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil

    2014-05-01

Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR), led by UN ESCWA, CORDEX RCM projections for the Middle East North Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of the newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates showing pronounced dry and wet seasons. Results show that the regional climate models simulate temperatures that are too low and often have a displaced rainfall band compared to WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the analysis of the influence of the dry-frequency bias (i.e. climate models simulating too few rain days) on the bias-corrected projections and on the modification of the climate change signal by the DBS method.
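
Distribution-based scaling methods typically begin by calibrating a wet-day threshold so that the model's wet-day frequency matches the observed one, before quantile mapping the wet-day amounts. A minimal sketch of that pre-step on toy data (the threshold logic and names are ours; the toy model here has too many wet days, the opposite of the dry-frequency bias discussed above, but the calibration is the same):

```python
import numpy as np

def calibrate_wet_threshold(obs, mod, wet_amount=0.1):
    """Pick a model threshold so that the fraction of model days at or
    above it equals the observed wet-day fraction (days >= wet_amount)."""
    obs_wet_frac = np.mean(obs >= wet_amount)
    # model quantile leaving the same fraction of days above it
    return np.quantile(mod, 1.0 - obs_wet_frac)

rng = np.random.default_rng(3)
# "observed" series: ~40% wet days; toy model: ~60% wet days (drizzle)
obs = np.where(rng.random(5000) < 0.4, rng.gamma(2, 3, 5000), 0.0)
mod = np.where(rng.random(5000) < 0.6, rng.gamma(2, 2, 5000), 0.0)
thr = calibrate_wet_threshold(obs, mod)
mod_adj = np.where(mod >= thr, mod, 0.0)  # set sub-threshold days dry
```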

  5. Practical estimate of gradient nonlinearity for implementation of apparent diffusion coefficient bias correction.

    PubMed

    Malkyarenko, Dariya I; Chenevert, Thomas L

    2014-12-01

To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. Empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets, both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-average trace image correction was adequate only for isotropic media. Empirical scalar adjustment of an independent gradient nonlinearity model adequately described DWI bias for a clinical scanner. The observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
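
The corrected-b-value route can be illustrated with the standard two-point ADC formula: gradient nonlinearity scales the delivered gradient at a given offset by a spatial scalar c, so the effective b-value is b·c², and evaluating ADC with the corrected b-value removes the bias. The numbers below are purely illustrative assumptions, not the paper's measurements:

```python
import numpy as np

def adc(signal_b, signal_0, b):
    """Apparent diffusion coefficient from a two-point DWI acquisition."""
    return -np.log(signal_b / signal_0) / b

b_nominal = 1000.0   # prescribed b-value, s/mm^2
c = 0.9              # assumed nonlinearity scalar at this offset
true_adc = 1.1e-3    # mm^2/s, roughly the ice-water reference value

# Signal actually measured: the scanner delivers b * c**2, not b.
s0 = 1.0
sb = s0 * np.exp(-b_nominal * c**2 * true_adc)

biased = adc(sb, s0, b_nominal)            # nominal b -> underestimates ADC
corrected = adc(sb, s0, b_nominal * c**2)  # corrected b-value recovers ADC
```

In this single-voxel toy the bias is a factor of c² ≈ 0.81, i.e. about 20% low, consistent in magnitude with the offsets quoted in the abstract.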

  6. Non-stationary Bias Correction of Monthly CMIP5 Temperature Projections over China using a Residual-based Bagging Tree Model

    NASA Astrophysics Data System (ADS)

    Yang, T.; Lee, C.

    2017-12-01

The biases in General Circulation Models (GCMs) are crucial for understanding future climate changes. Currently, most bias correction methodologies suffer from the assumption that model bias is stationary. This paper provides a non-stationary bias correction model, termed the Residual-based Bagging Tree (RBT) model, to reduce simulation biases and to quantify the contributions of single models. Specifically, the proposed model estimates the residuals between individual models and observations, and takes the differences between observations and the ensemble mean into consideration during the model training process. A case study is conducted for 10 major river basins in Mainland China during different seasons. Results show that the proposed model is capable of providing accurate and stable predictions while including non-stationarities in the modeling framework. Significant reductions in both bias and root mean squared error are achieved with the proposed RBT model, especially for the central and western parts of China. The proposed RBT model has consistently better performance in reducing biases when compared to the raw ensemble mean, the ensemble mean with simple additive bias correction, and the single best model for different seasons. Furthermore, the contribution of each single GCM in reducing the overall bias is quantified. The single model importance varies between 3.1% and 7.2%. For the different future scenarios (RCP 2.6, RCP 4.5, and RCP 8.5), the results from the RBT model suggest temperature increases of 1.44 ºC, 2.59 ºC, and 4.71 ºC by the end of the century, respectively, when compared to the average temperature during 1970-1999.
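
The residual-learning idea can be sketched with off-the-shelf bagged trees: learn the observation-minus-ensemble-mean residual from the individual model outputs, then add the predicted residual back to the ensemble mean. This is a toy illustration of the concept with scikit-learn, not the authors' RBT implementation:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
n = 600
models = rng.normal(15.0, 2.0, (n, 5))   # 5 toy "GCM" monthly temperatures
ens_mean = models.mean(axis=1)
# Synthetic truth whose bias depends on two of the models
obs = ens_mean + 1.5 + 0.5 * models[:, 0] - 0.5 * models[:, 1]

resid = obs - ens_mean                   # target: residual of ensemble mean
rbt = BaggingRegressor(DecisionTreeRegressor(max_depth=4),
                       n_estimators=50, random_state=0)
rbt.fit(models[:400], resid[:400])       # "historical" training period
corrected = ens_mean[400:] + rbt.predict(models[400:])

raw_rmse = np.sqrt(np.mean((ens_mean[400:] - obs[400:]) ** 2))
cor_rmse = np.sqrt(np.mean((corrected - obs[400:]) ** 2))
```

Because the trees condition on the individual model outputs rather than on time alone, the learned correction can vary with the state of the ensemble, which is one way a method of this family relaxes the stationarity assumption.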

  7. Bias correction of temperature produced by the Community Climate System Model using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Moghim, S.; Hsu, K.; Bras, R. L.

    2013-12-01

General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results that affect their use. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations during a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF, Li et al. 2010). The proposed method is tested with Community Climate System Model (CCSM3) outputs, using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled temperature and the observed one. The results indicate that EDCDFANN has the potential to remove the biases of the model outputs.
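
The EDCDF half of the procedure (Li et al. 2010) adjusts each future value by the observed-minus-modelled quantile difference evaluated at that value's own quantile, rather than using a fixed historical mapping. A numpy sketch on synthetic temperatures (the ANN surrogate itself is not reproduced here):

```python
import numpy as np

def edcdf(future, model_hist, obs_hist, n_q=200):
    """Equidistant CDF matching: x_adj = x + F_obs^-1(p) - F_mod^-1(p),
    where p is the quantile of x within the future distribution."""
    q = np.linspace(0.0, 1.0, n_q)
    fut_q = np.quantile(future, q)
    p = np.interp(future, fut_q, q)      # quantile of each future value
    return (future
            + np.interp(p, q, np.quantile(obs_hist, q))
            - np.interp(p, q, np.quantile(model_hist, q)))

rng = np.random.default_rng(5)
obs = rng.normal(288.0, 5.0, 3000)   # observed temperature (K), toy data
hist = obs + 2.0                     # model historical run: warm bias
fut = hist + 1.0                     # future run keeps the bias, +1 K signal
adj = edcdf(fut, hist, obs)
```

In this toy setup the constant 2 K bias is removed while the 1 K change signal is preserved, which is the property that distinguishes EDCDF from plain quantile mapping when the future distribution shifts.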

  8. The Detection and Correction of Bias in Student Ratings of Instruction.

    ERIC Educational Resources Information Center

    Haladyna, Thomas; Hess, Robert K.

    1994-01-01

    A Rasch model was used to detect and correct bias in Likert rating scales used to assess student perceptions of college teaching, using a database of ratings. Statistical corrections were significant, supporting the model's potential utility. Recommendations are made for a theoretical rationale and further research on the model. (Author/MSE)

  9. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
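
The first of the two steps can be sketched with FastICA: transform the multi-site series into statistically independent components, correct each component as a univariate series, and back-transform. In this toy sketch the per-component correction is a placeholder (simple mean removal) standing in for a proper univariate bias correction, and the data are synthetic:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
n_time, n_sites = 2000, 4
sources = rng.laplace(size=(n_time, n_sites))   # independent signals
mixing = rng.normal(size=(n_sites, n_sites))
# Spatially dependent, biased "model" series at 4 grid cells
model = sources @ mixing.T + 3.0

ica = FastICA(n_components=n_sites, random_state=0)
comps = ica.fit_transform(model)                # independent components
# Univariate correction applied per component (placeholder: de-meaning;
# in practice each component would be quantile mapped against obs)
comps_bc = comps - comps.mean(axis=0)
corrected = ica.inverse_transform(comps_bc)     # back to grid space
```

The back-transform restores the spatial dependence structure encoded in the mixing matrix, which is what a grid-by-grid correction cannot do; the second, grid-scale correction step of the paper would then follow.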

  10. The impact of climatological model biases in the North Atlantic jet on predicted future circulation change

    NASA Astrophysics Data System (ADS)

    Simpson, I.

    2015-12-01

A long-standing bias among global climate models (GCMs) is their incorrect representation of the wintertime circulation of the North Atlantic region. Specifically, models tend to exhibit a North Atlantic jet (and associated storm track) that is too zonal, extending across central Europe, when it should tilt northward toward Scandinavia. GCMs consistently predict substantial changes in the large-scale circulation in this region, consisting of a localized anti-cyclonic circulation centered over the Mediterranean, accompanied by increased aridity there and increased storminess over Northern Europe. Here, we present preliminary results from experiments designed to address the question of what impact the climatological circulation biases might have on this predicted future response. Climate change experiments will be compared in two versions of the Community Earth System Model: the first is a free-running version of the model, as typically used in climate prediction; the second is a bias-corrected version of the model in which a seasonally varying cycle of bias correction tendencies is applied to the wind and temperature fields. These bias correction tendencies are designed to account for deficiencies in the fast parameterized processes, with the aim of pushing the model toward a more realistic climatology. While these experiments come with the caveat that they assume the bias correction tendencies will remain constant with time, they allow for an initial assessment, through controlled experiments, of the impact that biases in the climatological circulation can have on future predictions in this region. They will also motivate future work that can make use of the bias correction tendencies to understand the underlying physical processes responsible for the incorrect tilt of the jet.

  11. Characterizing bias correction uncertainty in wheat yield predictions

    NASA Astrophysics Data System (ADS)

    Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam

    2017-04-01

Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex Global and Regional Climate Models, which are used as input to crop impact models. Yield predictions from these top-down approaches can have high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too uncertain for their intended use in practical adaptation options. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps used in impact studies to adjust climate model simulations to be more realistic compared to observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input to impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from Regional Climate Models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions, together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping), as input to a statistical crop model for wheat, a staple European food crop. 
The objective of our work is to compare the resulting simulation-driven wheat yield hindcasts to observation-driven wheat yield hindcasts from the UK and Germany, in order to determine the ranges of yield uncertainty that result from different climate model simulation inputs and bias correction methods. We simulate wheat yields using a General Linear Model that includes the effects of seasonal maximum temperatures and precipitation, since wheat is sensitive to heat stress during important developmental stages. We use the same statistical model to predict future wheat yields using the recently available bias-corrected simulations of EURO-CORDEX-Adjust. While statistical models are often criticized for their lack of complexity, an advantage is that we are able to isolate the effect of the choice of climate model, resolution or bias correction method on yield. Initial results using both past and future bias-corrected climate simulations with a process-based model will also be presented. Through these methods, we make recommendations for preparing climate model output for crop models.
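
Of the correction methods listed, variance scaling is the simplest to state: shift the mean to match observations, then rescale the anomalies so the corrected variance matches too. A sketch on synthetic seasonal temperatures (operational versions would apply it month by month):

```python
import numpy as np

def variance_scaling(model, model_hist, obs_hist):
    """Linear scaling of the mean plus rescaling of the variability,
    as in the variance-scaling correction for temperature."""
    shifted = model - model_hist.mean() + obs_hist.mean()
    scale = obs_hist.std() / model_hist.std()
    return obs_hist.mean() + (shifted - obs_hist.mean()) * scale

rng = np.random.default_rng(7)
obs = rng.normal(14.0, 3.0, 3000)   # observed seasonal T_max (toy, deg C)
mod = rng.normal(16.5, 4.5, 3000)   # warm, over-dispersed model
corr = variance_scaling(mod, mod, obs)
```

Unlike quantile mapping, this adjusts only the first two moments, so any skewness bias in the model distribution survives the correction; that difference is one source of the method-choice uncertainty quantified in the study.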

  12. Can quantile mapping improve precipitation extremes from regional climate models?

    NASA Astrophysics Data System (ADS)

    Tani, Satyanarayana; Gobiet, Andreas

    2015-04-01

The ability of quantile mapping to accurately bias correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.

  13. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    NASA Astrophysics Data System (ADS)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at 0.11° (approximately 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as an objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets, which works by selecting and recombining high-performing parameter sets with each other. Once HBV Light is calibrated, we then perform a quantitative comparison of the influence of biases inherited from climate model simulations with the biases stemming from the hydrological model. 
The evaluation is conducted over two time periods: i) 1980-2009, to characterize the simulation realism under the current climate, and ii) 2070-2099, to identify the magnitude of the projected change of streamflow under the climate scenarios RCP4.5 and RCP8.5. We utilize two techniques for correcting biases in the climate model output: quantile mapping and a new method, frequency bias correction (FBC). The FBC method matches the frequencies between observed and GCM-RCM data. In this way, it can be used to correct for all time scales, which is a known limitation of quantile mapping. A novel approach for the evaluation of the climate simulations and bias correction methods was then applied. Streamflow can be thought of as the "great integrator" of uncertainties. The ability, or the lack thereof, to correctly simulate streamflow is a way to assess the realism of the bias-corrected climate simulations. Long-term monthly means as well as high and low flow metrics are used to evaluate the realism of the simulations under the current climate and to gauge the impacts of climate change on streamflow. Preliminary results show that under the present climate, calibration of the hydrological model contributes a much smaller band of uncertainty in the modeling chain than the bias correction of the GCM-RCMs. Therefore, for future time periods, we expect the bias correction of climate model data to have a greater influence on projected changes in streamflow than the calibration of the hydrological model.

  14. Explanation of Two Anomalous Results in Statistical Mediation Analysis

    ERIC Educational Resources Information Center

    Fritz, Matthew S.; Taylor, Aaron B.; MacKinnon, David P.

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special…

  15. Use of bias correction techniques to improve seasonal forecasts for reservoirs - A case-study in northwestern Mediterranean.

    PubMed

    Marcos, Raül; Llasat, Ma Carmen; Quintana-Seguí, Pere; Turco, Marco

    2018-01-01

In this paper, we have compared different bias correction methodologies to assess whether they could be advantageous for improving the performance of a seasonal prediction model for volume anomalies in the Boadella reservoir (northwestern Mediterranean). The bias correction adjustments have been applied to precipitation and temperature from the European Centre for Medium-Range Weather Forecasts seasonal forecasting System 4 (S4). We have used three bias correction strategies: two linear (mean bias correction, BC, and linear regression, LR) and one non-linear (Model Output Statistics analogs, MOS-analog). The results have been compared with climatology and persistence. The volume-anomaly model is a previously computed Multiple Linear Regression that ingests precipitation, temperature and in-flow anomaly data to simulate monthly volume anomalies. The potential utility for end-users has been assessed using economic value curve areas. We have studied the S4 hindcast period 1981-2010 for each month of the year and up to seven months ahead, considering an ensemble of 15 members. We have shown that the MOS-analog and LR bias corrections can improve on the original S4. The application to volume anomalies points towards the possibility of introducing bias correction methods as a tool to improve water resource seasonal forecasts in an end-user context of climate services. In particular, the MOS-analog approach generally gives better results than the other approaches in late autumn and early winter. Copyright © 2017 Elsevier B.V. All rights reserved.
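
The two simplest strategies compared here, mean bias correction (BC) and an analog approach, can be sketched on toy hindcasts. The MOS-analog method used in the study is more elaborate than this nearest-neighbour version, which only illustrates the principle (all names and numbers below are illustrative):

```python
import numpy as np

def mean_bias_correct(forecast, hindcasts, obs):
    """BC: remove the mean hindcast error from a new forecast."""
    return forecast + (obs.mean() - hindcasts.mean())

def mos_analog(forecast, hindcasts, obs):
    """Analog correction: return the observation paired with the
    most similar past forecast (1-D nearest neighbour)."""
    i = np.argmin(np.abs(hindcasts - forecast))
    return obs[i]

rng = np.random.default_rng(8)
obs = rng.normal(10.0, 2.0, 300)              # verifying observations
hind = obs + 1.2 + rng.normal(0, 0.1, 300)    # biased but skilful hindcasts
fc = 12.0                                     # a new raw forecast
bc = mean_bias_correct(fc, hind, obs)
analog = mos_analog(fc, hind, obs)
```

The analog route corrects conditionally on the forecast value, which is why a non-linear method of this family can outperform a constant mean shift when the bias depends on the forecast regime, as the abstract reports for late autumn and early winter.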

  16. Quantile Mapping Bias correction for daily precipitation over Vietnam in a regional climate model

    NASA Astrophysics Data System (ADS)

    Trinh, L. T.; Matsumoto, J.; Ngo-Duc, T.

    2017-12-01

In the past decades, Regional Climate Models (RCMs) have developed significantly, allowing climate simulations to be conducted at higher resolutions. However, RCMs often contain biases when compared with observations. Therefore, statistical correction methods are commonly employed to reduce/minimize the model biases. In this study, outputs of the Regional Climate Model (RegCM) version 4.3 driven by the CNRM-CM5 global products were evaluated with and without the Quantile Mapping (QM) bias correction method. The model domain covered the area from 90°E to 145°E and from 15°S to 40°N with a horizontal resolution of 25 km. The QM bias correction was implemented using the Vietnam Gridded precipitation dataset (VnGP) and the outputs of the RegCM historical run in the period 1986-1995, and then validated for the period 1996-2005. Based on statistics of spatial correlation and intensity distributions, the QM method showed a significant improvement in rainfall compared to the non-bias-corrected output. The improvements in both time and space were recognized in all seasons and all climatic sub-regions of Vietnam. Moreover, not only the rainfall amount but also extreme indices such as R10mm, R20mm, R50mm, CDD, CWD, R95pTOT and R99pTOT were much better after the correction. The results suggest that the QM correction method should be adopted for projections of future precipitation over Vietnam.

  17. Are we using the right fuel to drive hydrological models? A climate impact study in the Upper Blue Nile

    NASA Astrophysics Data System (ADS)

    Liersch, Stefan; Tecklenburg, Julia; Rust, Henning; Dobler, Andreas; Fischer, Madlen; Kruschke, Tim; Koch, Hagen; Fokko Hattermann, Fred

    2018-04-01

    Climate simulations are the fuel to drive hydrological models that are used to assess the impacts of climate change and variability on hydrological parameters, such as river discharges, soil moisture, and evapotranspiration. Unlike with cars, where we know which fuel the engine requires, we never know in advance what unexpected side effects might be caused by the fuel we feed our models with. Sometimes we increase the fuel's octane number (bias correction) to achieve better performance, only to find that the model behaves differently, and not always as expected or desired. This study investigates the impacts of projected climate change on the hydrology of the Upper Blue Nile catchment using two model ensembles consisting of five global CMIP5 Earth system models and 10 regional climate models (CORDEX Africa). WATCH forcing data were used to calibrate an eco-hydrological model and to bias-correct both model ensembles using slightly differing approaches. On the one hand, the bias correction methods considerably improved the representation of average rainfall characteristics in the reference period (1970-1999) in most cases. This also holds true for non-extreme discharge conditions between Q20 and Q80. On the other hand, bias-corrected simulations tend to overemphasize the magnitudes of projected change signals and extremes. A general weakness of both uncorrected and bias-corrected simulations is the rather poor representation of high and low flows and their extremes, which bias correction often degraded further. This inaccuracy is a crucial deficiency for regional impact studies dealing with water management issues, and it is therefore important to analyse model performance and characteristics and the effect of bias correction, and if necessary to exclude some climate models from the ensemble.
However, the multi-model means of all ensembles project increasing average annual discharges in the Upper Blue Nile catchment and a shift in seasonal patterns, with decreasing discharges in June and July and increasing discharges from August to November.

  18. A new dynamical downscaling approach with GCM bias corrections and spectral nudging

    NASA Astrophysics Data System (ADS)

    Xu, Zhongfeng; Yang, Zong-Liang

    2015-04-01

    To improve confidence in regional projections of future climate, a new dynamical downscaling (NDD) approach with both general circulation model (GCM) bias corrections and spectral nudging is developed and assessed over North America. GCM biases are corrected by adjusting GCM climatological means and variances based on reanalysis data before the GCM output is used to drive a regional climate model (RCM). Spectral nudging is also applied to constrain RCM-based biases. Three sets of RCM experiments are integrated over a 31 year period. In the first set of experiments, the model configurations are identical except that the initial and lateral boundary conditions are derived from either the original GCM output, the bias-corrected GCM output, or the reanalysis data. The second set of experiments is the same as the first set except spectral nudging is applied. The third set of experiments includes two sensitivity runs with both GCM bias corrections and nudging where the nudging strength is progressively reduced. All RCM simulations are assessed against the North American Regional Reanalysis. The results show that NDD significantly improves the downscaled mean climate and climate variability relative to other GCM-driven RCM downscaling approaches in terms of climatological mean air temperature, geopotential height, wind vectors, and surface air temperature variability. In the NDD approach, spectral nudging introduces the effects of GCM bias corrections throughout the RCM domain rather than just limiting them to the initial and lateral boundary conditions, thereby minimizing climate drifts resulting from both the GCM and RCM biases.
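The GCM bias-correction step described here, adjusting the climatological mean and variance toward reanalysis, can be sketched as follows; the series are synthetic stand-ins rather than actual GCM or reanalysis fields:

```python
import numpy as np

def correct_mean_variance(gcm, reanalysis):
    """Rescale GCM output so that its climatological mean and variance
    match the reanalysis reference (one grid point, one variable)."""
    gcm = np.asarray(gcm, dtype=float)
    scale = reanalysis.std() / gcm.std()
    return reanalysis.mean() + scale * (gcm - gcm.mean())

rng = np.random.default_rng(2)
reanalysis = rng.normal(285.0, 4.0, 1000)   # reference air temperature (K)
gcm = rng.normal(288.0, 6.0, 1000)          # warm, over-variable GCM series
corrected = correct_mean_variance(gcm, reanalysis)
```

The correction is applied to the GCM fields before they supply the RCM's initial and lateral boundary conditions; spectral nudging then spreads its effect through the RCM interior.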

  19. An improved standardization procedure to remove systematic low frequency variability biases in GCM simulations

    NASA Astrophysics Data System (ADS)

    Mehrotra, Rajeshwar; Sharma, Ashish

    2012-12-01

    The quality of the absolute estimates of general circulation models (GCMs) calls into question the direct use of GCM outputs for climate change impact assessment studies, particularly at regional scales. Statistical correction of GCM output is often necessary when significant systematic biases occur between the modeled output and observations. A common procedure is to correct the GCM output by removing the systematic biases in low-order moments relative to observations or to reanalysis data at daily, monthly, or seasonal timescales. In this paper, we present an extension of a recently published nested bias correction (NBC) technique to correct for low- as well as higher-order moment biases in GCM-derived variables across multiple selected timescales. The proposed recursive nested bias correction (RNBC) approach offers an improved basis for applying bias correction at multiple timescales over the original NBC procedure. The method ensures that the bias-corrected series exhibits improvements that are consistently spread over all of the timescales considered. Different variations of the approach, from the standard NBC to more complex recursive alternatives, are tested to assess their impacts on a range of GCM-simulated atmospheric variables of interest in downscaling applications related to hydrology and water resources. Results of the study suggest that RNBC variants with three to five iterations are the most effective in removing distributional and persistence-related biases across the timescales considered.
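A minimal two-timescale sketch of the nested idea, matching daily statistics first and then monthly means, with synthetic data and idealized 30-day months (a simplification of the NBC building block, not the authors' full recursive procedure):

```python
import numpy as np

def standardize_correct(x, obs):
    """Match the mean and standard deviation of x to obs."""
    return obs.mean() + obs.std() / x.std() * (x - x.mean())

def nested_bias_correction(daily_gcm, daily_obs, days_per_month=30):
    """Correct daily values, then rescale each month so that the
    monthly means also match the observed monthly statistics."""
    corrected = standardize_correct(daily_gcm, daily_obs)
    gm = corrected.reshape(-1, days_per_month).mean(axis=1)  # monthly means
    om = daily_obs.reshape(-1, days_per_month).mean(axis=1)
    gm_corr = standardize_correct(gm, om)
    # carry the corrected monthly mean back to the daily values
    ratio = np.repeat(gm_corr / gm, days_per_month)
    return corrected * ratio

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 3.0, 360)   # one synthetic "year" of daily observations
gcm = rng.gamma(2.0, 4.5, 360)   # biased daily GCM output
nbc = nested_bias_correction(gcm, obs)
```

The recursive variant in the record repeats such passes so that corrections at one timescale do not undo those at another.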

  20. Linear Regression Quantile Mapping (RQM) - A new approach to bias correction with consistent quantile trends

    NASA Astrophysics Data System (ADS)

    Passow, Christian; Donner, Reik

    2017-04-01

    Quantile mapping (QM) is an established concept that allows one to correct systematic biases in multiple quantiles of the distribution of a climatic observable. It shows remarkable results in correcting biases in historical simulations against observational data and outperforms simpler correction methods that relate only to the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend-preserving versions of QM were introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM-based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend-preserving approach to bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles at the same resolution as the given data and include trends or other dependencies. We demonstrate the performance of the new method of linear regression quantile mapping (RQM) in correcting biases of temperature and precipitation products from historical runs (1959-2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: Cannon, A.J., Sobie, S.R., Murdock, T.Q.: Bias Correction of GCM Precipitation by Quantile Mapping - How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6038, 2015. Cannon, A.J.: Multivariate Bias Correction of Climate Model Outputs - Matching Marginal Distributions and Inter-variable Dependence Structure. Journal of Climate, 29, 7045, 2016.
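The quantile delta mapping (QDM) variant cited above can be sketched in its multiplicative form with synthetic gamma-distributed precipitation; this is an illustrative empirical implementation (after Cannon et al., 2015), not the authors' RQM code:

```python
import numpy as np

def quantile_delta_mapping(obs_hist, mod_hist, mod_fut):
    """Multiplicative QDM: correct future model values while preserving
    the model's relative change signal in each quantile."""
    n = len(mod_fut)
    # empirical non-exceedance probability of each future value
    tau = (np.argsort(np.argsort(mod_fut)) + 0.5) / n
    # modelled relative change at that quantile (future / historical)
    delta = mod_fut / np.quantile(mod_hist, tau)
    # apply the change to the observed quantile
    return np.quantile(obs_hist, tau) * delta

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 5.0, 2000)          # observed historical record
hist = rng.gamma(2.0, 7.0, 2000)         # wet-biased historical run
fut = 1.2 * rng.gamma(2.0, 7.0, 2000)    # future run with a +20% signal
qdm = quantile_delta_mapping(obs, hist, fut)
```

Because the change signal is carried over per quantile, the corrected future series inherits roughly the model's +20% shift on top of the observed distribution rather than the raw model's wet bias.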

  1. Model-Based Control of Observer Bias for the Analysis of Presence-Only Data in Ecology

    PubMed Central

    Warton, David I.; Renner, Ian W.; Ramp, Daniel

    2013-01-01

    Presence-only data, where information is available concerning species presence but not species absence, are subject to bias due to observers being more likely to visit and record sightings at some locations than others (hereafter “observer bias”). In this paper, we describe and evaluate a model-based approach to accounting for observer bias directly – by modelling presence locations as a function of known observer bias variables (such as accessibility variables) in addition to environmental variables, then conditioning on a common level of bias to make predictions of species occurrence free of such observer bias. We implement this idea using point process models with a LASSO penalty, a new presence-only method related to maximum entropy modelling, which implicitly addresses the “pseudo-absence problem” of where to locate pseudo-absences (and how many). The proposed method of bias correction is evaluated using systematically collected presence/absence data for 62 plant species endemic to the Blue Mountains near Sydney, Australia. It is shown that modelling and controlling for observer bias significantly improves the accuracy of predictions made using presence-only data, and usually improves predictions as compared to pseudo-absence or “inventory” methods of bias correction based on absences from non-target species. Future research will consider the potential for improving the proposed bias-correction approach by estimating the observer bias simultaneously across multiple species. PMID:24260167
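The conditioning idea, predicting at a common level of the observer-bias covariates, can be illustrated with a hypothetical fitted log-intensity model; the coefficients and covariate values below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical fitted point-process SDM:
# log lambda(s) = b0 + b_env * env(s) + b_acc * accessibility(s)
b0, b_env, b_acc = -2.0, 1.5, 0.8

def predicted_intensity(env, accessibility):
    """Expected sighting intensity under the fitted log-linear model."""
    return np.exp(b0 + b_env * env + b_acc * accessibility)

env = np.linspace(-1, 1, 5)                          # environmental gradient
acc_observed = np.array([0.9, 0.7, 0.5, 0.2, 0.1])   # near-road cells sampled more

biased = predicted_intensity(env, acc_observed)
# condition on a common accessibility level to strip observer bias
unbiased = predicted_intensity(env, np.full(5, acc_observed.mean()))
```

Holding accessibility fixed makes the predicted intensity vary with the environment alone, which is the bias-free occurrence surface the record describes.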

  2. Evaluation of a new satellite-based precipitation dataset for climate studies in the Xiang River basin, Southern China

    NASA Astrophysics Data System (ADS)

    Zhu, Q.; Xu, Y. P.; Hsu, K. L.

    2017-12-01

    A new satellite-based precipitation dataset, Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR), with a long-term time series dating back to 1983, can be a valuable dataset for climate studies. This study investigates the feasibility of using PERSIANN-CDR as a reference dataset for climate studies. Sixteen CMIP5 models are evaluated over the Xiang River basin, southern China, by comparing their performance on precipitation projection and streamflow simulation, particularly on extreme precipitation and streamflow events. The results show PERSIANN-CDR is a valuable dataset for climate studies, even for extreme precipitation events. The precipitation estimates and their extreme events from CMIP5 models are improved significantly relative to rain gauge observations after bias correction by the PERSIANN-CDR precipitation estimates. Of the streamflows simulated with raw and bias-corrected precipitation estimates from the 16 CMIP5 models, 10 are improved after bias correction. The impact of bias correction on extreme streamflow events is less consistent: only eight of the 16 models show clear improvement after bias correction. Concerning the performance of raw CMIP5 models on precipitation, IPSL-CM5A-MR outperforms the other CMIP5 models, while MRI-CGCM3 performs best on extreme events, with better scores on six extreme precipitation metrics. Case studies also show that raw CCSM4, CESM1-CAM5, and MRI-CGCM3 outperform other models on streamflow simulation, while MIROC5-ESM-CHEM, MIROC5-ESM and IPSL-CM5A-MR behave better than the other models after bias correction.

  3. Problems and Limitations of Satellite Image Orientation for Determination of Height Models

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.

    2017-05-01

    The usual satellite image orientation is based on bias-corrected rational polynomial coefficients (RPC). The RPC describe the direct sensor orientation of the satellite images. The locations of the projection centres are determined without problems today, but the attitudes impose an accuracy limit. Very high resolution satellites today are very agile, able to change the pointed area over 200 km within 10 to 11 seconds. The corresponding fast attitude acceleration of the satellite may cause a jitter which cannot be expressed by the third-order RPC, even if it is recorded by the gyros. Only a correction of the image geometry may help, but usually this will not be done. The first indication of jitter problems is given by systematic errors of the y-parallaxes (py) for the intersection of corresponding points during the computation of ground coordinates. These y-parallaxes have a limited influence on the ground coordinates, but similar problems can be expected for the x-parallaxes, which directly determine the object height. Systematic y-parallaxes are shown for Ziyuan-3 (ZY3), WorldView-2 (WV2), Pleiades, Cartosat-1, IKONOS and GeoEye. Some of them have clear jitter effects. In addition, linear trends of py can be seen. Linear trends in py and tilts of computed height models may be caused by limited accuracy of the attitude registration, but also by bias correction with affinity transformation. The bias correction is based on ground control points (GCPs). The accuracy of the GCPs usually does not impose limitations, but identifying the GCPs in the images may be difficult. Two-dimensional bias-corrected RPC orientation by affinity transformation may tilt the generated height models, but because of large affine image deformations some satellites, such as Cartosat-1, must be handled with bias correction by affinity transformation. 
Instead of a 2-dimensional RPC orientation, a 3-dimensional orientation is also possible, which respects the object height more than the 2-dimensional case. The 3-dimensional orientation showed advantages for orientation based on a limited number of GCPs, but in the case of poor GCP distribution it may also cause negative effects. For some of the used satellites the bias correction by affinity transformation showed advantages, but for some others bias correction by shift led to better levelling of the generated height models, even if the root mean square (RMS) differences at the GCPs were larger than for bias correction by affinity transformation. The generated height models can be analyzed and corrected with reference height models. For the used data sets accurate reference height models are available, but an analysis and correction with the freely available SRTM digital surface model (DSM) or ALOS World 3D (AW3D30) is also possible and leads to similar results. The comparison of the generated height models with the reference DSM shows some height undulations, but the major accuracy influence is caused by tilts of the height models. Some height model undulations reach up to 50 % of the ground sampling distance (GSD); this is not negligible, but it is barely visible in the standard deviations of the height. In any case, an improvement of the generated height models is possible with reference height models. If such corrections are applied, they compensate for possible negative effects of the chosen type of bias correction or of the 2-dimensional versus 3-dimensional handling.

  4. The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.

    PubMed

    Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin

    2015-11-01

    We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of true intensity and bias field and a simple thresholding is used in the second stage for segmentation. Different from most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with the state-of-the-art approaches and the well-known brain software tools, our model is fast, accurate, and robust to initializations.

  5. Impact of chlorophyll bias on the tropical Pacific mean climate in an earth system model

    NASA Astrophysics Data System (ADS)

    Lim, Hyung-Gyu; Park, Jong-Yeon; Kug, Jong-Seong

    2017-12-01

    Climate modeling groups nowadays develop earth system models (ESMs) by incorporating biogeochemical processes in their climate models. The ESMs, however, often show substantial bias in simulated marine biogeochemistry which can potentially introduce an undesirable bias in physical ocean fields through biogeophysical interactions. This study examines how and how much the chlorophyll bias in a state-of-the-art ESM affects the mean and seasonal cycle of tropical Pacific sea-surface temperature (SST). The ESM used in the present study shows a sizeable positive bias in the simulated tropical chlorophyll. We found that the correction of the chlorophyll bias can reduce the ESM's intrinsic cold SST mean bias in the equatorial Pacific. The biologically-induced cold SST bias is strongly affected by seasonally-dependent air-sea coupling strength. In addition, the correction of chlorophyll bias can improve the annual cycle of SST by up to 25%. This result suggests a possible modeling approach in understanding the two-way interactions between physical and chlorophyll biases by biogeophysical effects.

  6. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    PubMed

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.

  7. MRI Non-Uniformity Correction Through Interleaved Bias Estimation and B-Spline Deformation with a Template*

    PubMed Central

    Fletcher, E.; Carmichael, O.; DeCarli, C.

    2013-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843

  8. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
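The conflation the record points to can be illustrated with synthetic data: the mean observation-minus-background statistic absorbs the background bias along with the instrument bias, so the estimated bias differs from the true instrument bias. The numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
truth = rng.normal(250.0, 5.0, n)               # true brightness temperature (K)
background = truth + rng.normal(0.4, 1.0, n)    # background with its own 0.4 K bias
obs = truth + 1.0 + rng.normal(0.0, 0.8, n)     # instrument bias of 1.0 K

# VarBC-style estimate: mean of observation minus background
omb_bias = (obs - background).mean()
```

Here the true instrument bias is 1.0 K, but the O-B mean settles near 0.6 K because the 0.4 K background bias is silently subtracted, which is exactly the limitation the record describes.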

  9. Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements

    NASA Technical Reports Server (NTRS)

    Buehrle, R. D.; Young, C. P., Jr.

    1995-01-01

    This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests when the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows significant bias error in the model attitude measurement can occur and is vibration mode and amplitude dependent. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.

  10. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation; at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias especially for both low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages especially in the simulation of particularly low and high downscaled precipitation amounts.
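One way to picture the difference between a single mean anomaly field and a rank-space treatment of the anomalies is sketched below. This is an illustrative reading of the idea (pairing anomaly ranks with the rank of the coarse value), not the authors' implementation, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
months = 120
fine = rng.gamma(2.0, 40.0, months)                  # fine-scale precipitation at one cell
coarse = fine.mean() * rng.lognormal(0.0, 0.1, months)  # disaggregated coarse field

anomaly = fine / coarse          # multiplicative anomaly field
mean_anom = anomaly.mean()       # classic BCSD: one temporal-mean anomaly
classic = coarse * mean_anom

# rank-space alternative: pair each coarse value with the anomaly of
# matching rank, retaining the anomalies' temporal variation
order = np.argsort(coarse)
rank_of = np.empty(months, dtype=int)
rank_of[order] = np.arange(months)
rank_bcsd = coarse * np.sort(anomaly)[rank_of]
```

The mean-anomaly version applies one constant ratio everywhere, which is the masking of temporal variation the record criticizes; the rank-space version keeps the full spread of anomalies.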

  11. North Atlantic climate model bias influence on multiyear predictability

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Park, T.; Park, W.; Latif, M.

    2018-01-01

    The influences of North Atlantic biases on multiyear predictability of unforced surface air temperature (SAT) variability are examined in the Kiel Climate Model (KCM). By employing a freshwater flux correction over the North Atlantic to the model, which strongly alleviates both North Atlantic sea surface salinity (SSS) and sea surface temperature (SST) biases, the freshwater flux-corrected integration depicts significantly enhanced multiyear SAT predictability in the North Atlantic sector in comparison to the uncorrected one. The enhanced SAT predictability in the corrected integration is due to a stronger and more variable Atlantic Meridional Overturning Circulation (AMOC) and its enhanced influence on North Atlantic SST. Results obtained from preindustrial control integrations of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) support the findings obtained from the KCM: models with large North Atlantic biases tend to have a weak AMOC influence on SAT and exhibit a smaller SAT predictability over the North Atlantic sector.

  12. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
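The slope correction for a simple linear regression can be sketched with a simulated reliability study, where the reliability ratio is estimated from repeated measurements of the risk factor; the data and effect sizes are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
x_true = rng.normal(0.0, 1.0, n)                 # true risk factor (standardized)
y = 0.5 * x_true + rng.normal(0.0, 0.5, n)       # continuous outcome, true slope 0.5
x1 = x_true + rng.normal(0.0, 1.0, n)            # noisy main-study measurement
x2 = x_true + rng.normal(0.0, 1.0, n)            # repeat from the reliability study

naive_slope = np.polyfit(x1, y, 1)[0]            # attenuated by measurement error
# reliability ratio lambda = var(true) / var(observed), from the repeats
lam = np.cov(x1, x2)[0, 1] / np.var(x1, ddof=1)
corrected_slope = naive_slope / lam
```

With equal error and true-score variances the reliability ratio is about 0.5, so the naive slope is attenuated to roughly half the true value, and dividing by the estimated ratio recovers it.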

  13. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    PubMed

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and necessity of sampling bias correction within MaxEnt.

  14. An empirical determination of the effects of sea state bias on Seasat altimetry

    NASA Technical Reports Server (NTRS)

    Born, G. H.; Richards, M. A.; Rosborough, G. W.

    1982-01-01

    A linear empirical model has been developed for the correction of sea state bias effects in Seasat altimetry altitude measurements that are due to (1) electromagnetic bias, caused by the fact that ocean wave troughs reflect the altimeter signal more strongly than the crests, shifting the apparent mean sea level toward the wave troughs, and (2) an independent instrument-related bias resulting from the inability of height corrections applied in the ground processor to compensate for simplifying assumptions made for the processor aboard Seasat. After applying appropriate corrections to the altimetry data, an empirical model for the sea state bias is obtained by differencing significant wave height and height measurements from coincident ground tracks. Height differences are minimized by solving for the coefficient of a linear relationship between height differences and wave height differences. In more than 50% of the 36 cases examined, 7% of the value of significant wave height should be subtracted for sea state bias correction.
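The empirical fit described above, a linear coefficient relating height differences to wave-height differences along coincident tracks, can be sketched with hypothetical crossover data in which the underlying sea state bias is 7% of significant wave height:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
# differences between coincident ground tracks (hypothetical values)
dswh = rng.normal(0.0, 1.5, n)                 # significant wave height differences (m)
dh = -0.07 * dswh + rng.normal(0.0, 0.02, n)   # height differences with ~7% SSB

# solve for the coefficient that minimizes the height differences
k = np.linalg.lstsq(dswh[:, None], dh, rcond=None)[0][0]
corrected = dh - k * dswh
```

The fitted coefficient recovers the fraction of significant wave height to subtract, and applying it shrinks the residual height differences, which is the criterion the record's empirical model optimizes.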

  15. The Impact of Assimilation of GPM Clear Sky Radiance on HWRF Hurricane Track and Intensity Forecasts

    NASA Astrophysics Data System (ADS)

    Yu, C. L.; Pu, Z.

    2016-12-01

    The impact of GPM microwave imager (GMI) clear sky radiances on hurricane forecasting is examined by ingesting GMI level 1C recalibrated brightness temperature into the NCEP Gridpoint Statistical Interpolation (GSI)-based ensemble-variational hybrid data assimilation system for the operational Hurricane Weather Research and Forecast (HWRF) system. The GMI clear sky radiances are compared with Community Radiative Transfer Model (CRTM) simulated radiances to closely examine the quality of the radiance observations. The quality check indicates the presence of bias in various channels. A static bias correction scheme, in which appropriate bias correction coefficients for GMI data are estimated by applying a regression method to a sufficiently large sample of data representative of the observational bias in the regions of concern, is used to correct the observational bias in GMI clear sky radiances. Forecast results with and without assimilation of GMI radiance are compared using hurricane cases from recent hurricane seasons (e.g., Hurricane Joaquin in 2015). Diagnoses of the data assimilation results show that the bias correction coefficients obtained from the regression method can correct the inherent biases in GMI radiance data, significantly reducing observational residuals. The removal of biases also allows more data to pass GSI quality control and hence to be assimilated into the model. Forecast results for Hurricane Joaquin demonstrate that the quality of analysis from the data assimilation is sensitive to the bias correction, with positive impacts on the hurricane track forecast when systematic biases are removed from the radiance data. Details will be presented at the symposium.
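    A static radiance bias correction of this kind can be sketched as an ordinary least-squares regression of observed-minus-simulated brightness temperature (O-B) on a few predictors. The predictors below (constant offset, scan angle, column water vapour) are illustrative assumptions, not the operational GSI predictor set:

```python
import numpy as np

# Synthetic O-B sample for one channel; the "true" coefficients are the
# bias we hope the regression recovers.
rng = np.random.default_rng(2)
n = 1000
predictors = np.column_stack([
    np.ones(n),              # constant offset
    rng.uniform(-1, 1, n),   # e.g. normalized scan angle
    rng.uniform(0, 1, n),    # e.g. normalized column water vapour
])
true_beta = np.array([1.2, 0.5, -0.8])
omb = predictors @ true_beta + rng.normal(0, 0.3, n)  # synthetic O-B (K)

# Fit the static bias coefficients and remove the predicted bias.
beta, *_ = np.linalg.lstsq(predictors, omb, rcond=None)
residual = omb - predictors @ beta   # bias-corrected innovations
```

    After correction the innovations are centred near zero, which is what allows more observations to pass quality control.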

  16. Identification and Correction of Additive and Multiplicative Spatial Biases in Experimental High-Throughput Screening.

    PubMed

    Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir

    2018-06-01

    Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
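    For the simple additive case, row and column effects on a plate can be estimated by median polish. The sketch below is a generic version of that idea (not the AssayCorrector implementation, and the interaction-aware models proposed in the paper go further):

```python
import numpy as np

def median_polish(plate, n_iter=10):
    """Estimate additive row/column effects on a multiwell plate by
    median polish and return residuals plus the estimated effects.
    Simple additive model only."""
    resid = plate.astype(float).copy()
    row_eff = np.zeros(plate.shape[0])
    col_eff = np.zeros(plate.shape[1])
    for _ in range(n_iter):
        r = np.median(resid, axis=1)      # current row effects
        row_eff += r
        resid -= r[:, None]
        c = np.median(resid, axis=0)      # current column effects
        col_eff += c
        resid -= c[None, :]
    return resid, row_eff, col_eff

# toy 8x12 plate with one biased row and one biased column
rng = np.random.default_rng(3)
plate = rng.normal(100.0, 1.0, (8, 12))
plate[2, :] += 15.0       # additive row bias
plate[:, 5] += 10.0       # additive column bias
corrected, row_eff, col_eff = median_polish(plate)
```

    The residual matrix is the bias-corrected plate; wells at the intersection of a biased row and a biased column are where the additive assumption can break down, which is the case the paper's interaction models address.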

  17. A Dynamical Downscaling Approach with GCM Bias Corrections and Spectral Nudging

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Yang, Z.

    2013-12-01

    To reduce the biases in regional climate downscaling simulations, a dynamical downscaling approach with GCM bias corrections and spectral nudging is developed and assessed over North America. Regional climate simulations are performed with the Weather Research and Forecasting (WRF) model embedded in the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). To reduce the GCM biases, the GCM climatological means and the variances of interannual variations are adjusted based on the National Centers for Environmental Prediction-NCAR global reanalysis products (NNRP) before using them to drive WRF, following our previous method. In this study, we further introduce spectral nudging to reduce the RCM-based biases. Two sets of WRF experiments are performed, with and without spectral nudging. All WRF experiments are otherwise identical, with the initial and lateral boundary conditions derived from the NNRP, the original GCM output, and the bias-corrected GCM output, respectively. The GCM-driven RCM simulations with bias corrections and spectral nudging (IDDng) are compared with those without spectral nudging (IDD) and with North American Regional Reanalysis (NARR) data to assess the additional reduction in RCM biases relative to the IDD approach. The results show that spectral nudging carries the effect of GCM bias correction into the RCM domain, thereby minimizing the climate drift resulting from the RCM biases. The GCM bias corrections and spectral nudging significantly improve the downscaled mean climate and extreme temperature simulations. Our results suggest that both GCM bias correction and spectral nudging are necessary to reduce the error of the downscaled climate; applying only one of them does not guarantee a better downscaling simulation. The new dynamical downscaling method can be applied to regional projection of future climate or downscaling of GCM sensitivity simulations. Annual mean RMSEs are computed over the verification region from monthly mean data over 1981-2010.
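    The mean-and-variance adjustment of the GCM fields can be illustrated in a few lines. This is a schematic of the idea at a single grid point, with synthetic annual series standing in for CAM and NNRP data:

```python
import numpy as np

# Replace the GCM climatological mean with the reanalysis mean and
# rescale the interannual anomalies by the ratio of standard deviations.
rng = np.random.default_rng(4)
reanalysis = rng.normal(288.0, 0.8, 30)   # reference climate (K), 30 years
gcm = rng.normal(290.5, 1.6, 30)          # biased GCM series (K)

anom = gcm - gcm.mean()                   # interannual anomalies
gcm_corr = reanalysis.mean() + anom * (reanalysis.std() / gcm.std())
```

    By construction the corrected series has exactly the reanalysis mean and standard deviation while keeping the GCM's year-to-year sequence, which is what the approach relies on before handing the fields to WRF.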

  18. Impact of bias-corrected reanalysis-derived lateral boundary conditions on WRF simulations

    NASA Astrophysics Data System (ADS)

    Moalafhi, Ditiro Benson; Sharma, Ashish; Evans, Jason Peter; Mehrotra, Rajeshwar; Rocheta, Eytan

    2017-08-01

    Lateral and lower boundary conditions derived from a suitable global reanalysis data set form the basis for deriving a dynamically consistent finer-resolution downscaled product for climate and hydrological assessment studies. A problem with this, however, is that systematic biases are present in the global reanalysis data sets that form these boundaries, and these biases can be carried into the downscaled simulations, reducing their accuracy. In this work, three Weather Research and Forecasting (WRF) model downscaling experiments are undertaken to investigate the impact of bias correcting the European Centre for Medium-Range Weather Forecasts ERA-Interim (ERA-I) atmospheric temperature and relative humidity using Atmospheric Infrared Sounder (AIRS) satellite data. The downscaling is performed over a domain centered on southern Africa between the years 2003 and 2012. Two correction variants are used: correcting the mean only, and correcting both the mean and the standard deviation, at each grid cell for each variable. The resultant WRF simulations of near-surface temperature and precipitation are evaluated seasonally and annually against global gridded observational data sets and compared with the ERA-I reanalysis driving fields. The study reveals inconsistencies between the impact of the bias correction prior to downscaling and the resultant model simulations after downscaling. Mean and standard deviation bias-corrected WRF simulations are, however, found to be marginally better than mean-only bias-corrected WRF simulations and raw ERA-I reanalysis-driven WRF simulations. Performances, however, differ when assessing different attributes in the downscaled field. This raises questions about the efficacy of the correction procedures adopted.

  19. Efficient bias correction for magnetic resonance image denoising.

    PubMed

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
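    For context, the conventional second-moment correction (which the paper improves upon) follows from the Rician model: for magnitude M built from true intensity A and Gaussian noise of standard deviation sigma in both channels, E[M²] = A² + 2σ². A quick numerical check with arbitrary A and σ:

```python
import numpy as np

# Under the Rician model, M = sqrt((A + n1)^2 + n2^2) with n1, n2 ~
# N(0, sigma^2) satisfies E[M^2] = A^2 + 2*sigma^2, so the classical
# correction estimates A as sqrt(max(E[M^2] - 2*sigma^2, 0)).
rng = np.random.default_rng(5)
A, sigma, n = 50.0, 10.0, 200_000
mag = np.hypot(A + rng.normal(0, sigma, n), rng.normal(0, sigma, n))

naive = mag.mean()                                          # biased upward
corrected = np.sqrt(max(np.mean(mag ** 2) - 2 * sigma ** 2, 0.0))
```

    The clipping at zero is needed in low-signal regions, where the subtraction can go negative; the bias-correction formulas proposed in the paper are designed to behave better precisely in that low-SNR regime.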

  20. Modeling bias and variation in the stochastic processes of small RNA sequencing

    PubMed Central

    Etheridge, Alton; Sakhanenko, Nikita; Galas, David

    2017-01-01

    The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear-quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495

  1. Correction of stream quality trends for the effects of laboratory measurement bias

    USGS Publications Warehouse

    Alexander, Richard B.; Smith, Richard A.; Schwarz, Gregory E.

    1993-01-01

    We present a statistical model relating measurements of water quality to associated errors in laboratory methods. Estimation of the model allows us to correct trends in water quality for long-term and short-term variations in laboratory measurement errors. An illustration of the bias correction method for a large national set of stream water quality and quality assurance data shows that reductions in the bias of estimates of water quality trend slopes are achieved at the expense of increases in the variance of these estimates. Slight improvements occur in the precision of estimates of trend in bias by using correlative information on bias and water quality to estimate random variations in measurement bias. The results of this investigation stress the need for reliable, long-term quality assurance data and efficient statistical methods to assess the effects of measurement errors on the detection of water quality trends.

  2. Effects of vibration on inertial wind-tunnel model attitude measurement devices

    NASA Technical Reports Server (NTRS)

    Young, Clarence P., Jr.; Buehrle, Ralph D.; Balakrishna, S.; Kilgore, W. Allen

    1994-01-01

    Results of an experimental study of a wind tunnel model inertial angle-of-attack sensor response to a simulated dynamic environment are presented. The inertial device cannot distinguish between the gravity vector and the centrifugal accelerations associated with wind tunnel model vibration; this results in a bias error in the model attitude measurement. Significant bias error in model attitude measurement was found for the model system tested. The bias error was found to be vibration mode and amplitude dependent. A first-order correction model was developed and used for estimating attitude measurement bias error due to dynamic motion. A method for correcting the output of the model attitude inertial sensor in the presence of model dynamics during on-line wind tunnel operation is proposed.

  3. Intercomparison of Downscaling Methods on Hydrological Impact for Earth System Model of NE United States

    NASA Astrophysics Data System (ADS)

    Yang, P.; Fekete, B. M.; Rosenzweig, B.; Lengyel, F.; Vorosmarty, C. J.

    2012-12-01

    Atmospheric dynamics are essential inputs to Regional-scale Earth System Models (RESMs). Variables including surface air temperature, total precipitation, solar radiation, wind speed and humidity must be downscaled from coarse-resolution, global General Circulation Models (GCMs) to the high temporal and spatial resolution required for regional modeling. However, this downscaling procedure can be challenging due to the need to correct for bias from the GCM and to capture the spatiotemporal heterogeneity of the regional dynamics. In this study, the results obtained using several downscaling techniques and observational datasets were compared for a RESM of the Northeast Corridor of the United States. Previous efforts have enhanced GCM model outputs through bias correction using novel techniques. For example, the Climate Impact Research group at the Potsdam Institute developed a series of bias-corrected GCMs for the next generation of climate change scenarios (Schiermeier, 2012; Moss et al., 2010). Techniques to better represent the heterogeneity of climate variables have also been improved using statistical approaches (Maurer, 2008; Abatzoglou, 2011). For this study, four downscaling approaches to transform bias-corrected HADGEM2-ES model output (daily at 0.5° × 0.5°) to the 3′ × 3′ (longitude × latitude) daily and monthly resolution required for the Northeast RESM were compared: 1) bilinear interpolation, 2) daily bias-corrected spatial downscaling (D-BCSD) with gridded meteorological datasets (Abatzoglou, 2011), 3) monthly bias-corrected spatial disaggregation (M-BCSD) with CRU (Climatic Research Unit) data and 4) dynamic downscaling based on the Weather Research and Forecasting (WRF) model. Spatio-temporal analysis of the variability in precipitation was conducted over the study domain. Validation of the variables of the different downscaling methods against observational datasets was carried out for assessment of the downscaled climate model outputs.
    The effects of using the different approaches to downscale atmospheric variables (specifically air temperature and precipitation) for use as inputs to the Water Balance Model (WBMPlus; Vorosmarty et al., 1998; Wisser et al., 2008) for simulation of daily discharge and monthly stream flow in the Northeast US for a 100-year period in the 21st century were also assessed. Statistical techniques, especially monthly bias-corrected spatial disaggregation (M-BCSD), showed a potential advantage over the other methods for the daily discharge and monthly stream flow simulation. However, dynamic downscaling will provide important complements to the statistical approaches tested.

  4. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    PubMed Central

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high-resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily totals to match in-situ observations for the period 2003–2010. Study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for application of bias correction and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SWs) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of the interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to yield bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. The 7-day SW approach was found to perform best for bias correction of CMORPH rainfall. The outcome of this study shows the efficiency of our bias correction approach. PMID:27314363
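    At a single station, the sequential-window multiplicative bias factor is straightforward to sketch. The snippet below uses the 7-day window the study found best; the clipping range for the factors and all rainfall numbers are our own illustrative assumptions:

```python
import numpy as np

def sw_bias_factors(gauge, sat, window=7, clip=(0.2, 5.0)):
    """Multiplicative bias factors over non-overlapping sequential
    windows: ratio of gauge to satellite rainfall accumulation.
    The clip range is an assumption, not taken from the paper."""
    n = len(gauge) // window * window
    g = gauge[:n].reshape(-1, window).sum(axis=1)
    s = sat[:n].reshape(-1, window).sum(axis=1)
    factors = np.where(s > 0, g / np.where(s > 0, s, 1.0), 1.0)
    return np.clip(factors, *clip)

# synthetic year of daily rainfall: satellite underestimates the gauge
rng = np.random.default_rng(6)
gauge = rng.gamma(0.4, 8.0, 364)                 # daily gauge rainfall (mm)
sat = np.clip(0.7 * gauge + rng.normal(0, 0.5, 364), 0, None)
factors = sw_bias_factors(gauge, sat)
corrected = sat * np.repeat(factors, 7)
```

    In the study, such station factors are then spatially interpolated into a bias factor map and multiplied into the CMORPH fields.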

  6. Comparing bias correction methods in downscaling meteorological variables for a hydrologic impact study in an arid area in China

    NASA Astrophysics Data System (ADS)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2015-06-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. 
The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased relative to observed meteorological data, and their use for streamflow simulation results in large biases relative to observed streamflow, while all bias correction methods effectively improved these simulations; (3) for precipitation, the PT and QM methods performed equally well and best in correcting the frequency-based indices (e.g., standard deviation, percentile values), while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R²); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have more influence than temperature correction methods, and the performance of the streamflow simulations is consistent with that of the corrected precipitation; i.e., the PT and QM methods performed equally well and best in correcting the flow duration curve and peak flow, while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
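    Of the methods compared, quantile mapping (QM) is easy to sketch. Below is a generic empirical version; the sample sizes and gamma parameters are invented for illustration and have no connection to the Kaidu data:

```python
import numpy as np

def quantile_map(model_hist, obs, values):
    """Empirical quantile mapping: send each value through the model's
    historical empirical CDF onto the observed quantiles."""
    model_sorted = np.sort(model_hist)
    p = np.searchsorted(model_sorted, values, side="right") / len(model_sorted)
    return np.quantile(obs, np.clip(p, 0.0, 1.0))

# synthetic daily precipitation: the "model" is wet-biased and shifted
rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 3.0, 5000)                     # observed series
model_hist = 1.5 * rng.gamma(2.0, 3.0, 5000) + 1.0  # biased model series
corrected = quantile_map(model_hist, obs, model_hist)
```

    Applied to a future model series, the same transfer function built on the calibration period imposes the observed distribution, which is the behaviour the frequency-based indices in the study reward.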

  7. Sensitivity of the atmospheric water cycle to corrections of the sea surface temperature bias over southern Africa in a regional climate model

    NASA Astrophysics Data System (ADS)

    Weber, Torsten; Haensler, Andreas; Jacob, Daniela

    2017-12-01

    Regional climate models (RCMs) are used to dynamically downscale global climate projections at high spatial and temporal resolution in order to analyse the atmospheric water cycle. In southern Africa, precipitation patterns are strongly affected by the moisture transport from the southeast Atlantic and southwest Indian Ocean and, consequently, by their sea surface temperatures (SSTs). However, global ocean models often have deficiencies in resolving regional- to local-scale ocean currents, e.g. in ocean areas offshore of the South African continent. When downscaling global climate projections with RCMs, the biased SSTs from the global forcing data are introduced into the RCMs and affect the results of the regional climate projections. In this work, the impact of SST bias correction on precipitation, evaporation and moisture transport was analysed over southern Africa. For this analysis, several experiments were conducted with the regional climate model REMO using corrected and uncorrected SSTs. In these experiments, a global MPI-ESM-LR historical simulation was downscaled with REMO to a high spatial resolution of 50 × 50 km² and of 25 × 25 km² for southern Africa using a double-nesting method. The results showed a distinct impact of the corrected SST on the moisture transport, the meridional vertical circulation and the precipitation pattern in southern Africa. Furthermore, the experiment with the corrected SST led to a reduction of the wet bias over southern Africa and to better agreement with observations than without SST bias correction.

  8. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. 
MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
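    The core loop of the N-dimensional pdf transfer can be sketched compactly. The toy below (equal sample sizes, a fixed iteration count, no projection-period handling) only illustrates the rotate-then-quantile-map idea; it is not the full MBCn algorithm:

```python
import numpy as np

def qm_marginals(x, target):
    """Rank-based quantile mapping of each column of x onto target."""
    out = np.empty_like(x)
    for j in range(x.shape[1]):
        order = np.argsort(np.argsort(x[:, j]))    # ranks of x column
        out[:, j] = np.sort(target[:, j])[order]   # matched quantiles
    return out

def mbcn_sketch(model, obs, n_iter=30, seed=0):
    """Alternate random orthogonal rotations with marginal quantile
    mapping so the multivariate distribution drifts toward obs."""
    rng = np.random.default_rng(seed)
    x = model.copy()
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(rng.normal(size=(x.shape[1],) * 2))
        x = qm_marginals(x @ Q, obs @ Q) @ Q.T     # correct rotated marginals
    return qm_marginals(x, obs)                    # final exact marginal QM

# synthetic 2-variable case: wrong means, variances and correlation sign
rng = np.random.default_rng(8)
obs = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], 2000)
model = rng.multivariate_normal([2, -1], [[1.0, -0.5], [-0.5, 2.0]], 2000)
corrected = mbcn_sketch(model, obs)
```

    After the loop, the corrected sample reproduces the observed marginals exactly and the observed dependence structure approximately, which is the property that matters for derived multivariate indices such as the FWI.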

  9. CD-SEM real time bias correction using reference metrology based modeling

    NASA Astrophysics Data System (ADS)

    Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.

    2018-03-01

    Accuracy of patterning impacts yield, IC performance and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and on intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. Reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM, but both methods are too slow for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling is in finding a robust correlation between SEM waveform features and the bias of CD-SEM, as well as in minimizing the RM inputs needed to create a model that is accurate within the design and process space. The new approach was applied to improve CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases, the MU of the state-of-the-art CD-SEM was improved by 3x and reduced to the nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.

  10. A method to preserve trends in quantile mapping bias correction of climate modeled temperature

    NASA Astrophysics Data System (ADS)

    Grillakis, Manolis G.; Koutroulis, Aristeidis G.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.

    2017-09-01

    Bias correction of climate variables is a standard practice in climate change impact (CCI) studies. Various methodologies have been developed within the framework of quantile mapping. However, it is well known that quantile mapping may significantly modify the long-term statistics due to the time dependency of the temperature bias. Here, a method to overcome this issue without compromising the day-to-day correction statistics is presented. The methodology separates the modeled temperature signal into a normalized and a residual component relative to the modeled reference-period climatology, in order to adjust the biases only for the former and preserve the signal of the latter. The results show that this method allows for the preservation of the originally modeled long-term signal in the mean, the standard deviation and higher and lower percentiles of temperature. To illustrate the improvements, the methodology is tested on daily time series obtained from five EURO-CORDEX regional climate models (RCMs).
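    A mean-and-variance caricature of the decomposition (the actual method operates quantile by quantile) looks like this, with invented temperatures:

```python
import numpy as np

# Correct only the deviation from the model's own reference climatology,
# leaving the modeled long-term change untouched.
rng = np.random.default_rng(9)
obs_ref = rng.normal(15.0, 2.0, 30 * 365)   # reference-period observations
mod_ref = rng.normal(17.0, 3.0, 30 * 365)   # biased model, reference period
mod_fut = rng.normal(19.5, 3.0, 30 * 365)   # model, future (+2.5 K signal)

signal = mod_fut.mean() - mod_ref.mean()    # preserved long-term change
norm = mod_fut - mod_fut.mean()             # day-to-day component, corrected
corrected = obs_ref.mean() + signal + norm * (obs_ref.std() / mod_ref.std())
```

    By construction the corrected future series carries exactly the modeled change signal on top of the observed reference climatology, which is the trend-preservation property the method is built around.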

  11. Sequence-specific bias correction for RNA-seq data using recurrent neural networks.

    PubMed

    Zhang, Yao-Zhong; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru

    2017-01-25

    The recent success of deep learning techniques in machine learning and artificial intelligence has stimulated a great deal of interest among bioinformaticians, who now wish to bring the power of deep learning to bear on a host of bioinformatics problems. Deep learning is ideally suited for biological problems that require automatic or hierarchical feature representation for biological data when prior knowledge is limited. In this work, we address the sequence-specific bias correction problem for RNA-seq data using Recurrent Neural Networks (RNNs) to model nucleotide sequences without pre-determining sequence structures. The sequence-specific bias of a read is then calculated based on the sequence probabilities estimated by the RNNs, and used in the estimation of gene abundance. We explore the application of two popular RNN recurrent units for this task and demonstrate that RNN-based approaches provide a flexible way to model nucleotide sequences without knowledge of predetermined sequence structures. Our experiments show that training an RNN-based nucleotide sequence model is efficient and that RNN-based bias correction methods compare well with the state-of-the-art sequence-specific bias correction method on the commonly used MAQC-III data set. RNNs provide an alternative and flexible way to calculate sequence-specific bias without explicitly pre-determining sequence structures.

  12. How and how much does RAD-seq bias genetic diversity estimates?

    PubMed

    Cariou, Marie; Duret, Laurent; Charlat, Sylvain

    2016-11-08

    RAD-seq is a powerful tool, increasingly used in population genomics. However, earlier studies have raised red flags regarding possible biases associated with this technique. In particular, polymorphism on restriction sites results in preferential sampling of closely related haplotypes, so that RAD data tends to underestimate genetic diversity. Here we (1) clarify the theoretical basis of this bias, highlighting the potential confounding effects of population structure and selection, (2) confront predictions to real data from in silico digestion of full genomes and (3) provide a proof of concept toward an ABC-based correction of the RAD-seq bias. Under a neutral and panmictic model, we confirm the previously established relationship between the true polymorphism and its RAD-based estimation, showing a more pronounced bias when polymorphism is high. Using more elaborate models, we show that selection, resulting in heterogeneous levels of polymorphism along the genome, exacerbates the bias and leads to a more pronounced underestimation. On the contrary, spatial genetic structure tends to reduce the bias. We confront the neutral and panmictic model to "ideal" empirical data (in silico RAD-sequencing) using full genomes from natural populations of the fruit fly Drosophila melanogaster and the fungus Shizophyllum commune, harbouring respectively moderate and high genetic diversity. In D. melanogaster, predictions fit the model, but the small difference between the true and RAD polymorphism makes this comparison insensitive to deviations from the model. In the highly polymorphic fungus, the model captures a large part of the bias but makes inaccurate predictions. Accordingly, ABC corrections based on this model improve the estimations, albeit with some imprecisions. The RAD-seq underestimation of genetic diversity associated with polymorphism in restriction sites becomes more pronounced when polymorphism is high. 
In practice, this means that in many systems where polymorphism does not exceed 2%, the bias is of minor importance in the face of other sources of uncertainty, such as heterogeneous base composition or technical artefacts. The neutral panmictic model provides a practical means to correct the bias through ABC, albeit with some imprecision. More elaborate ABC methods might integrate additional parameters, such as population structure and selection, but their opposite effects could hinder accurate corrections.
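    The ABC-based correction described above can be illustrated with a minimal rejection sampler. Everything below is a toy sketch: the bias curve and noise level inside `rad_pi` are hypothetical stand-ins for the coalescent simulation and in silico digestion a real application would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def rad_pi(true_pi):
    """Toy stand-in for the RAD-based diversity estimator: a hypothetical
    bias curve in which underestimation grows with true polymorphism,
    plus sampling noise. (A real application would run coalescent
    simulation and in silico digestion here.)"""
    true_pi = np.asarray(true_pi, dtype=float)
    biased = true_pi * (1.0 - 2.0 * true_pi)
    return biased + rng.normal(0.0, 0.001, size=true_pi.shape)

def abc_correct(observed, n_sims=100_000, tol=0.0005):
    """Rejection ABC: draw candidate true diversities from a uniform prior,
    simulate their RAD-based estimates, and keep candidates whose simulated
    estimate falls within tol of the observed value."""
    prior = rng.uniform(0.0, 0.2, n_sims)
    sims = rad_pi(prior)
    accepted = prior[np.abs(sims - observed) < tol]
    return accepted.mean()    # posterior mean as the corrected estimate

true_pi = 0.10
observed = true_pi * (1.0 - 2.0 * true_pi)   # RAD underestimate: 0.08
corrected = abc_correct(observed)
```

    With the toy bias curve, a RAD-based estimate of 0.08 is pulled back toward the true diversity of 0.10.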

  13. An Analysis of the Individual Effects of Sex Bias.

    ERIC Educational Resources Information Center

    Smith, Richard M.

    Most attempts to correct for the presence of biased test items in a measurement instrument have been either to remove the items or to adjust the scores to correct for the bias. Using the Rasch Dichotomous Response Model and the independent ability estimates derived from three sets of items, those which favor females, those which favor males, and…

  14. Addressing the mischaracterization of extreme rainfall in regional climate model simulations - A synoptic pattern based bias correction approach

    NASA Astrophysics Data System (ADS)

    Li, Jingwan; Sharma, Ashish; Evans, Jason; Johnson, Fiona

    2018-01-01

    Addressing systematic biases in regional climate model simulations of extreme rainfall is a necessary first step before assessing changes in future rainfall extremes. Commonly used bias correction methods are designed to match statistics of the overall simulated rainfall with observations. This assumes that a change in the mix of different types of extreme rainfall events (i.e. convective and non-convective) in a warmer climate is of little relevance to the estimation of overall change, an assumption that is not supported by empirical or physical evidence. This study proposes an alternative approach that accounts for potential changes in rainfall type, characterized here by synoptic weather patterns (SPs) classified using self-organizing maps. The objective of this study is to evaluate the added influence of SPs on the bias correction, which is achieved by comparing the corrected distribution of future extreme rainfall with that obtained using conventional quantile mapping. A comprehensive synthetic experiment is first defined to investigate the conditions under which the additional information of SPs makes a significant difference to the bias correction. Using over 600,000 synthetic cases, statistically significant differences are found in 46% of cases. This is followed by a case study over the Sydney region using a high-resolution run of the Weather Research and Forecasting (WRF) regional climate model, which indicates a small change in the proportions of the SPs and a statistically significant change in the extreme rainfall over the region, although the differences between the changes obtained from the two bias correction methods are not statistically significant.
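    The conventional quantile-mapping baseline against which the SP-based approach is compared can be sketched in a few lines of numpy; the synthetic rainfall below is illustrative.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_vals):
    """Empirical quantile mapping: replace each model value with the
    observed value at the same quantile of the historical model CDF."""
    qs = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, qs)   # model quantiles (transfer keys)
    obs_q = np.quantile(obs_hist, qs)       # observed quantiles (targets)
    return np.interp(model_vals, model_q, obs_q)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, 5000)    # "observed" daily rainfall (mm)
model = 0.7 * obs + 1.0            # model rainfall with a systematic bias
corrected = quantile_map(model, obs, model)
```

    Values outside the calibration range are clamped by `np.interp`; operational schemes treat the tails more carefully.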

  15. Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach.

    PubMed

    Zhu, Qiaohao; Carriere, K C

    2016-01-01

    Publication bias can significantly limit the validity of meta-analysis when drawing conclusions about a research question from independent studies. Most research on detection and correction of publication bias in meta-analysis focuses on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish the two major situations in which publication bias may be induced by (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric Trim and Fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and in correcting publication bias under various situations.
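    The core of the truncated-distribution formulation can be sketched for the simplest case: a fixed-effect model with a known common standard deviation and a known truncation point (the paper also estimates the truncation proportion and handles p-value-based selection).

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)

# simulate study effect sizes; only "large enough" effects get published
mu_true, sigma, cutoff = 0.2, 0.5, 0.1
effects = rng.normal(mu_true, sigma, 20_000)
published = effects[effects > cutoff]

def neg_loglik(mu):
    """Negative log-likelihood of N(mu, sigma^2) truncated below at cutoff."""
    log_f = stats.norm.logpdf(published, loc=mu, scale=sigma)
    log_tail = stats.norm.logsf(cutoff, loc=mu, scale=sigma)  # log P(X > cutoff)
    return -(log_f - log_tail).sum()

fit = optimize.minimize_scalar(neg_loglik, bounds=(-2.0, 2.0), method="bounded")
naive = published.mean()   # inflated by selection
corrected = fit.x          # truncated-normal MLE, close to mu_true
```

    The naive mean of the published effects is badly inflated, while the truncated-normal MLE recovers the underlying overall effect size.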

  16. Correction of contaminated yaw rate signal and estimation of sensor bias for an electric vehicle under normal driving conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Guoguang; Yu, Zitian; Wang, Junmin

    2017-03-01

    Yaw rate is a crucial signal for the motion control systems of ground vehicles, yet it may be contaminated by sensor bias. In order to correct the contaminated yaw rate signal and estimate the sensor bias, a robust gain-scheduling observer is proposed in this paper. First, a two-degree-of-freedom (2DOF) vehicle lateral and yaw dynamic model is presented, and a Luenberger-like observer is proposed. To make the observer more applicable to real vehicle driving operations, a 2DOF vehicle model with uncertainties in the tire cornering stiffness coefficients is employed. Further, a gain-scheduling approach and a robustness enhancement are introduced, leading to a robust gain-scheduling observer. A sensor bias detection mechanism is also designed. Case studies are conducted using an electric ground vehicle to assess the performance of signal correction and sensor bias estimation under different scenarios.

  17. Performance of bias corrected MPEG rainfall estimate for rainfall-runoff simulation in the upper Blue Nile Basin, Ethiopia

    NASA Astrophysics Data System (ADS)

    Worqlul, Abeyou W.; Ayana, Essayas K.; Maathuis, Ben H. P.; MacAlister, Charlotte; Philpot, William D.; Osorio Leyton, Javier M.; Steenhuis, Tammo S.

    2018-01-01

    In many developing countries and remote areas of important ecosystems, good-quality precipitation data are neither available nor readily accessible. Satellite observations and processing algorithms are being extensively used to produce satellite rainfall estimates (SREs). Nevertheless, these products are prone to systematic errors and need extensive validation before they can be used for streamflow simulations. In this study, we investigated and corrected the bias of Multi-Sensor Precipitation Estimate-Geostationary (MPEG) data. The corrected MPEG dataset was used as input to the semi-distributed hydrological model Hydrologiska Byråns Vattenbalansavdelning (HBV) to simulate the discharge of the Gilgel Abay and Gumara watersheds in the Upper Blue Nile basin, Ethiopia. The results indicated that the MPEG satellite rainfall captured 81% and 78% of the gauged rainfall variability, with a consistent bias of underestimating the gauged rainfall by 60%. A linear bias correction significantly reduced the bias while maintaining the correlation coefficient. Flows simulated using the bias-corrected MPEG SREs were comparable to those obtained with gauged rainfall for both watersheds. The study indicated the potential of MPEG SREs in water budget studies after applying a linear bias correction.
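    A linear bias correction of the kind applied to the MPEG estimates can be sketched with synthetic data (the bias magnitude and noise below are illustrative, not the MPEG values). Because the correction is an affine map, it leaves the correlation coefficient unchanged, which is why the bias can be removed while the correlation is maintained.

```python
import numpy as np

rng = np.random.default_rng(3)

gauge = rng.gamma(0.8, 10.0, 365)   # daily gauge rainfall (mm), one year
# satellite estimate underestimating the gauge, with measurement noise
sre = np.clip(0.4 * gauge + rng.normal(0.0, 1.0, 365), 0.0, None)

# fit the linear correction SRE -> gauge on the calibration period
slope, intercept = np.polyfit(sre, gauge, 1)
corrected = slope * sre + intercept

r_raw = np.corrcoef(gauge, sre)[0, 1]
r_cor = np.corrcoef(gauge, corrected)[0, 1]   # unchanged by the affine map
```

    The least-squares fit makes the corrected mean match the gauge mean exactly on the calibration period, while `r_raw` and `r_cor` agree to machine precision.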

  18. Reduction of CMIP5 models bias using Cumulative Distribution Function transform and impact on crops yields simulations across West Africa.

    NASA Astrophysics Data System (ADS)

    Moise Famien, Adjoua; Defrance, Dimitri; Sultan, Benjamin; Janicot, Serge; Vrac, Mathieu

    2017-04-01

    Different CMIP exercises show that simulations of current and future temperature and precipitation are complex, with a high degree of uncertainty. For example, the African monsoon system is not correctly simulated, and most of the CMIP5 models underestimate the precipitation. Global Climate Models (GCMs) therefore show significant systematic biases that require correction before they can be used in impact studies. Several bias correction methods have been developed over the years, relying on increasingly complex statistical techniques. The aim of this work is to show the interest of the CDF-t (Cumulative Distribution Function transform (Michelangeli et al., 2009)) method for reducing the bias of data from 29 CMIP5 GCMs over Africa, and to assess the impact of bias-corrected data on crop yield prediction by the end of the 21st century. In this work, we apply the CDF-t to daily data covering the period 1950 to 2099 (Historical and RCP8.5) and correct the climate variables (temperature, precipitation, solar radiation, wind) using the new daily database from the EU project WATer and global CHange (WATCH), available from 1979 to 2013, as reference data. The performance of the method is assessed in several cases. First, data are corrected based on different calibration periods and compared, on the one hand, with observations to estimate the sensitivity of the method to the calibration period and, on the other hand, with another bias-correction method used in the ISIMIP project. We find that, whatever the calibration period used, CDF-t corrects well the mean state of the variables and preserves their trends, as well as daily rainfall occurrence and intensity distributions. However, some differences appear when compared to the outputs obtained with the method used in ISIMIP, showing that the quality of the correction is strongly related to the reference data. 
Secondly, we validate the bias correction method with agronomic simulations (the SARRA-H model; Kouressy et al., 2008) by comparison with FAO crop yield estimates over West Africa. The impact simulations show that the crop model is sensitive to input data, and they also show decreasing crop yields by the end of this century. Michelangeli, P. A., Vrac, M., & Loukos, H. (2009). Probabilistic downscaling approaches: Application to wind cumulative distribution functions. Geophysical Research Letters, 36(11). Kouressy, M., Dingkuhn, M., Vaksmann, M., & Heinemann, A. B. (2008). Adaptation to diverse semi-arid environments of sorghum genotypes having different plant type and sensitivity to photoperiod. Agric. Forest Meteorol., http://dx.doi.org/10.1016/j.agrformet.2007.09.009
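    A simplified, empirical-CDF version of CDF-t can be sketched as follows. The Gaussian toy data (a +2 model bias and a +1 future shift) are illustrative only; operational implementations handle distribution tails and rainfall occurrence more carefully.

```python
import numpy as np

def ecdf_at(sample, x):
    """Empirical CDF of `sample` evaluated at the points x."""
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

def cdf_t(obs_hist, mod_hist, mod_fut):
    """Simplified empirical CDF-t (after Michelangeli et al., 2009).
    The estimated CDF of future observations is
    F_of(x) = F_oh(F_mh^-1(F_mf(x))); inverting it at u = F_mf(x)
    gives the corrected value F_mf^-1(F_mh(F_oh^-1(u)))."""
    u = ecdf_at(mod_fut, mod_fut)
    return np.quantile(mod_fut, ecdf_at(mod_hist, np.quantile(obs_hist, u)))

rng = np.random.default_rng(4)
obs_hist = rng.normal(0.0, 1.0, 5000)   # historical observations
mod_hist = rng.normal(2.0, 1.0, 5000)   # model, historical run: +2 bias
mod_fut = rng.normal(3.0, 1.0, 5000)    # model, future run: +1 warming signal
corrected = cdf_t(obs_hist, mod_hist, mod_fut)
```

    The corrected future series is distributed near N(1, 1): the +2 model bias is removed while the +1 warming signal is preserved.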

  19. Correcting Biases in a lower resolution global circulation model with data assimilation

    NASA Astrophysics Data System (ADS)

    Canter, Martin; Barth, Alexander

    2016-04-01

    With this work, we aim at developing a new method of bias correction using data assimilation, based on the stochastic forcing of a model. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is directly added inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice-ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global, low-resolution (2 degrees) coupled model (hydrodynamic and sea ice) with long time steps, allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim at correcting this bias by using perturbed current fields from higher-resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean and not create unwanted phenomena. To construct these random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small-scale variations. We use this field as a random stream function, and take its derivatives to get zonal and meridional velocity fields. 
We also constrain the stream function along the coasts so as not to produce currents perpendicular to the coast. The randomly generated stochastic forcings are then injected directly into the NEMO LIM model's equations in order to force the model at each timestep, and not only during the assimilation step. Results from a twin experiment will be presented. The method is also being applied to a real case, with sea surface height observations available from the mean dynamic topography of CNES (Centre national d'études spatiales). The model, the bias correction, and more extensive forcings, in particular with a three-dimensional structure and a time-varying component, will also be presented.

  20. Two-compartment modeling of tissue microcirculation revisited.

    PubMed

    Brix, Gunnar; Salehi Ravesh, Mona; Griebel, Jürgen

    2017-05-01

    Conventional two-compartment modeling of tissue microcirculation is used for tracer kinetic analysis of dynamic contrast-enhanced (DCE) computed tomography or magnetic resonance imaging studies although it is well-known that the underlying assumption of an instantaneous mixing of the administered contrast agent (CA) in capillaries is far from being realistic. It was thus the aim of the present study to provide theoretical and computational evidence in favor of a conceptually alternative modeling approach that makes it possible to characterize the bias inherent to compartment modeling and, moreover, to approximately correct for it. Starting from a two-region distributed-parameter model that accounts for spatial gradients in CA concentrations within blood-tissue exchange units, a modified lumped two-compartment exchange model was derived. It has the same analytical structure as the conventional two-compartment model, but indicates that the apparent blood flow identifiable from measured DCE data is substantially overestimated, whereas the three other model parameters (i.e., the permeability-surface area product as well as the volume fractions of the plasma and interstitial distribution space) are unbiased. Furthermore, a simple formula was derived to approximately compute a bias-corrected flow from the estimates of the apparent flow and permeability-surface area product obtained by model fitting. To evaluate the accuracy of the proposed modeling and bias correction method, representative noise-free DCE curves were analyzed. They were simulated for 36 microcirculation and four input scenarios by an axially distributed reference model. As analytically proven, the considered two-compartment exchange model is structurally identifiable from tissue residue data. The apparent flow values estimated for the 144 simulated tissue/input scenarios were considerably biased. 
After bias-correction, the deviations between estimated and actual parameter values were (11.2 ± 6.4) % (vs. (105 ± 21) % without correction) for the flow, (3.6 ± 6.1) % for the permeability-surface area product, (5.8 ± 4.9) % for the vascular volume and (2.5 ± 4.1) % for the interstitial volume; with individual deviations of more than 20% being the exception and just marginal. Increasing the duration of CA administration only had a statistically significant but opposite effect on the accuracy of the estimated flow (declined) and intravascular volume (improved). Physiologically well-defined tissue parameters are structurally identifiable and accurately estimable from DCE data by the conceptually modified two-compartment model in combination with the bias correction. The accuracy of the bias-corrected flow is nearly comparable to that of the three other (theoretically unbiased) model parameters. As compared to conventional two-compartment modeling, this feature constitutes a major advantage for tracer kinetic analysis of both preclinical and clinical DCE imaging studies. © 2017 American Association of Physicists in Medicine.

  1. A bias-corrected CMIP5 dataset for Africa using the CDF-t method - a contribution to agricultural impact studies

    NASA Astrophysics Data System (ADS)

    Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas

    2018-03-01

    The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here CDF-t has been applied over the period 1950-2099, combining Historical runs and climate change scenarios, for six variables: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed, which are critical variables for agricultural purposes. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders, including simulated yield from a crop model of maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs that uses the WATCH Forcing Data as the reference dataset. The impact of the WFD, WFDEI, and EWEMBI reference datasets has also been examined in detail. It is shown that CDF-t is very effective at removing biases and reducing the high inter-GCM scattering. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact on simulated maize yields. Projections of future yields over West Africa differ depending on the bias-correction method used; however, all these projections show a similar relative decreasing trend over the 21st century.

  2. Corrected ROC analysis for misclassified binary outcomes.

    PubMed

    Zawistowski, Matthew; Sussman, Jeremy B; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A; Wiitala, Wyndy L

    2017-06-15

    Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  3. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    PubMed

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches-the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation-with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
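    The mechanics of the Firth correction are easiest to see for ordinary (unconditional) logistic regression; the sketch below is that generic case, not the conditional Poisson SCCS likelihood evaluated in the paper.

```python
import numpy as np

def firth_logistic(X, y, n_iter=50):
    """Firth-penalized logistic regression via Newton iterations.
    The usual score X'(y - p) is replaced by X'(y - p + h (1/2 - p)),
    where h holds the leverages of the weighted hat matrix; this keeps
    estimates finite under separation and reduces small-sample bias."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        info_inv = np.linalg.inv(X.T @ (w[:, None] * X))
        h = w * np.einsum("ij,jk,ik->i", X, info_inv, X)   # leverages
        beta = beta + info_inv @ (X.T @ (y - p + h * (0.5 - p)))
    return beta

# four observations with complete separation: the plain MLE diverges,
# while the Firth estimate stays finite
X = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 1.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
beta = firth_logistic(X, y)
```

    On this separated 2x2 example the Firth estimate coincides with the classic add-1/2-to-each-cell estimate, i.e. a slope of 2 ln 5 ≈ 3.22, whereas the unpenalized MLE would run off to infinity.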

  4. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  5. Mixed Model Association with Family-Biased Case-Control Ascertainment.

    PubMed

    Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L

    2017-01-05

    Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  6. Skin Temperature Analysis and Bias Correction in a Coupled Land-Atmosphere Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Radakovich, Jon D.; daSilva, Arlindo; Todling, Ricardo; Verter, Frances

    2006-01-01

    In an initial investigation, remotely sensed surface temperature is assimilated into a coupled atmosphere/land global data assimilation system, with explicit accounting for biases in the model state. In this scheme, an incremental bias correction term is introduced in the model's surface energy budget. In its simplest form, the algorithm estimates and corrects a constant time-mean bias for each gridpoint; additional benefits are attained with a refined version of the algorithm which allows for a correction of the mean diurnal cycle. The method is validated against the assimilated observations, as well as independent near-surface air temperature observations. In many regions, not accounting for the diurnal cycle of bias caused degradation of the diurnal amplitude of background model air temperature. Energy fluxes collected through the Coordinated Enhanced Observing Period (CEOP) are used to more closely inspect the surface energy budget. In general, sensible heat flux is improved with the surface temperature assimilation, and two stations show a reduction of bias by as much as 30 W m⁻². At the Rondonia station in Amazonia, the Bowen ratio changes direction in an improvement related to the temperature assimilation. However, at many stations the monthly latent heat flux bias is slightly increased. These results show the impact of univariate assimilation of surface temperature observations on the surface energy budget, and suggest the need for multivariate land data assimilation. The results also show the need for independent validation data, especially flux stations in varied climate regimes.
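    The constant time-mean variant of the incremental bias correction can be illustrated with a scalar toy model. The numbers below are illustrative, and the diurnal-cycle refinement and the full analysis step are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

truth = 290.0 + 5.0 * np.sin(np.linspace(0.0, 40.0 * np.pi, 4000))  # skin T (K)
model_bias = 2.5                                 # constant forecast bias (K)
obs = truth + rng.normal(0.0, 0.5, truth.size)   # assimilated observations

gamma = 0.05   # bias learning rate per analysis cycle
b = 0.0        # running estimate of the constant bias
cor_err = np.empty(truth.size)
for k in range(truth.size):
    forecast = truth[k] + model_bias   # stand-in for the biased background
    corrected = forecast - b           # bias-corrected background
    cor_err[k] = corrected - truth[k]
    b += gamma * (corrected - obs[k])  # incremental bias update

final_bias_estimate = b
late_error = np.abs(cor_err[-1000:]).mean()   # error after spin-up
```

    After a short spin-up the running estimate converges to the 2.5 K bias, and the corrected background error drops to the noise level.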

  7. Estimation and correction of visibility bias in aerial surveys of wintering ducks

    USGS Publications Warehouse

    Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.

    2008-01-01

    Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1-100 decoys), and the interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36-42%, and associated standard errors increased 38-55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
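    The weight-adjustment logic can be sketched as a Horvitz-Thompson-style estimator. The logistic sightability coefficients below are hypothetical, and the detection probabilities are taken as known, whereas the study estimates them by logistic regression on the decoy experiment.

```python
import numpy as np

rng = np.random.default_rng(6)

def detection_prob(group_size, forested):
    """Hypothetical logistic sightability model (coefficients illustrative)."""
    logit = -0.5 + 0.03 * group_size - 1.2 * forested
    return 1.0 / (1.0 + np.exp(-logit))

# simulate true duck groups along surveyed transects
n_groups = 4000
size = rng.integers(1, 101, n_groups)      # 1-100 birds per group
forested = rng.integers(0, 2, n_groups)    # wetland cover type
true_total = size.sum()

p_det = detection_prob(size, forested)
seen = rng.random(n_groups) < p_det        # which groups are detected
count_bias = 0.78                          # observers count ~78% of the birds
counted = np.round(count_bias * size[seen])

naive_total = counted.sum()
# weight adjustment: divide each count by its group detection probability
# and by the counting bias
corrected_total = (counted / (p_det[seen] * count_bias)).sum()
```

    The naive sum of reported counts falls far below the true abundance, while the weight-adjusted estimate is approximately unbiased.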

  8. Use of statistically and dynamically downscaled atmospheric model output for hydrologic simulations in three mountainous basins in the western United States

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.

    2003-01-01

    This paper examines hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically, but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, but in the case of the large station timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown if bias corrections to model output will be valid in a future climate. 
Future work is warranted to identify the causes of (and to remove) systematic biases in DDS simulations, and to improve DDS simulations of daily variability in local climate. Until then, SDS-based simulations of runoff appear to be the safer downscaling choice.

  9. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Kβ*, where K is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Kβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Kβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Kβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. 
Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
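The key device in this abstract, incorporating the second-moment matrix of the model errors into the regression weight matrix, reduces to generalized least squares in the linearized case. A minimal numpy sketch on a hypothetical linear model (the covariances, dimensions, and parameter values are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear(ized) model: observations = X @ beta_star + model error + noise.
n, p = 40, 2
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
beta_true = np.array([2.0, -1.5])

# Spatially correlated model error (exponential covariance), as in the abstract.
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C_model = 0.05 * np.exp(-d / 5.0)   # second-moment matrix of the model errors
C_obs = 0.01 * np.eye(n)            # measurement-noise covariance

y = X @ beta_true + rng.multivariate_normal(np.zeros(n), C_model + C_obs)

# Weight matrix incorporating model error: W = (C_obs + C_model)^-1 (GLS).
W = np.linalg.inv(C_obs + C_model)
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Parameter covariance under the correct weighting; ignoring C_model
# (ordinary least squares) would understate these uncertainties.
cov_gls = np.linalg.inv(X.T @ W @ X)
print(beta_gls, np.sqrt(np.diag(cov_gls)))
```

Using a diagonal weight matrix in place of W here is exactly the simplification whose interval-correction factors the paper derives.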

  10. A robust method using propensity score stratification for correcting verification bias for binary tests

    PubMed Central

    He, Hua; McDermott, Michael P.

    2012-01-01

    Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
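The stratification idea can be sketched on simulated data. This is a hedged toy illustration: the cell-wise propensity estimate below stands in for the logistic model one would fit with continuous covariates, and the final sensitivity/specificity reconstruction uses the standard Begg-Greenes identities rather than the authors' exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Simulated data: true disease D, binary test T (sens 0.9, spec 0.8), covariate x.
d = rng.random(n) < 0.3
t = np.where(d, rng.random(n) < 0.9, rng.random(n) < 0.2)
x = rng.random(n) < 0.5

# Verification depends on test result and covariate (MAR given T, x).
p_verify = 0.8 * t + 0.2 * (~t) + 0.1 * x
v = rng.random(n) < np.clip(p_verify, 0.0, 1.0)

# Propensity score: P(verified | T, x), estimated cell-wise here.
ps = np.zeros(n)
for ti in (0, 1):
    for xi in (0, 1):
        cell = (t == ti) & (x == xi)
        ps[cell] = v[cell].mean()

# Within each test arm, stratify on the propensity score and estimate
# P(D=1 | T=t) from verified subjects, weighting strata by their full size.
def corrected_p_disease(arm):
    strata = np.unique(ps[arm])
    num = sum(d[arm & v & (ps == s)].mean() * (arm & (ps == s)).sum()
              for s in strata)
    return num / arm.sum()

p_d_pos = corrected_p_disease(t)    # P(D=1 | T=1)
p_d_neg = corrected_p_disease(~t)   # P(D=1 | T=0)
p_t = t.mean()

# Begg-Greenes style reconstruction of sensitivity and specificity.
sens = p_d_pos * p_t / (p_d_pos * p_t + p_d_neg * (1 - p_t))
spec = (1 - p_d_neg) * (1 - p_t) / ((1 - p_d_pos) * p_t + (1 - p_d_neg) * (1 - p_t))
print(sens, spec)
```

Because verification here is far more likely for test-positives, the naive estimates computed from verified subjects alone would be biased; the stratified estimates recover values near the true 0.9 and 0.8.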

  11. Data assimilation in integrated hydrological modelling in the presence of observation bias

    NASA Astrophysics Data System (ADS)

    Rasmussen, J.; Madsen, H.; Jensen, K. H.; Refsgaard, J. C.

    2015-08-01

The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment-scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both stream flow and groundwater modeling. The Colored Noise Kalman filter (ColKF) and the Separate bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using either of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved stream flow modeling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modeling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behavior and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.
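The core mechanism of a bias-aware filter, appending the observation bias to the state vector and letting the filter estimate it, can be shown in a small twin experiment. This is a hedged 1-D sketch, not the paper's ensemble setup: a random-walk "head" state is observed with an unknown constant bias, alongside a second, unbiased ("discharge-like") observation that makes the bias identifiable; all noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Twin experiment: random-walk state x, biased and unbiased observations.
T, q, b_true = 300, 0.05, 0.8
x_true = np.cumsum(rng.normal(0.0, np.sqrt(q), T))
y = np.column_stack([
    x_true + b_true + rng.normal(0.0, 0.3, T),   # biased head observation
    x_true + rng.normal(0.0, 0.4, T),            # unbiased observation
])

H = np.array([[1.0, 1.0],    # biased channel sees state + bias
              [1.0, 0.0]])   # unbiased channel sees state only
R = np.diag([0.09, 0.16])
Q = np.diag([q, 1e-6])       # tiny bias process noise keeps the filter adaptive

z = np.zeros(2)              # augmented state [x, b]
P = np.eye(2)
for k in range(T):
    P = P + Q                           # predict (state transition is identity)
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    z = z + K @ (y[k] - H @ z)          # update with both observations
    P = (np.eye(2) - K @ H) @ P

print(z[1])   # estimated observation bias, which should approach b_true
```

A bias-unaware filter on the same data would fold the 0.8 offset into the state estimate, which is the drifting-model behaviour the abstract describes.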

  12. Data assimilation in integrated hydrological modelling in the presence of observation bias

    NASA Astrophysics Data System (ADS)

    Rasmussen, Jørn; Madsen, Henrik; Høgh Jensen, Karsten; Refsgaard, Jens Christian

    2016-05-01

The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment-scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both streamflow and groundwater modelling. The coloured noise Kalman filter (ColKF) and the separate-bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using either of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved streamflow modelling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modelling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behaviour and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.

  13. Analytic Methods for Adjusting Subjective Rating Schemes.

    ERIC Educational Resources Information Center

    Cooper, Richard V. L.; Nelson, Gary R.

    Statistical and econometric techniques of correcting for supervisor bias in models of individual performance appraisal were developed, using a variant of the classical linear regression model. Location bias occurs when individual performance is systematically overestimated or underestimated, while scale bias results when raters either exaggerate…

  14. Bootstrap Estimation of Sample Statistic Bias in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Fan, Xitao

    This study empirically investigated bootstrap bias estimation in the area of structural equation modeling (SEM). Three correctly specified SEM models were used under four different sample size conditions. Monte Carlo experiments were carried out to generate the criteria against which bootstrap bias estimation should be judged. For SEM fit indices,…

  15. Bias correction of satellite precipitation products for flood forecasting application at the Upper Mahanadi River Basin in Eastern India

    NASA Astrophysics Data System (ADS)

    Beria, H.; Nanda, T., Sr.; Chatterjee, C.

    2015-12-01

High resolution satellite precipitation products such as the Tropical Rainfall Measuring Mission (TRMM), Climate Forecast System Reanalysis (CFSR), and European Centre for Medium-Range Weather Forecasts (ECMWF) products offer a promising alternative for flood forecasting in data-scarce regions. At the current state of the art, these products cannot be used in raw form for flood forecasting, even at small lead times. In the current study, these precipitation products are bias corrected using statistical techniques, such as additive and multiplicative bias corrections and wavelet multi-resolution analysis (MRA), against the India Meteorological Department (IMD) gridded precipitation product obtained from gauge-based rainfall estimates. Neural network based rainfall-runoff modeling using these bias corrected products provides encouraging results for flood forecasting up to 48 hours lead time. We will present various statistical and graphical interpretations of catchment response to high rainfall events using both the raw and bias corrected precipitation products at different lead times.
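The two simplest techniques named above, additive and multiplicative bias correction, amount to matching the mean of the satellite product to the gauge mean. A minimal numpy sketch with simulated daily series (the gamma rainfall model and error parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated daily series: gauge "truth" and a satellite product with
# both an offset and a scale error.
gauge = rng.gamma(0.7, 8.0, 365)
sat = np.clip(1.4 * gauge + 1.0 + rng.normal(0.0, 1.0, 365), 0.0, None)

# Additive correction: shift the product so its mean matches the gauge mean.
sat_add = sat - (sat.mean() - gauge.mean())

# Multiplicative correction: rescale so the means match (common for rainfall,
# since it preserves zeros and non-negativity).
sat_mul = sat * (gauge.mean() / sat.mean())

print(gauge.mean(), sat_add.mean(), sat_mul.mean())
```

Note that the additive shift can produce negative rainfall on dry days, which is one reason multiplicative scaling is usually preferred for precipitation.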

  16. Performance evaluation and bias correction of DBS measurements for a 1290-MHz boundary layer profiler.

    PubMed

    Liu, Zhao; Zheng, Chaorong; Wu, Yue

    2018-02-01

A boundary layer profiler (BLP) operated in Doppler beam swinging (DBS) mode was recently installed by the government in a coastal area of China to acquire wind field information in the atmospheric boundary layer for several purposes. Its performance is evaluated under strong wind conditions. It is found that, even though the quality-controlled BLP data show good agreement with balloon observations, a systematic bias is consistently present in the BLP data: at low wind velocities the BLP tends to overestimate the atmospheric wind, whereas with increasing wind velocity it tends toward underestimation. In order to remove the effect of poor-quality data on the bias correction, the probability distribution of the differences between the two instruments is examined, and the t location-scale distribution is found to be the most suitable among the probability models compared. After outliers with large discrepancies, lying outside the 95% confidence interval of the t location-scale distribution, are discarded, the systematic bias can be successfully corrected using a first-order polynomial correction function. The bias correction methodology used in this study can serve as a reference for correcting other wind profiling radars and lays a solid basis for further analysis of the wind profiles.
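The screen-then-fit pipeline can be sketched as follows. This is a hedged illustration on simulated pairs: an empirical percentile interval stands in for the paper's fitted t location-scale distribution, and all numbers (the 0.9 slope, 1.5 offset, outlier magnitude) are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated paired data: balloon wind speed (reference) and BLP wind speed
# that overestimates low winds and underestimates high winds, plus outliers.
balloon = rng.uniform(2.0, 30.0, 500)
blp = 1.5 + 0.9 * balloon + rng.normal(0.0, 1.0, 500)
blp[:10] += 8.0                      # a few gross outliers

# Outlier screening: keep pairs whose difference falls inside a central 95%
# interval (stand-in for the 95% CI of the fitted t location-scale model).
diff = blp - balloon
lo, hi = np.percentile(diff, [2.5, 97.5])
keep = (diff >= lo) & (diff <= hi)

# First-order polynomial correction mapping BLP readings onto the balloon scale.
a, b = np.polyfit(blp[keep], balloon[keep], 1)
blp_corr = a * blp + b

print(a, b)
```

After the fit, `blp_corr` has (by construction) zero mean difference from the balloon reference on the retained pairs, and the slope greater than 1 undoes the profiler's compression of high wind speeds.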

  17. Performance evaluation and bias correction of DBS measurements for a 1290-MHz boundary layer profiler

    NASA Astrophysics Data System (ADS)

    Liu, Zhao; Zheng, Chaorong; Wu, Yue

    2018-02-01

A boundary layer profiler (BLP) operated in Doppler beam swinging (DBS) mode was recently installed by the government in a coastal area of China to acquire wind field information in the atmospheric boundary layer for several purposes. Its performance is evaluated under strong wind conditions. It is found that, even though the quality-controlled BLP data show good agreement with balloon observations, a systematic bias is consistently present in the BLP data: at low wind velocities the BLP tends to overestimate the atmospheric wind, whereas with increasing wind velocity it tends toward underestimation. In order to remove the effect of poor-quality data on the bias correction, the probability distribution of the differences between the two instruments is examined, and the t location-scale distribution is found to be the most suitable among the probability models compared. After outliers with large discrepancies, lying outside the 95% confidence interval of the t location-scale distribution, are discarded, the systematic bias can be successfully corrected using a first-order polynomial correction function. The bias correction methodology used in this study can serve as a reference for correcting other wind profiling radars and lays a solid basis for further analysis of the wind profiles.

  18. Impact of Bias-Correction Type and Conditional Training on Bayesian Model Averaging over the Northeast United States

    Treesearch

    Michael J. Erickson; Brian A. Colle; Joseph J. Charney

    2012-01-01

    The performance of a multimodel ensemble over the northeast United States is evaluated before and after applying bias correction and Bayesian model averaging (BMA). The 13-member Stony Brook University (SBU) ensemble at 0000 UTC is combined with the 21-member National Centers for Environmental Prediction (NCEP) Short-Range Ensemble Forecast (SREF) system at 2100 UTC....

  19. Bias correction of satellite-based rainfall data

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Biswa; Solomatine, Dimitri

    2015-04-01

Limited availability of hydro-meteorological data in many catchments restricts the possibility of reliable hydrological analyses, especially for near-real-time predictions. However, the variety of satellite-based and meteorological-model rainfall products provides new opportunities. Often the accuracy of these rainfall products, when compared to rain gauge measurements, is not impressive. The systematic differences of these rainfall products from gauge observations can be partially compensated by adopting a bias (error) correction. Many such methods correct the satellite-based rainfall data by comparing their mean value to the mean value of rain gauge data. Refined approaches may first identify a suitable time scale at which the different data products are better comparable and then employ a bias correction at that time scale. More elegant methods use quantile-to-quantile bias correction, which, however, assumes that the available (often limited) sample size is sufficient for comparing probabilities of the different rainfall products. Analysis of rainfall data and understanding of the process of its generation reveal that the bias in different rainfall data varies in space and time. The time aspect is sometimes taken into account by considering seasonality. In this research we have adopted a bias correction approach that takes into account the variation of rainfall in space and time. A clustering-based approach is employed in which every new data point (e.g. from the Tropical Rainfall Measuring Mission (TRMM)) is first assigned to a specific cluster of that data product; then, by identifying the corresponding cluster of gauge data, the bias correction specific to that cluster is applied. The presented approach considers the space-time variation of rainfall, and as a result the corrected data are more realistic. Keywords: bias correction, rainfall, TRMM, satellite rainfall
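The quantile-to-quantile correction mentioned above can be written compactly with empirical CDFs: each satellite value is mapped through the satellite product's CDF onto the corresponding gauge quantile. A minimal numpy sketch with simulated gamma-distributed rainfall (the distribution parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated daily rainfall: gauge observations vs a satellite product with
# a distorted distribution (too many light-rain days, damped extremes).
gauge = rng.gamma(0.8, 10.0, 2000)
sat = rng.gamma(1.6, 4.0, 2000)

def quantile_map(x, model_ref, obs_ref):
    """Map each value of x through the empirical CDF of model_ref onto the
    quantiles of obs_ref (quantile-to-quantile bias correction)."""
    ranks = np.searchsorted(np.sort(model_ref), x, side="right") / len(model_ref)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(obs_ref, ranks)

sat_qm = quantile_map(sat, sat, gauge)
print(np.quantile(gauge, [0.5, 0.9, 0.99]))
print(np.quantile(sat_qm, [0.5, 0.9, 0.99]))
```

The clustering approach of this abstract would apply a mapping like this per cluster rather than globally, so that the correction varies in space and time.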

  20. Analysis and correction of gradient nonlinearity bias in apparent diffusion coefficient measurements.

    PubMed

    Malyarenko, Dariya I; Ross, Brian D; Chenevert, Thomas L

    2014-03-01

Gradient nonlinearity of MRI systems leads to spatially dependent b-values and consequently high non-uniformity errors (10-20%) in apparent diffusion coefficient (ADC) measurements over clinically relevant fields of view. This work seeks a practical correction procedure that effectively reduces observed ADC bias for media of arbitrary anisotropy in the fewest measurements. The all-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. Correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Spatial dependence of the nonlinearity correction terms accounts for the bulk (75-95%) of the ADC bias for FA = 0.3-0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for an anisotropic medium is achieved with non-lab-based diffusion gradients. Copyright © 2013 Wiley Periodicals, Inc.

  1. Analysis and correction of gradient nonlinearity bias in ADC measurements

    PubMed Central

    Malyarenko, Dariya I.; Ross, Brian D.; Chenevert, Thomas L.

    2013-01-01

Purpose Gradient nonlinearity of MRI systems leads to spatially dependent b-values and consequently high non-uniformity errors (10–20%) in ADC measurements over clinically relevant fields of view. This work seeks a practical correction procedure that effectively reduces observed ADC bias for media of arbitrary anisotropy in the fewest measurements. Methods The all-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. Correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Results Spatial dependence of the nonlinearity correction terms accounts for the bulk (75–95%) of the ADC bias for FA = 0.3–0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. Conclusions The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for an anisotropic medium is achieved with non-lab-based diffusion gradients. PMID:23794533

  2. Investigating bias in squared regression structure coefficients

    PubMed Central

    Nimon, Kim F.; Zientek, Linda R.; Thompson, Bruce

    2015-01-01

    The importance of structure coefficients and analogs of regression weights for analysis within the general linear model (GLM) has been well-documented. The purpose of this study was to investigate bias in squared structure coefficients in the context of multiple regression and to determine if a formula that had been shown to correct for bias in squared Pearson correlation coefficients and coefficients of determination could be used to correct for bias in squared regression structure coefficients. Using data from a Monte Carlo simulation, this study found that squared regression structure coefficients corrected with Pratt's formula produced less biased estimates and might be more accurate and stable estimates of population squared regression structure coefficients than estimates with no such corrections. While our findings are in line with prior literature that identified multicollinearity as a predictor of bias in squared regression structure coefficients but not coefficients of determination, the findings from this study are unique in that the level of predictive power, number of predictors, and sample size were also observed to contribute bias in squared regression structure coefficients. PMID:26217273
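Squared structure coefficients are squared correlations between a predictor and the predicted scores, so they inherit the familiar upward small-sample bias of squared correlation statistics. As an illustration of shrinkage-correcting such a statistic, the Monte Carlo sketch below uses Ezekiel's adjustment as a stand-in (Pratt's formula, which the study actually evaluates, is not reproduced here); the design values are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Ezekiel-style shrinkage correction of a squared multiple correlation.
def ezekiel(r2, n, p):
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

n, p, reps = 30, 3, 2000
rho2 = 0.25                                 # population squared correlation
beta = np.array([0.5, 0.5, 0.5])
beta *= np.sqrt(rho2 / (1 - rho2) / (beta @ beta))   # scale to hit rho2

raw, adj = [], []
for _ in range(reps):
    X = rng.normal(size=(n, p))
    y = X @ beta + rng.normal(size=n)
    Xc = np.column_stack([np.ones(n), X])
    resid = y - Xc @ np.linalg.lstsq(Xc, y, rcond=None)[0]
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    raw.append(r2)
    adj.append(ezekiel(r2, n, p))

print(np.mean(raw), np.mean(adj))   # raw R^2 overestimates rho2; adjusted is closer
```

The simulation makes the study's point concrete: with n = 30 and three predictors, the uncorrected statistic is noticeably inflated while the shrinkage-corrected average sits near the population value.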

  3. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    PubMed

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin-layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin-layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more significant at smaller thicknesses. We proposed a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin-layer samples, and offered a simple and practical correction strategy which is convenient for clinicians to use.

  4. Assessing climate change impacts on the rape stem weevil, Ceutorhynchus napi Gyll., based on bias- and non-bias-corrected regional climate change projections.

    PubMed

    Junk, J; Ulber, B; Vidal, S; Eickermann, M

    2015-11-01

Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of agricultural production is the level of pest infestation. Based on long-term field surveys and meteorological observations, it was possible to calibrate an existing model describing the migration of the pest insect Ceutorhynchus napi. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only the bias-corrected output of the regional climate models yielded satisfying results. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.
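The linear correction for air temperature mentioned above is often implemented as monthly linear scaling: add the month-wise observed-minus-modelled mean difference to the model series. A minimal numpy sketch (the seasonal cycle and the invented cold bias are for illustration only; the abstract does not specify the exact form of its linear correction):

```python
import numpy as np

rng = np.random.default_rng(7)

# 30 years of monthly temperatures: observations and an RCM with a
# seasonally varying cold bias.
months = np.tile(np.arange(12), 30)
seasonal = 10.0 + 8.0 * np.sin(2 * np.pi * months / 12)
obs = seasonal + rng.normal(0.0, 1.5, months.size)
rcm = (seasonal - 2.0 + 0.3 * np.cos(2 * np.pi * months / 12)
       + rng.normal(0.0, 1.5, months.size))

# Month-wise additive correction: delta_m = mean(obs_m) - mean(rcm_m).
delta = np.array([(obs[months == m] - rcm[months == m]).mean()
                  for m in range(12)])
rcm_corr = rcm + delta[months]

print(np.abs(obs.mean() - rcm_corr.mean()))
```

By construction the corrected series reproduces the observed monthly climatology exactly over the calibration period; under a projection, the same deltas are added to the future model output.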

  5. Assessing climate change impacts on the rape stem weevil, Ceutorhynchus napi Gyll., based on bias- and non-bias-corrected regional climate change projections

    NASA Astrophysics Data System (ADS)

    Junk, J.; Ulber, B.; Vidal, S.; Eickermann, M.

    2015-11-01

Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of agricultural production is the level of pest infestation. Based on long-term field surveys and meteorological observations, it was possible to calibrate an existing model describing the migration of the pest insect Ceutorhynchus napi. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only the bias-corrected output of the regional climate models yielded satisfying results. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.

  6. Assessing the Added Value of Dynamical Downscaling in the Context of Hydrologic Implication

    NASA Astrophysics Data System (ADS)

    Lu, M.; IM, E. S.; Lee, M. H.

    2017-12-01

There is a scientific consensus that high-resolution climate simulations downscaled by Regional Climate Models (RCMs) can provide valuable refined information over a target region. However, a significant body of hydrologic impact assessment has been performed using climate information provided by Global Climate Models (GCMs) in spite of a fundamental spatial scale gap. This practice is probably based on the assumption that the substantial biases and the spatial scale gap in GCM raw data can simply be removed by applying statistical bias correction and spatial disaggregation. Indeed, many previous studies argue that the benefit of dynamical downscaling using RCMs is minimal when linking climate data with a hydrological model, based on comparisons of the impacts of bias-corrected GCM and bias-corrected RCM outputs on hydrologic simulations. This may be true for long-term averaged climatological patterns, but it is not necessarily the case when looking into variability across the temporal spectrum. In this study, we investigate the added value of dynamical downscaling, focusing on the performance in capturing climate variability. To do this, we evaluate the performance of a distributed hydrological model over a Korean river basin using the raw output from a GCM and an RCM, and the bias-corrected output from the GCM and RCM. The impacts of climate input data on streamflow simulation are comprehensively analyzed. [Acknowledgements] This research is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 17AWMP-B083066-04).

  7. Climate projections and extremes in dynamically downscaled CMIP5 model outputs over the Bengal delta: a quartile based bias-correction approach with new gridded data

    NASA Astrophysics Data System (ADS)

    Hasan, M. Alfi; Islam, A. K. M. Saiful; Akanda, Ali Shafqat

    2017-11-01

In the era of global warming, insight into the future climate and its changing extremes is critical for climate-vulnerable regions of the world. In this study, we have conducted a robust assessment of Regional Climate Model (RCM) results in a monsoon-dominated region within the new Coupled Model Intercomparison Project Phase 5 (CMIP5) and the latest Representative Concentration Pathways (RCP) scenarios. We have applied an advanced bias correction approach to five RCM simulations in order to project future climate and associated extremes over Bangladesh, a critically climate-vulnerable country with a complex monsoon system. We have also generated a new gridded product that performed better in capturing observed climatic extremes than existing products. The bias correction approach provided a notable improvement in capturing precipitation extremes as well as the mean climate. The majority of the projected multi-model RCMs indicate an increase of rainfall, although one model shows contrary results during the 2080s (2071-2100). The multi-model mean shows that nighttime temperatures will increase much faster than daytime temperatures and that average annual temperatures are projected to be as hot as present-day summer temperatures. The expected increases of precipitation and temperature over the hilly areas are higher than in other parts of the country. Overall, the projected extremes of future rainfall are more variable than those of temperature. According to the majority of the models, the number of heavy rainy days will increase in future years. The severity of summer-day temperatures will be alarming, especially over hilly regions, where winters are relatively warm. The projected rise of both precipitation and temperature extremes over the intense-rainfall-prone northeastern region of the country creates a possibility of devastating flash floods with harmful impacts on agriculture. 
Moreover, the effect of bias correction, as presented in the probable changes of both bias-corrected and uncorrected extremes, should be considered in future policy making.

  8. Joint release rate estimation and measurement-by-measurement model correction for atmospheric radionuclide emission in nuclear accidents: An application to wind tunnel experiments.

    PubMed

    Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng

    2018-03-05

The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved for by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and the extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
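The alternating-minimization idea can be sketched in a deliberately simplified form. This is a hedged toy model, not the paper's formulation: measurements y relate to unit-release model predictions m through a scalar release rate q and per-measurement correction coefficients c (y ≈ c·m·q), with the coefficients regularized toward 1 to resolve the scale ambiguity; all values are invented:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy data: unit-release predictions m, multiplicative model bias c_true,
# true release rate q_true, and small measurement noise.
n, q_true = 50, 5.0
m = rng.uniform(0.5, 2.0, n)
c_true = 1.0 + rng.normal(0.0, 0.2, n)
y = c_true * m * q_true + rng.normal(0.0, 0.05, n)

lam = 1.0                       # regularization pulling coefficients toward 1
c, q = np.ones(n), 1.0
for _ in range(100):
    # q-step: least squares for the release rate with coefficients fixed.
    a = c * m
    q = (a @ y) / (a @ a)
    # c-step: per-measurement ridge solution of (y - c*b)^2 + lam*(c - 1)^2.
    b = m * q
    c = (b * y + lam) / (b * b + lam)

print(q)   # release-rate estimate, near q_true
```

Each half-step is a closed-form convex minimization, so the objective decreases monotonically; the paper's version replaces the scalar q with a time-resolved release profile and a full correction matrix.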

  9. Improved Correction of Misclassification Bias With Bootstrap Imputation.

    PubMed

    van Walraven, Carl

    2018-07-01

Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias and requires only code sensitivity and specificity, but it may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. The prevalence of severe renal failure in 86 patient strata and its association with 43 covariates were determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
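The sensitivity/specificity-only correction at the heart of QBA is the classic Rogan-Gladen back-calculation. A minimal sketch using the code characteristics and true prevalence quoted in this abstract (the arithmetic, not the study's data, is reproduced here):

```python
# Rogan-Gladen style correction of a prevalence estimated from imperfect
# diagnostic codes, using the sensitivity/specificity from the abstract.
sens, spec = 0.713, 0.962
true_prev = 0.074

# Apparent prevalence implied by the imperfect codes.
apparent = sens * true_prev + (1 - spec) * (1 - true_prev)

# QBA-style back-correction: p = (apparent + spec - 1) / (sens + spec - 1).
corrected = (apparent + spec - 1) / (sens + spec - 1)

print(apparent, corrected)
```

The apparent prevalence (about 8.8%) overstates the true 7.4%, matching the exaggeration the abstract reports. The formula also shows where QBA's "invalid results" come from: if the observed apparent prevalence falls below 1 - spec (or above sens) by sampling noise, the corrected estimate lands outside [0, 1].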

  10. Attenuation correction for the large non-human primate brain imaging using microPET.

    PubMed

    Naidoo-Variawa, S; Lehnert, W; Kassiou, M; Banati, R; Meikle, S R

    2010-04-21

Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (Papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a (57)Co transmission point source with a 4% energy window. The optimal energy window for a (68)Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for (57)Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [(18)F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single pass (57)Co (4% energy window) or (68)Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.

  11. Attenuation correction for the large non-human primate brain imaging using microPET

    NASA Astrophysics Data System (ADS)

    Naidoo-Variawa, S.; Lehnert, W.; Kassiou, M.; Banati, R.; Meikle, S. R.

    2010-04-01

    Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (Papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a 57Co transmission point source with a 4% energy window. The optimal energy window for a 68Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for 57Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [18F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single-pass 57Co (4% energy window) or 68Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.
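Measured attenuation correction of the kind evaluated above reduces, at its core, to forming blank-to-transmission count ratios and multiplying them into the emission data. A minimal sketch, illustrative only: function names and numbers are hypothetical, and real PET processing involves reconstruction, scatter and normalization steps omitted here.

```python
import numpy as np

def attenuation_correction_factors(blank, transmission, eps=1e-12):
    """Per-line-of-response attenuation correction factors (ACFs).

    In measured attenuation correction, the ACF for each coincidence line
    is the ratio of counts in a blank scan (no subject) to counts in a
    transmission scan (subject in place): ACF = blank / transmission >= 1.
    """
    blank = np.asarray(blank, dtype=float)
    transmission = np.asarray(transmission, dtype=float)
    return blank / np.maximum(transmission, eps)

def correct_emission(emission, acf):
    """Apply ACFs to an emission sinogram (element-wise multiplication)."""
    return np.asarray(emission, dtype=float) * acf

# Toy example: attenuation reduces transmission counts along two LORs.
blank = np.array([1000.0, 1000.0])
trans = np.array([500.0, 250.0])    # stronger attenuation on the 2nd LOR
acf = attenuation_correction_factors(blank, trans)
print(acf)                                   # [2. 4.]
print(correct_emission([10.0, 10.0], acf))   # [20. 40.]
```

The noise-versus-bias trade-off studied in the record comes from the fact that noisy transmission counts propagate directly into these ratios.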

  12. Modeling Unconscious Gender Bias in Fame Judgments: Finding the Proper Branch of the Correct (Multinomial) Tree

    PubMed

    Draine; Greenwald; Banaji

    1996-03-01

    In the preceding article, Buchner and Wippich used a guessing-corrected, multinomial process-dissociation analysis to test whether a gender bias in fame judgments reported by Banaji and Greenwald (Journal of Personality and Social Psychology, 1995, 68, 181-198) was unconscious. In their two experiments, Buchner and Wippich found no evidence for unconscious mediation of this gender bias. Their conclusion can be questioned by noting that (a) the gender difference in familiarity of previously seen names that Buchner and Wippich modeled was different from the gender difference in criterion for fame judgments reported by Banaji and Greenwald, (b) the assumptions of Buchner and Wippich's multinomial model excluded processes that are plausibly involved in the fame judgment task, and (c) the constructs of Buchner and Wippich's model that corresponded most closely to Banaji and Greenwald's gender-bias interpretation were formulated so as to preclude the possibility of modeling that interpretation. Perhaps a more complex multinomial model can model the Banaji and Greenwald interpretation.

  13. Modeling unconscious gender bias in fame judgments: finding the proper branch of the correct (multinomial) tree.

    PubMed

    Draine, S C; Greenwald, A G; Banaji, M R

    1996-01-01

    In the preceding article, Buchner and Wippich used a guessing-corrected, multinomial process-dissociation analysis to test whether a gender bias in fame judgements reported by Banaji and Greenwald (Journal of Personality and Social Psychology, 1995, 68, 181-198) was unconscious. In their two experiments, Buchner and Wippich found no evidence for unconscious mediation of this gender bias. Their conclusion can be questioned by noting that (a) the gender difference in familiarity of previously seen names that Buchner and Wippich modeled was different from the gender difference in criterion for fame judgements reported by Banaji and Greenwald, (b) the assumptions of Buchner and Wippich's multinomial model excluded processes that are plausibly involved in the fame judgement task, and (c) the constructs of Buchner and Wippich's model that corresponded most closely to Banaji and Greenwald's gender-bias interpretation were formulated so as to preclude the possibility of modeling that interpretation. Perhaps a more complex multinomial model can model the Banaji and Greenwald interpretation.

  14. Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  15. Experience gained in testing a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.
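The corrected-interval idea above, multiplying the t-statistic by a correction factor, can be sketched as follows. All numbers are hypothetical, and the correction factor is assumed to have already been computed from the true-error second-moment matrix.

```python
def corrected_interval(prediction, se, t_crit, correction_factor):
    """Individual confidence/prediction interval with a corrected t-statistic.

    Following the idea in the abstract, the usual half-width t * se is
    widened by a correction factor derived from the second-moment matrix
    of the true errors (here just a given scalar, assumed known).
    """
    half = t_crit * correction_factor * se
    return prediction - half, prediction + half

# Hypothetical numbers: head prediction 105.0 m, standard error 0.8 m,
# t-critical 2.06 (~95%, 25 dof), correction factor 1.3.
lo, hi = corrected_interval(105.0, 0.8, 2.06, 1.3)
print(round(lo, 3), round(hi, 3))  # 102.858 107.142
```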

  16. Climate model biases and statistical downscaling for application in hydrologic model

    USDA-ARS?s Scientific Manuscript database

    Climate change impact studies use global climate model (GCM) simulations to define future temperature and precipitation. The best available bias-corrected GCM output was obtained from Coupled Model Intercomparison Project phase 5 (CMIP5). CMIP5 data (temperature and precipitation) are available in d...

  17. Evaluation of the Klobuchar model in Taiwan

    NASA Astrophysics Data System (ADS)

    Li, Jinghua; Wan, Qingtao; Ma, Guanyi; Zhang, Jie; Wang, Xiaolan; Fan, Jiangtao

    2017-09-01

    Ionospheric delay is the main error source in Global Navigation Satellite Systems (GNSS). One way to correct it is with an ionospheric model: single-frequency GNSS users correct the ionospheric delay using correction parameters broadcast by the satellites. The Klobuchar model is widely used in the Global Positioning System (GPS) and COMPASS because it is simple and convenient for real-time calculation. However, the model was built on observations mainly from Europe and the USA and does not describe the equatorial anomaly region. Southern China lies near the northern crest of the equatorial anomaly, where the ionosphere shows complex spatial and temporal variation, so assessing the validity of the Klobuchar model in this area is important for improving the model. Eleven years (2003-2014) of data from a GPS receiver at Taoyuan, Taiwan (121°E, 25°N) are used to assess the validity of the Klobuchar model in Taiwan. Total electron content (TEC) calculated from the dual-frequency GPS observations serves as the reference, against which TEC from the Klobuchar model is compared. The residual, defined as the difference between the Klobuchar TEC and the reference, reflects the absolute correction of the model; the RMS correction percentage expresses the model's validity relative to the observations. The long-term variation of the residuals, the RMS correction percentage, and their changes with latitude are analyzed to assess the model. In some months the RMS correction did not reach the 50% goal proposed by Klobuchar, especially in the winter of low-solar-activity years and at nighttime. The RMS correction depended neither on the 11-year solar cycle nor on latitude. The residuals, in contrast, changed with solar activity, similar to the variation of TEC. The residuals were large in the daytime, during the equinox seasons, and in high-solar-activity years; they were small at night, during the solstice seasons, and in low-activity years. During 1300-1500 BJT in high-activity years the mean bias was negative, implying that the model underestimated TEC on average; the maximum mean bias was 33 TECU in April 2014, and the maximum underestimation reached 97 TECU in October 2011. During 0000-0200 BJT the residuals had a small mean bias, a small variation range and a small standard deviation, suggesting that the model describes the nighttime ionosphere better than the daytime. Besides varying with solar activity, the residuals also vary with latitude. The mean bias reached its maximum at 20-22°N, corresponding to the northern crest of the equatorial anomaly; at this latitude the maximum mean bias was 47 TECU below the observations in high-activity years and 12 TECU below in low-activity years. The minimum variation range appeared at 30-32°N in both high- and low-activity years, but the minimum mean bias occurred at different latitudes: 30-32°N in high-activity years and 24-26°N in low-activity years. For an ideal model the residuals should have a small mean bias and a small variation range. Further study is needed to characterize the distribution of the residuals and to improve the model.
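For orientation, a heavily simplified sketch of the kind of vertical-delay computation the Klobuchar model performs, with a conversion of delay to TECU so residuals against GPS-derived TEC could be formed. The coefficient values are illustrative, and the slant (obliquity) factor and pierce-point geometry of the full GPS broadcast algorithm are omitted.

```python
import math

C = 299792458.0          # speed of light, m/s
F_L1 = 1575.42e6         # GPS L1 frequency, Hz

def klobuchar_vertical_delay(phi_m, t_local, alpha, beta):
    """Simplified Klobuchar vertical ionospheric delay (seconds at L1).

    phi_m   : geomagnetic latitude of the pierce point (semicircles)
    t_local : local time at the pierce point (seconds of day)
    alpha, beta : the four broadcast amplitude/period coefficients
    """
    amp = max(sum(a * phi_m ** n for n, a in enumerate(alpha)), 0.0)
    per = max(sum(b * phi_m ** n for n, b in enumerate(beta)), 72000.0)
    x = 2.0 * math.pi * (t_local - 50400.0) / per  # cosine phase, peak at 14:00
    if abs(x) < 1.57:
        return 5e-9 + amp * (1.0 - x**2 / 2.0 + x**4 / 24.0)
    return 5e-9   # constant night-time value

def delay_to_tecu(delay_s, freq=F_L1):
    """Convert a delay in seconds to TEC units (1 TECU = 1e16 el/m^2)."""
    return delay_s * C * freq**2 / 40.3 / 1e16

# Hypothetical broadcast coefficients (illustrative values only).
alpha = (1.4e-8, 0.0, -5.96e-8, 0.0)
beta = (1.1e5, 0.0, -1.96e5, 0.0)
d = klobuchar_vertical_delay(0.3, 50400.0, alpha, beta)  # local 14:00
print(delay_to_tecu(d))
```

A residual of the kind analyzed in the record would then be this model TEC minus the dual-frequency reference TEC.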

  18. Evaluating NMME Seasonal Forecast Skill for use in NASA SERVIR Hub Regions

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Roberts, Franklin R.

    2013-01-01

    The U.S. National Multi-Model Ensemble (NMME) seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The coupled forecasts have numerous potential applications, both national and international in scope. The NASA/USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in driving applications models in hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. A prerequisite for seasonal forecast use in application modeling (e.g. hydrology, agriculture) is bias correction and skill assessment. Efforts to address systematic biases and multi-model combination in support of NASA SERVIR impact modeling requirements will be highlighted. Specifically, quantile-quantile mapping for bias correction has been implemented for all archived NMME hindcasts. Both deterministic and probabilistic skill estimates for raw, bias-corrected, and multi-model ensemble forecasts as a function of forecast lead will be presented for temperature and precipitation. Complementing this statistical assessment will be case studies of significant events, for example, the ability of the NMME forecast suite to anticipate the 2010/2011 drought in the Horn of Africa and its relationship to evolving SST patterns.
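Quantile-quantile mapping of the sort applied to the NMME hindcasts can be sketched in a few lines. This is a generic empirical implementation, not the project's code, and all data are toy values.

```python
import numpy as np

def quantile_map(forecast, model_clim, obs_clim):
    """Empirical quantile-quantile mapping.

    Each forecast value is replaced by the observed value at the same
    empirical quantile it occupies in the model hindcast climatology.
    """
    model_clim = np.sort(np.asarray(model_clim, dtype=float))
    obs_clim = np.sort(np.asarray(obs_clim, dtype=float))
    # Quantile of each forecast value within the model climatology ...
    q = np.searchsorted(model_clim, forecast, side="right") / len(model_clim)
    q = np.clip(q, 0.0, 1.0)
    # ... mapped onto the observed climatology.
    return np.quantile(obs_clim, q)

# Toy example: the model runs ~2 units too warm.
obs = np.array([10., 11., 12., 13., 14.])
mod = obs + 2.0
print(quantile_map(np.array([12.0, 16.0]), mod, obs))
```

In practice the mapping is usually fitted per lead time and calendar month, and tails beyond the hindcast range need separate treatment.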

  19. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning

    PubMed Central

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-01-01

    We present quad-constellation (GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clarified, and the equivalence of the TGD and DCB correction models is demonstrated both in theory and in practice. The TGD/DCB correction models are extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation SPP performs better after DCB/TGD correction: for GPS-only b1-based SPP, the positioning accuracies improve by 25.0%, 30.6% and 26.7% in the N, E and U components, respectively, after differential code bias correction, while GPS/GLONASS/BDS b1-based SPP improves by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies improve by 2.0%, 2.0% and 0.4% in the N, E and U components, respectively, after Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP improves by about 48.6%, 34.7% and 40.6% in the N, E and U components, respectively, with DCB correction. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation SPP, the accuracy of single-frequency positioning is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the 2nd/3rd frequency combination: for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after DCB correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction. PMID:28300787

  20. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning.

    PubMed

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-03-16

    We present quad-constellation (GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clarified, and the equivalence of the TGD and DCB correction models is demonstrated both in theory and in practice. The TGD/DCB correction models are extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation SPP performs better after DCB/TGD correction: for GPS-only b1-based SPP, the positioning accuracies improve by 25.0%, 30.6% and 26.7% in the N, E and U components, respectively, after differential code bias correction, while GPS/GLONASS/BDS b1-based SPP improves by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies improve by 2.0%, 2.0% and 0.4% in the N, E and U components, respectively, after Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP improves by about 48.6%, 34.7% and 40.6% in the N, E and U components, respectively, with DCB correction. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation SPP, the accuracy of single-frequency positioning is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the 2nd/3rd frequency combination: for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after DCB correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction.

  1. Modulation of Soil Initial State on WRF Model Performance Over China

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Jin, Qinjian; Yi, Bingqi; Mullendore, Gretchen L.; Zheng, Xiaohui; Jin, Hongchun

    2017-11-01

    The soil state (e.g., temperature and moisture) in a mesoscale numerical prediction model is typically initialized by reanalysis or analysis data that may be subject to large bias. Such bias may lead to unrealistic land-atmosphere interactions. This study shows that the Climate Forecast System Reanalysis (CFSR) dramatically underestimates soil temperature and overestimates soil moisture over most of China in the first (0-10 cm) and second (10-25 cm) soil layers compared to in situ observations in July 2013. A correction based on global optimal dual kriging is employed to correct the CFSR bias in soil temperature and moisture using in situ observations. To investigate the impacts of the corrected soil state on model forecasts, two numerical simulations (a control run with the CFSR soil state and a disturbed run with the corrected soil state) were conducted using the Weather Research and Forecasting model. All simulations are initialized four times per day and run for 48 h. Model results show that the corrected soil state, for example a warmer and drier surface over most of China, can enhance evaporation over wet regions, which changes the overlying atmospheric temperature and moisture. The resulting changes in the lifting condensation level, level of free convection, and water transport favor precipitation over wet regions while suppressing it over dry regions. Moreover, diagnoses indicate that remote moisture flux convergence plays a dominant role in the precipitation changes over the wet regions.
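The kriging-based correction above interpolates station-observed biases onto model grid points. A minimal ordinary-kriging sketch follows; it is not the global optimal dual kriging of the study, and the covariance model, sill and range are hypothetical.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng=200.0):
    """Exponential covariance model (hypothetical sill and range, km)."""
    return sill * np.exp(-np.asarray(h, dtype=float) / rng)

def ordinary_kriging(xy_obs, z_obs, xy_new):
    """Minimal ordinary kriging of a bias field at one target point.

    Solves the standard kriging system with a Lagrange multiplier so the
    weights sum to one; a kriged bias surface like this could then be
    subtracted from a reanalysis field, as in the correction above.
    """
    xy_obs = np.asarray(xy_obs, float)
    n = len(xy_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_cov(np.linalg.norm(xy_obs - np.asarray(xy_new, float), axis=1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ np.asarray(z_obs, float))

# Toy station biases (degrees C) at three locations (km coordinates).
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
biases = [1.0, 2.0, 1.5]
print(ordinary_kriging(stations, biases, (50.0, 50.0)))
```

Without a nugget term, ordinary kriging is an exact interpolator: at a station location it reproduces that station's observed bias.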

  2. Lessons learnt on biases and uncertainties in personal exposure measurement surveys of radiofrequency electromagnetic fields with exposimeters.

    PubMed

    Bolte, John F B

    2016-09-01

    Personal exposure measurements of radio frequency electromagnetic fields are important for epidemiological studies and for developing prediction models. Minimizing biases and uncertainties and handling spatial and temporal variability are important aspects of these measurements. This paper reviews the lessons learnt from testing the different types of exposimeters and from personal exposure measurement surveys performed between 2005 and 2015. Applying them will improve the comparability and ranking of exposure levels for different microenvironments, activities or (groups of) people, so that epidemiological studies are better able to find potentially weak correlations with health effects. Over 20 papers have been published on how to prevent biases and minimize uncertainties due to mechanical errors; the design of hardware and software filters; anisotropy; and the influence of the body. A number of biases can be corrected for by determining multiplicative correction factors. In addition, a good protocol for how to wear the exposimeter, a sufficiently small sampling interval and a sufficiently long measurement duration will minimize biases. Corrections are possible for: non-detects (through the detection limit), erroneous manufacturer calibration and temporal drift. Corrections not deemed necessary, because no significant biases have been observed, are: linearity in response and resolution. Corrections difficult to perform after measurements are for: modulation/duty-cycle sensitivity; out-of-band response (cross talk); and temperature and humidity sensitivity. Corrections not possible to perform after measurements are for: detection of multiple signals in one band; flatness of response within a frequency band; and anisotropy to waves of different elevation angles. An analysis of 20 microenvironmental surveys showed that early studies using exposimeters with logarithmic detectors overestimated exposure to signals with bursts, such as uplink signals from mobile phones and WiFi appliances. Further, the possible corrections for biases have not been fully applied. The main finding is that if the biases are not corrected for, the actual exposure will on average be underestimated.
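The multiplicative corrections and non-detect handling discussed above might look as follows in practice. The detection-limit/sqrt(2) substitution and all factor values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def correct_exposimeter(readings, detection_limit, calib_factor, drift_factor):
    """Apply the kinds of corrections discussed above to raw readings.

    - Non-detects (values below the detection limit) are replaced with
      detection_limit / sqrt(2), a common substitution (an assumption here).
    - A multiplicative factor corrects the manufacturer calibration; a
      second factor compensates for temporal drift.
    """
    x = np.asarray(readings, dtype=float)
    x = np.where(x < detection_limit, detection_limit / np.sqrt(2.0), x)
    return x * calib_factor * drift_factor

# Field strengths in V/m; the first two are below a 0.05 V/m detection limit.
raw = [0.01, 0.02, 0.30, 0.80]
print(correct_exposimeter(raw, 0.05, 1.10, 1.02))
```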

  3. Timing group delay and differential code bias corrections for BeiDou positioning

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Zhang, Xiaohong; Wang, Jinling

    2015-05-01

    This article first clarifies the relationship between the timing group delay (TGD) and differential code bias (DCB) parameters for BDS, and demonstrates the equivalence of the TGD and DCB correction models both in theory and in practice. The TGD/DCB correction models are extended to various BDS positioning scenarios, and the models are evaluated with real triple-frequency datasets. To test the effectiveness of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out for BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays and carrier phase ambiguities is investigated comprehensively. Comparative analyses show that unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the PPP estimates are subject to varying degrees of influence. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency positioning, and dual-frequency combinations are much more sensitive to the differential code biases, particularly the B2B3 combination. For PPP, uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier phase ambiguities, resulting in a much longer convergence time. Even though the influence of the differential code biases is mitigated over time and comparable positioning accuracy can be achieved after convergence, the differential code biases should be handled properly, since this is vital for PPP convergence and integer ambiguity resolution.
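The basic single-frequency TGD correction and the dual-frequency ionosphere-free combination discussed above can be sketched as follows. Sign conventions for TGD differ between systems and signals, so this is a schematic illustration with made-up numbers, not an implementation of the BDS interface specification.

```python
C = 299792458.0  # speed of light, m/s

def apply_tgd(pseudorange_m, tgd_s):
    """Single-frequency pseudorange after a timing-group-delay correction.

    For a broadcast clock referenced to another frequency, the
    single-frequency user adjusts the satellite clock by TGD, which is
    equivalent to shifting the pseudorange by c * TGD (this sketch
    subtracts the delay; real sign conventions vary by system/signal).
    """
    return pseudorange_m - C * tgd_s

def ionosphere_free(p1, p2, f1, f2):
    """Dual-frequency ionosphere-free pseudorange combination."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# BDS B1/B3 frequencies (Hz) and a ~3 ns TGD, illustrative numbers.
f_b1, f_b3 = 1561.098e6, 1268.52e6
p_corr = apply_tgd(22_000_000.0, 3e-9)
print(p_corr)  # about 0.9 m less than the raw range
print(ionosphere_free(22_000_000.0, 22_000_001.2, f_b1, f_b3))
```

The abstract's point that DCBs matter more for dual-frequency users follows from the frequency-squared weights in the ionosphere-free combination, which amplify any inter-frequency code bias.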

  4. Process-conditioned bias correction for seasonal forecasting: a case-study with ENSO in Peru

    NASA Astrophysics Data System (ADS)

    Manzanas, R.; Gutiérrez, J. M.

    2018-05-01

    This work assesses the suitability of a first, simple approach to process-conditioned bias correction in the context of seasonal forecasting. To do this, we focus on northwestern Peru and bias-correct 1- and 4-month lead seasonal predictions of boreal winter (DJF) precipitation from the ECMWF System4 forecasting system for the period 1981-2010. In order to include information about the underlying large-scale circulation, which may help to discriminate between precipitation affected by different processes, we introduce an empirical quantile-quantile mapping method that runs conditioned on the state of the Southern Oscillation Index (SOI), which is accurately predicted by System4 and is known to affect the local climate. Beyond the reduction of model biases, our results show that the SOI-conditioned method yields better ROC skill scores and reliability than the raw model output over the entire region of study, whereas the standard unconditioned implementation provides no added value for any of these metrics. This suggests that conditioning the bias correction on simple but well-simulated large-scale processes relevant to the local climate may be a suitable approach for seasonal forecasting. Yet further research is needed on the suitability of similar approaches for other regions, seasons and variables.
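The conditioned mapping can be sketched by simply restricting the calibration climatology to the matching SOI state before the usual empirical quantile mapping. The two-state discretisation and all values here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def conditioned_qmap(fc_value, fc_soi_state, model_clim, obs_clim, soi_states):
    """Quantile mapping conditioned on a large-scale index state.

    The hindcast climatology is split by a discretised SOI state, and the
    forecast is quantile-mapped only against the subset of the
    climatology that shares its state.
    """
    soi_states = np.asarray(soi_states)
    mask = soi_states == fc_soi_state
    m = np.sort(np.asarray(model_clim, float)[mask])
    o = np.sort(np.asarray(obs_clim, float)[mask])
    q = np.clip(np.searchsorted(m, fc_value, side="right") / len(m), 0, 1)
    return float(np.quantile(o, q))

# Toy hindcast: "neg"-SOI years are much wetter in the observations.
model = [50, 60, 70, 80, 90, 100]
obs = [40, 55, 65, 95, 110, 130]
states = ["pos", "pos", "pos", "neg", "neg", "neg"]
print(conditioned_qmap(85.0, "neg", model, obs, states))
```

Splitting the climatology shrinks each calibration sample, so in practice the conditioning states must be few and well populated.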

  5. Atlantic Meridional Overturning Circulation Influence on North Atlantic Sector Surface Air Temperature and its Predictability in the Kiel Climate Model

    NASA Astrophysics Data System (ADS)

    Latif, M.

    2017-12-01

    We investigate the influence of the Atlantic Meridional Overturning Circulation (AMOC) on North Atlantic sector surface air temperature (SAT) in two multi-millennial control integrations of the Kiel Climate Model (KCM). One model version employs a freshwater flux correction over the North Atlantic, while the other does not. A clear influence of the AMOC on North Atlantic sector SAT is simulated only in the corrected model, which exhibits much reduced upper-ocean salinity and temperature biases in comparison to the uncorrected model. Further, the model with much reduced biases exhibits significantly enhanced multiyear SAT predictability in the North Atlantic sector relative to the uncorrected model. The enhanced SAT predictability in the corrected model is due to a stronger and more variable AMOC and its enhanced influence on North Atlantic sea surface temperature (SST). Results obtained from preindustrial control integrations of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) support the findings obtained from the KCM: models with large North Atlantic biases tend to have a weak AMOC influence on SST and exhibit smaller SAT predictability over the North Atlantic sector.

  6. Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions

    NASA Astrophysics Data System (ADS)

    Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.

    2010-12-01

    Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only the observed temporal variability on a point-by-point basis, not the spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving the spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs that preserves both the long-term temporal mean and variance of the precipitation data and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias-corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated that preserve the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall. The spatiotemporal variability of the spatiotemporally bias-corrected GCMs was evaluated against the gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic responses of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions were compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
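Generating a spatial random field that preserves a prescribed correlation structure and a coarse-scale mean, as in the method described above, can be sketched with a Cholesky factorization. This Gaussian toy ignores the non-negativity and intermittency of real precipitation, and the correlation length is hypothetical.

```python
import numpy as np

def correlated_rain_field(xy, coarse_mean, corr_len=50.0, sigma=1.0, rng=None):
    """Spatial random field with a prescribed correlation structure.

    Draws a Gaussian field whose exponential correlation function mimics
    the observed spatial structure, then shifts it so its spatial mean
    equals the coarse-scale GCM value.
    """
    rng = np.random.default_rng(rng)
    xy = np.asarray(xy, float)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    cov = sigma**2 * np.exp(-d / corr_len)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(xy)))  # jitter for stability
    field = L @ rng.standard_normal(len(xy))
    return field - field.mean() + coarse_mean

# Four 12 km grid cells inside one coarse GCM cell with 8 mm mean rainfall.
cells = [(0, 0), (12, 0), (0, 12), (12, 12)]
f = correlated_rain_field(cells, 8.0, rng=0)
print(f, f.mean())
```

A production version would transform the Gaussian field to match the observed (skewed, zero-inflated) precipitation distribution rather than use it directly.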

  7. A meta-analysis of priming effects on impression formation supporting a general model of informational biases.

    PubMed

    DeCoster, Jamie; Claypool, Heather M

    2004-01-01

    Priming researchers have long investigated how providing information about traits in one context can influence the impressions people form of social targets in another. The literature has demonstrated that this can have 3 different effects: Sometimes primes become incorporated in the impression of the target (assimilation), sometimes they are used as standards of comparison (anchoring), and sometimes they cause people to consciously alter their judgments (correction). In this article, we present meta-analyses of these 3 effects. The mean effect size was significant in each case, such that assimilation resulted in impressions biased toward the primes, whereas anchoring and correction resulted in impressions biased away from the primes. Additionally, moderator analyses uncovered a number of variables that influence the strength of these effects, such as applicability, processing capacity, and the type of response measure. Based on these results, we propose a general model of how irrelevant information can bias judgments, detailing when and why assimilation and contrast effects result from default and corrective processes.

  8. Transport through correlated systems with density functional theory

    NASA Astrophysics Data System (ADS)

    Kurth, S.; Stefanucci, G.

    2017-10-01

    We present recent advances in density functional theory (DFT) for applications in the field of quantum transport, with particular emphasis on transport through strongly correlated systems. We review the foundations of the popular Landauer-Büttiker (LB)+DFT approach. This formalism, when using approximations to the exchange-correlation (xc) potential with steps at integer occupation, correctly captures the Kondo plateau in the zero-bias conductance at zero temperature but completely fails to capture the transition to the Coulomb blockade (CB) regime as the temperature increases. To overcome the limitations of LB+DFT, the quantum transport problem is treated from a time-dependent (TD) perspective using TDDFT, an exact framework for dealing with nonequilibrium situations. The steady-state limit of TDDFT shows that in addition to an xc potential in the junction, there also exists an xc correction to the applied bias. Open-shell molecules in the CB regime provide the most striking examples of the importance of the xc bias correction. Using the Anderson model as guidance, we estimate these corrections in the limit of zero bias. For the general case we put forward a steady-state DFT based on a one-to-one correspondence between the pair of basic variables (steady density on, and steady current across, the junction) and the pair (local potential on, and bias across, the junction). Like TDDFT, this framework also leads to both an xc potential in the junction and an xc correction to the bias. Unlike TDDFT, these potentials are independent of history. We highlight the universal features of both the xc potential and xc bias corrections for junctions in the CB regime and provide an accurate parametrization for the Anderson model at arbitrary temperatures and interaction strengths, thus providing a unified DFT description of both the Kondo and CB regimes and the transition between them.

  9. Sensitivity of Hydrologic Response to Climate Model Debiasing Procedures

    NASA Astrophysics Data System (ADS)

    Channell, K.; Gronewold, A.; Rood, R. B.; Xiao, C.; Lofgren, B. M.; Hunter, T.

    2017-12-01

Climate change is already having a profound impact on the global hydrologic cycle. In the Laurentian Great Lakes, changes in long-term evaporation and precipitation can lead to rapid water level fluctuations in the lakes, as evidenced by the unprecedented change in water levels seen in the last two decades. These fluctuations often have an adverse impact on the region's human, environmental, and economic well-being, making accurate long-term water level projections invaluable to regional water resources management planning. Here we use hydrological components from a downscaled climate model (GFDL-CM3/WRF) to obtain future water supplies for the Great Lakes. We then apply a suite of bias correction procedures before propagating these water supplies through a routing model to produce lake water levels. Results using conventional bias correction methods suggest that water levels will decline by several feet in the coming century. However, methods that reflect the seasonal water cycle and explicitly debias individual hydrological components (overlake precipitation, overlake evaporation, runoff) imply that future water levels may be closer to their historical average. This discrepancy between debiased results indicates that water level forecasts are highly influenced by the choice of bias correction method, a source of sensitivity that is commonly overlooked. Debiasing, however, does not remedy misrepresentation of the underlying physical processes in the climate model that produce these biases and contribute uncertainty to the hydrological projections. This uncertainty, coupled with the differences in water level forecasts from varying bias correction methods, is important for water management and long-term planning in the Great Lakes region.
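The component-wise, seasonal debiasing the authors advocate can be sketched as a monthly delta correction applied separately to each hydrological component before routing. This is a minimal illustration, not the study's exact procedure; the function name and the additive form are assumptions:

```python
import numpy as np

def seasonal_debias(model_series, month_index, obs_clim, model_clim):
    """Additive monthly debiasing of one hydrological component
    (e.g. overlake precipitation): shift each sample by the
    observed-minus-modelled climatology of its calendar month.

    month_index : integer month (0-11) of each sample
    obs_clim, model_clim : length-12 monthly climatologies
    """
    delta = np.asarray(obs_clim) - np.asarray(model_clim)
    return np.asarray(model_series) + delta[np.asarray(month_index)]
```

Applied independently to overlake precipitation, overlake evaporation, and runoff, such a correction preserves the seasonal water cycle rather than forcing a single annual offset.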

  10. Impact of Atmospheric Chromatic Effects on Weak Lensing Measurements

    NASA Astrophysics Data System (ADS)

    Meyers, Joshua E.; Burchat, Patricia R.

    2015-07-01

    Current and future imaging surveys will measure cosmic shear with statistical precision that demands a deeper understanding of potential systematic biases in galaxy shape measurements than has been achieved to date. We use analytic and computational techniques to study the impact on shape measurements of two atmospheric chromatic effects for ground-based surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope (LSST): (1) atmospheric differential chromatic refraction and (2) wavelength dependence of seeing. We investigate the effects of using the point-spread function (PSF) measured with stars to determine the shapes of galaxies that have different spectral energy distributions than the stars. We find that both chromatic effects lead to significant biases in galaxy shape measurements for current and future surveys, if not corrected. Using simulated galaxy images, we find a form of chromatic “model bias” that arises when fitting a galaxy image with a model that has been convolved with a stellar, instead of galactic, PSF. We show that both forms of atmospheric chromatic biases can be predicted (and corrected) with minimal model bias by applying an ordered set of perturbative PSF-level corrections based on machine-learning techniques applied to six-band photometry. Catalog-level corrections do not address the model bias. We conclude that achieving the ultimate precision for weak lensing from current and future ground-based imaging surveys requires a detailed understanding of the wavelength dependence of the PSF from the atmosphere, and from other sources such as optics and sensors. The source code for this analysis is available at https://github.com/DarkEnergyScienceCollaboration/chroma.

  11. Assessment of radar altimetry correction slopes for marine gravity recovery: A case study of Jason-1 GM data

    NASA Astrophysics Data System (ADS)

    Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu

    2018-04-01

Marine gravity anomalies derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slopes of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups according to whether they are small, comparable, or large with respect to the 1-microradian error in current sea surface slope models. (1) The small and thus negligible corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide, and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction, and sea state bias. Radiometer measurements are preferred over the model values in the geophysical data records for constraining the wet tropospheric effect, owing to the highly variable water-vapor structure of the atmosphere. The dual-frequency ionospheric correction and sea state bias should not be added directly to range observations when deriving sea surface slopes, since their inherent errors may cause abnormal slopes; along-track smoothing with uniform weights over a certain width is an effective strategy for avoiding the introduction of extra noise. The slopes calculated from radiometer wet tropospheric corrections and from along-track smoothed dual-frequency ionospheric and sea state bias corrections are generally within ±0.5 microradians and no larger than 1 microradian. (3) The ocean tide has the largest influence on the derived sea surface slopes, although most ocean tide slopes lie within ±3 microradians. Larger ocean tide slopes mostly occur over marginal and island-surrounding seas, and additional tidal models with better precision or with an extending process (e.g. Got-e) are strongly recommended for updating the corrections in the geophysical data records.
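The recommended treatment of the noisier corrections, smoothing them along-track with uniform weights before they enter the slope computation, can be sketched as below. This is a simplified illustration; the window width, spacing, and function names are placeholders:

```python
import numpy as np

def smooth_uniform(correction, width):
    """Along-track moving average with uniform weights, as suggested for the
    ionospheric and sea-state-bias corrections before differencing."""
    kernel = np.ones(width) / width
    return np.convolve(correction, kernel, mode="same")

def along_track_slope(height_m, spacing_m):
    """Sea surface slope (radians) from corrected heights by finite
    differences along the ground track."""
    return np.gradient(height_m, spacing_m)
```

Smoothing suppresses the high-frequency noise in the correction itself, so that differencing does not amplify it into spurious microradian-level slope signals.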

  12. Methodological challenges to bridge the gap between regional climate and hydrology models

    NASA Astrophysics Data System (ADS)

Bozhinova, Denica; Gómez-Navarro, Juan José; Raible, Christoph; Felder, Guido

    2017-04-01

The frequency and severity of floods worldwide, together with their impacts, are expected to increase under climate change scenarios. It is therefore very important to gain insight into the physical mechanisms responsible for such events in order to constrain the associated uncertainties. Model simulations of climate and hydrological processes are important tools that can provide insight into the underlying physical processes and thus enable an accurate assessment of the risks. Coupled together, they can provide a physically consistent picture that allows the phenomenon to be assessed in a comprehensive way. However, climate and hydrological models work at different temporal and spatial scales, so a number of methodological challenges need to be carefully addressed. An important issue pertains to the presence of biases in the simulation of precipitation. Climate models in general, and Regional Climate Models (RCMs) in particular, are affected by a number of systematic biases that limit their reliability. In many studies, most prominently assessments of changes due to climate change, such biases are minimised by applying the so-called delta approach, which focuses on changes and disregards the absolute values that are more affected by biases. However, this approach is not suitable here, as the absolute value of precipitation, rather than the change, is fed into the hydrological model. The bias therefore has to be removed beforehand, a complex matter for which various methodologies have been proposed. In this study, we apply and discuss the advantages and caveats of two different methodologies that correct the simulated precipitation to minimise differences with respect to an observational dataset: a linear fit (FIT) of the accumulated distributions and Quantile Mapping (QM). The target region is Switzerland, and the observational dataset is therefore provided by MeteoSwiss. The RCM is the Weather Research and Forecasting model (WRF), driven at the boundaries by the Community Earth System Model (CESM). The raw simulation driven by CESM exhibits prominent biases that stand out in the evolution of the annual cycle and demonstrate that the correction of biases is mandatory in this type of study, rather than a minor correction that might be neglected. The simulation spans the period 1976-2005, and the correction is applied on a daily basis. Both methods lead to a corrected precipitation field that respects the temporal evolution of the simulated precipitation while mimicking the distribution of precipitation found in the observations. Due to the nature of the two methodologies, there are important differences between the products of the two corrections, leading to datasets with different properties. FIT is generally more accurate in reproducing the tails of the distribution, i.e. extreme events, whereas the nature of QM renders it a general-purpose correction whose skill is equally distributed across the full distribution of precipitation, including central values.
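Of the two methods compared, quantile mapping is the more widely used; an empirical version can be sketched as follows. This is a minimal illustration of the general technique, not the implementation used in the study:

```python
import numpy as np

def quantile_map(sim, sim_ref, obs_ref):
    """Empirical quantile mapping: find the quantile of each simulated value
    within the reference-period simulation, then return the observed value
    at that same quantile."""
    sim_ref_sorted = np.sort(sim_ref)
    ranks = np.searchsorted(sim_ref_sorted, sim, side="right") / len(sim_ref)
    return np.quantile(obs_ref, np.clip(ranks, 0.0, 1.0))
```

Because the mapping is built per quantile, it adjusts the whole distribution, which is why its skill is spread evenly across central values and (to a lesser degree) the tails.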

  13. Simulating Streamflow Using Bias-corrected Multiple Satellite Rainfall Products in the Tekeze Basin, Ethiopia

    NASA Astrophysics Data System (ADS)

    Abitew, T. A.; Roy, T.; Serrat-Capdevila, A.; van Griensven, A.; Bauwens, W.; Valdes, J. B.

    2016-12-01

The Tekeze Basin in northern Ethiopia supports one of Africa's largest arch dams, which plays a vital role in hydropower generation. However, little has been done on the hydrology of the basin due to limited in situ hydroclimatological data. Therefore, the main objective of this research is to simulate streamflow upstream of the Tekeze Dam using the Soil and Water Assessment Tool (SWAT) forced by bias-corrected multiple satellite rainfall products (CMORPH, TMPA and PERSIANN-CCS). This talk will present the potential as well as the skill of bias-corrected satellite rainfall products for streamflow prediction in tropical Africa. Additionally, the SWAT model results will be compared with previous conceptual hydrological models (HyMOD and HBV) from the SERVIR streamflow forecasting in African basins project (http://www.swaat.arizona.edu/index.html).

  14. Assessment of bias correction under transient climate change

    NASA Astrophysics Data System (ADS)

    Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2015-04-01

Calibration of climate simulations is necessary since large systematic discrepancies are generally found between the model climate and the observed climate. Recent studies have cast doubt upon the common assumption that the bias is stationary when the climate changes. This led to the development of new methods, mostly based on the linear sensitivity of the biases as a function of time or forcing (Kharin et al. 2012). However, recent studies uncovered more fundamental problems using both low-order systems (Vannitsem 2011) and climate models, showing that the biases may display complicated non-linear variations under climate change. This last analysis focused on biases derived from the equilibrium climate sensitivity, thereby ignoring the effect of the transient climate sensitivity. Based on linear response theory, a general method of bias correction is therefore proposed that can be applied to any climate forcing scenario. The validity of the method is addressed using twin experiments with the climate model of intermediate complexity LOVECLIM (Goosse et al., 2010). We evaluate to what extent the bias change is sensitive to the structure (frequency) of the applied forcing (here greenhouse gases) and whether linear response theory is valid for global and/or local variables. To answer these questions we perform large-ensemble simulations using different 300-year scenarios of forced carbon-dioxide concentrations. Reality and simulations are assumed to differ by a model error emulated as a parametric error in the wind drag or in the radiative scheme. References: [1] H. Goosse et al., 2010: Description of the Earth system model of intermediate complexity LOVECLIM version 1.2, Geosci. Model Dev., 3, 603-633. [2] S. Vannitsem, 2011: Bias correction and post-processing under climate change, Nonlin. Processes Geophys., 18, 911-924. [3] V.V. Kharin, G. J. Boer, W. J. Merryfield, J. F. Scinocca, and W.-S. Lee, 2012: Statistical adjustment of decadal predictions in a changing climate, Geophys. Res. Lett., 39, L19705.

  15. The response of future projections of the North American monsoon when combining dynamical downscaling and bias correction of CCSM4 output

    NASA Astrophysics Data System (ADS)

    Meyer, Jonathan D. D.; Jin, Jiming

    2017-07-01

A 20-km regional climate model (RCM) dynamically downscaled the Community Climate System Model version 4 (CCSM4) to compare 32-year historical and future "end-of-the-century" climatologies of the North American Monsoon (NAM). CCSM4 and other phase 5 Coupled Model Intercomparison Project models have indicated a delayed NAM and an overall drying trend. Here, we test the suggested mechanism for this drier NAM, whereby increasing atmospheric static stability and reduced early-season evapotranspiration under global warming limit early-season convection and compress the mature season of the NAM. Through our higher-resolution RCM, we found that the role of accelerated evaporation under a warmer climate is likely understated in coarse-resolution models such as CCSM4. Improving the representation of mesoscale interactions associated with the Gulf of California (GoC) and the surrounding topography produced additional surface evaporation, which overwhelmed the convection-suppressing effects of a warmer troposphere. Furthermore, the improved land-sea temperature gradient helped drive stronger southerly winds and greater moisture transport. Finally, we addressed limitations from inherent CCSM4 biases through a form of mean bias correction, which resulted in a more accurate seasonality of the atmospheric thermodynamic profile. After bias correction, greater surface evaporation from average peak GoC SSTs of 32 °C, compared to 29 °C in the original CCSM4, led to roughly 50% larger changes in low-level moist static energy than produced by downscaling the original CCSM4. The increasing destabilization of the NAM environment produced onset dates that were one to two weeks earlier in the core of the NAM and its northern extent, respectively. Furthermore, a significantly more vigorous NAM signal was produced after bias correction, with >50 mm month-1 increases in June-September precipitation along the east and west coasts of Mexico and into parts of Texas. A shift towards more extreme daily precipitation was found in both downscaled climatologies, with the bias-corrected climatology containing a much more apparent and extreme shift.

  16. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

Frailty models are useful for measuring unobserved heterogeneity in risk of failure across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243), in which the latent frailties are treated as "parameters" and estimated jointly with the other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates a complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to penalized likelihood estimators.

  17. An optimized data fusion method and its application to improve lateral boundary conditions in winter for Pearl River Delta regional PM2.5 modeling, China

    NASA Astrophysics Data System (ADS)

    Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran

    2018-05-01

Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves on it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. As a case study, the optimized approach was applied to correct the LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of magnitude and temporal variance: correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
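The core of the approach, subtracting a spatially interpolated bias surface (estimated at monitoring sites) from the gridded model field, can be sketched as below. For brevity, inverse-distance weighting stands in for the paper's bias kriging step, and all names are illustrative:

```python
import numpy as np

def bias_field(station_xy, station_bias, grid_xy, power=2.0):
    """Interpolate station-level bias (model minus observation) onto the grid
    by inverse-distance weighting (a simplified stand-in for bias kriging)."""
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    w /= w.sum(axis=1, keepdims=True)   # normalize weights per grid cell
    return w @ station_bias

def fuse(model_grid, station_xy, station_obs, station_model, grid_xy):
    """Fused field = gridded model field minus interpolated bias surface."""
    bias = bias_field(station_xy, station_model - station_obs, grid_xy)
    return model_grid - bias
```

Deriving the daughter domain's LBCs from the fused field rather than the raw model output is what propagates the observational constraint into the regional simulation.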

  18. The contribution of natural variability to GCM bias: Can we effectively bias-correct climate projections?

    NASA Astrophysics Data System (ADS)

    McAfee, S. A.; DeLaFrance, A.

    2017-12-01

Investigating the impacts of climate change often entails using projections from inherently imperfect general circulation models (GCMs) to drive models that simulate biophysical or societal systems in great detail. Error or bias in the GCM output is often assessed in relation to observations, and the projections are adjusted so that the output from impacts models can be compared to historical or observed conditions. Uncertainty in the projections is typically accommodated by running more than one future climate trajectory to account for differing emissions scenarios, model simulations, and natural variability. Current methods for dealing with error and uncertainty treat them as separate problems. In places where observed and/or simulated natural variability is large, however, it may not be possible to identify a consistent degree of bias in mean climate, blurring the lines between model error and projection uncertainty. Here we demonstrate substantial instability in mean monthly temperature bias across a suite of GCMs used in CMIP5. This instability is greatest in the highest latitudes during the cool season, where shifts from average temperatures below freezing to above freezing could have profound impacts. In the models with the greatest degree of bias instability, the timing of regional shifts from below- to above-average normal temperatures in a single climate projection can vary by about three decades, depending solely on the degree of bias assessed. This suggests that current bias correction methods based on comparison to 20- or 30-year normals may be inappropriate, particularly in the polar regions.
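The bias instability described here can be diagnosed directly: compute the apparent mean bias over every 30-year window of a model-minus-observation series and inspect its spread. A minimal sketch with illustrative variable names:

```python
import numpy as np

def windowed_bias(model, obs, window=30):
    """Apparent mean bias for every sliding `window`-year block. A large
    spread across blocks means the estimated bias depends strongly on which
    'normal' period was used to assess it."""
    diff = np.asarray(model) - np.asarray(obs)
    return np.array([diff[i:i + window].mean()
                     for i in range(len(diff) - window + 1)])
```

When multidecadal variability is strong relative to the true bias, the windowed estimates disagree by a large fraction of their magnitude, which is exactly the situation in which a single 20- or 30-year normal gives a misleading correction.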

  19. Evaluation of bias in lower and middle tropospheric GOSAT/TANSO-FTS TIR V1.0 CO2 data through comparisons with aircraft and NICAM-TM CO2 data

    NASA Astrophysics Data System (ADS)

    Saitoh, N.; Hatta, H.; Imasu, R.; Shiomi, K.; Kuze, A.; Niwa, Y.; Machida, T.; Sawa, Y.; Matsueda, H.

    2016-12-01

    Thermal and Near Infrared Sensor for Carbon Observation (TANSO)-Fourier Transform Spectrometer (FTS) on board the Greenhouse Gases Observing Satellite (GOSAT) has been observing carbon dioxide (CO2) concentrations in several atmospheric layers in the thermal infrared (TIR) band since its launch on 23 January 2009. We have compared TANSO-FTS TIR Version 1 (V1) CO2 data from 2010 to 2012 and CO2 data obtained by the Continuous CO2 Measuring Equipment (CME) installed on several JAL aircraft in the framework of the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project to evaluate bias in the TIR CO2 data in the lower and middle troposphere. Here, we have regarded the CME data obtained during the ascent and descent flights over several airports as part of CO2 vertical profiles there. The comparisons showed that the TIR V1 CO2 data had a negative bias against the CME CO2 data; the magnitude of the bias varied depending on season and latitude. We have estimated bias correction values for the TIR V1 lower and middle tropospheric CO2 data in each latitude band from 40°S to 60°N in each season on the basis of the comparisons with the CME CO2 profiles in limited areas over airports, applied the bias correction values to the TIR V1 CO2 data, and evaluated the quality of the bias-corrected TIR CO2 data globally through comparisons with CO2 data taken from the Nonhydrostatic Icosahedral Atmospheric Model (NICAM)-based Transport Model (TM). The bias-corrected TIR CO2 data showed a better agreement with the NICAM-TM CO2 than the original TIR data, which suggests that the bias correction values estimated in the limited areas are basically applicable to global TIR CO2 data. We have compared XCO2 data calculated from both the original and bias-corrected TIR CO2 data with TANSO-FTS SWIR and NICAM-TM XCO2 data; both the TIR XCO2 data agreed with SWIR and NICAM-TM XCO2 data within 1% except over the Sahara desert and strong source and sink regions.
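Applying such seasonally and latitudinally resolved corrections amounts to a lookup-and-subtract over a (latitude band, season) table. The sketch below is illustrative only: the table values, band edges, and season indexing are hypothetical, not the values estimated in the study:

```python
import numpy as np

def apply_bias_correction(co2_ppm, lat_deg, month, corr_table, lat_edges):
    """Subtract a per-(latitude band, season) bias estimate from retrieved CO2.
    corr_table[band, season] holds the bias (retrieval minus reference), so a
    negative bias increases the corrected value.
    month: 1-12; seasons are indexed DJF=0, MAM=1, JJA=2, SON=3."""
    band = np.digitize(lat_deg, lat_edges) - 1
    season = (np.asarray(month) % 12) // 3
    return np.asarray(co2_ppm) - corr_table[band, season]
```

Because the TIR bias found here is negative, the correction raises the retrieved concentrations toward the aircraft reference.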

  20. Eliminating bias in rainfall estimates from microwave links due to antenna wetting

    NASA Astrophysics Data System (ADS)

    Fencl, Martin; Rieckermann, Jörg; Bareš, Vojtěch

    2014-05-01

Commercial microwave links (MWLs) are point-to-point radio systems which are widely used in telecommunication networks. They operate at frequencies where the transmitted power is mainly disturbed by precipitation. Thus, signal attenuation from MWLs can be used to estimate path-averaged rain rates, which is conceptually very promising, since MWLs cover about 20 % of the surface area. Unfortunately, MWL rainfall estimates are often positively biased due to the additional attenuation caused by antenna wetting. To correct MWL observations a posteriori for the wet antenna effect (WAE), both empirically and physically based models have been suggested. However, it is challenging to calibrate these models, because the wet antenna attenuation depends both on the MWL properties (frequency, type of antennas, shielding, etc.) and on climatic factors (temperature, dew point, wind velocity and direction, etc.). Alternatively, it seems straightforward to keep the antennas dry by shielding them. In this investigation we compare the effectiveness of antenna shielding against model-based corrections for reducing the WAE. The experimental setup, located in Dübendorf, Switzerland, consisted of a 1.85-km-long commercial dual-polarization microwave link at 38 GHz and five optical disdrometers. The MWL was operated without shielding from March to October 2011 and with shielding from October 2011 to July 2012. This unique experimental design made it possible to identify the attenuation due to antenna wetting, computed as the difference between the measured and theoretical attenuation. The theoretical path-averaged attenuation was calculated from the path-averaged drop size distribution. During the unshielded periods, the total bias caused by the WAE was 0.74 dB, which shielding reduced to 0.39 dB for the horizontal polarization (vertical: from 0.96 dB to 0.44 dB). Interestingly, the model-based correction (Schleiss et al. 2013) was more effective, reducing the bias of the unshielded periods to 0.07 dB for the horizontal polarization (vertical: 0.06 dB). Applying the same model-based correction to the shielded periods reduces the bias even further, to -0.03 dB and -0.01 dB, respectively. This indicates that additional attenuation could also be caused by other effects, such as reflection of sidelobes from wet surfaces and other environmental factors. Furthermore, the model-based corrections do not correctly capture the nature of the WAE, but more likely provide only an empirical correction. This claim is supported by detailed analysis of particular events, which reveals that the performance of both antenna shielding and the model-based correction differs substantially from event to event. Further investigation based on direct observation of antenna wetting and other environmental variables is needed to identify the nature of the attenuation bias more precisely. Schleiss, M., J. Rieckermann, and A. Berne, 2013: Quantification and modeling of wet-antenna attenuation for commercial microwave links. IEEE Geosci. Remote Sens. Lett., doi:10.1109/LGRS.2012.2236074.
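The quantity at the heart of the experiment, the wet-antenna residual, is the measured link attenuation minus the theoretical rain attenuation over the path. A toy version using a k-R power law is sketched below; the coefficients a and b are placeholders, not calibrated 38 GHz values, and in the experiment itself the theoretical attenuation came from disdrometer-measured drop size distributions rather than a power law:

```python
import numpy as np

def theoretical_attenuation_db(rain_mmh, length_km, a=0.4, b=1.0):
    """Path-integrated rain attenuation from a k = a * R**b power law
    (a, b are illustrative placeholder coefficients)."""
    return a * np.asarray(rain_mmh) ** b * length_km

def wet_antenna_residual_db(measured_db, rain_mmh, length_km):
    """Attenuation left over after removing the theoretical rain
    contribution; attributed (mostly) to antenna wetting."""
    return np.asarray(measured_db) - theoretical_attenuation_db(rain_mmh, length_km)
```

Averaging this residual over the shielded and unshielded periods gives the WAE bias figures (0.74 dB vs. 0.39 dB, etc.) that the study compares.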

  1. Explanation of Two Anomalous Results in Statistical Mediation Analysis.

    PubMed

    Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P

    2012-01-01

Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
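The bias-corrected bootstrap under scrutiny here can be sketched for the mediated effect a*b as follows. This is a minimal illustration with ordinary least-squares path estimates, not the authors' simulation code; function names are assumptions:

```python
import numpy as np
from statistics import NormalDist

def mediated_effect(x, m, y):
    """a*b: slope of M on X times slope of Y on M adjusting for X."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def bc_bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Bias-corrected (BC) percentile bootstrap CI for the mediated effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = mediated_effect(x, m, y)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        boots[i] = mediated_effect(x[idx], m[idx], y[idx])
    nd = NormalDist()
    # z0 measures the median bias of the bootstrap distribution
    z0 = nd.inv_cdf(float(np.clip((boots < est).mean(), 1e-6, 1 - 1e-6)))
    lo_q = nd.cdf(2 * z0 + nd.inv_cdf(alpha / 2))      # adjusted percentiles
    hi_q = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha / 2))
    return np.quantile(boots, lo_q), np.quantile(boots, hi_q)
```

The z0 adjustment is exactly the "bias correction" whose interaction with small samples and medium-to-large nonzero paths the simulations link to the elevated Type I error rates.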

  2. Correction of Gradient Nonlinearity Bias in Quantitative Diffusion Parameters of Renal Tissue with Intra Voxel Incoherent Motion.

    PubMed

    Malyarenko, Dariya I; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K; Ross, Brian D; Chenevert, Thomas L

    2015-12-01

Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from the magnet isocenter. Our previously described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near the isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted in both a shift and a broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to the isocenter. Direction-averaged DW-bias correctors were computed based on the known gradient design provided by the vendor. The computed bias maps were empirically confirmed by coronal DWI measurements of an isotropic gel-flood phantom. The ADC bias in both the phantom and renal tissue for off-center measurements was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in the presence of IVIM perfusion. No significant bias impact was observed for the IVIM perfusion fraction.
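Applying a pre-computed corrector map is a voxel-wise rescaling. Under the simplifying assumption that GNL scales the true diffusion weighting by a spatial factor c(r), either the b-map or the fitted ADC can be corrected; the schematic sketch below is not the authors' pipeline, and the function names are illustrative:

```python
import numpy as np

def correct_b_map(b_nominal, corrector):
    """Actual diffusion weighting delivered by the hardware:
    b_actual = c(r) * b_nominal."""
    return np.asarray(corrector) * b_nominal

def correct_adc(adc_measured, corrector):
    """For mono-exponential decay, fitting with the nominal b when the true
    weighting is c(r)*b inflates the ADC by the same factor, so divide it
    out: ADC_true = ADC_measured / c(r)."""
    return np.asarray(adc_measured) / np.asarray(corrector)
```

The equivalence of the two routes is why the study finds comparable ADC accuracy whether the correction is applied to b-maps or to the DWI intensities before fitting.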

  3. Correction of Gradient Nonlinearity Bias in Quantitative Diffusion Parameters of Renal Tissue with Intra Voxel Incoherent Motion

    PubMed Central

    Malyarenko, Dariya I.; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K.; Ross, Brian D.; Chenevert, Thomas L.

    2015-01-01

Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from the magnet isocenter. Our previously described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near the isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted in both a shift and a broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to the isocenter. Direction-averaged DW-bias correctors were computed based on the known gradient design provided by the vendor. The computed bias maps were empirically confirmed by coronal DWI measurements of an isotropic gel-flood phantom. The ADC bias in both the phantom and renal tissue for off-center measurements was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in the presence of IVIM perfusion. No significant bias impact was observed for the IVIM perfusion fraction. PMID:26811845

  4. Developing an approach to effectively use super ensemble experiments for the projection of hydrological extremes under climate change

    NASA Astrophysics Data System (ADS)

    Watanabe, S.; Kim, H.; Utsumi, N.

    2017-12-01

This study aims to develop a new approach for projecting hydrological conditions under climate change using super-ensemble experiments. The use of multiple ensemble members is essential for the estimation of extremes, a major issue in the impact assessment of climate change; hence, super-ensemble experiments have recently been conducted by several research programs. However, running a hydrological simulation for each member of an ensemble incurs considerable computational cost. To use super-ensemble experiments effectively, we adopt a strategy of using the runoff projected by climate models directly. The general approach to hydrological projection is to run hydrological model simulations, which include land-surface and river routing processes, using atmospheric boundary conditions projected by climate models as inputs. This study, in contrast, runs only a river routing model, using runoff projected by climate models. In general, climate model output is systematically biased, so a preprocessing step that corrects such biases is necessary for impact assessments. Various bias correction methods have been proposed, but, to the best of our knowledge, none has been proposed for variables other than surface meteorology. Here, we propose a new method for utilizing projected future runoff directly. The developed method estimates and corrects the bias against a pseudo-observation, the result of a retrospective offline simulation. We show an application of this approach to the super-ensemble experiments conducted under the program of Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI), for which more than 400 ensemble experiments from multiple climate models are available. Validation using historical simulations by HAPPI indicates that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super-ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.

  5. Regional Climate Simulations over North America: Interaction of Local Processes with Improved Large-Scale Flow.

    NASA Astrophysics Data System (ADS)

    Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan

    2005-04-01

The reasons for biases in regional climate simulations were investigated in an attempt to discern whether they arise from deficiencies in the model parameterizations or are due to dynamical problems. Using the Regional Atmospheric Modeling System (RAMS) forced by the National Centers for Environmental Prediction-National Center for Atmospheric Research reanalysis, the detailed climate over North America at 50-km resolution for June 2000 was simulated. First, the RAMS equations were modified to make them applicable to a large region, and its turbulence parameterization was corrected. The initial simulations showed large biases in the location of precipitation patterns and surface air temperatures. By implementing higher-resolution soil data, soil moisture and soil temperature initialization, and corrections to the Kain-Fritsch convective scheme, the temperature biases and precipitation amount errors could be removed, but the precipitation location errors remained. The precipitation location biases could only be improved by implementing spectral nudging of the large-scale (wavelength of 2500 km) dynamics in RAMS. This corrected for circulation errors produced by interactions and reflection of the internal domain dynamics with the lateral boundaries, where the model was forced by the reanalysis.

  6. Bias correction for rainrate retrievals from satellite passive microwave sensors

    NASA Technical Reports Server (NTRS)

    Short, David A.

    1990-01-01

    Rainrates retrieved from past and present satellite-borne microwave sensors are affected by a fundamental remote sensing problem. Sensor fields-of-view are typically large enough to encompass substantial rainrate variability, whereas the retrieval algorithms, based on radiative transfer calculations, show a non-linear relationship between rainrate and microwave brightness temperature. Retrieved rainrates are systematically too low. A statistical model of the bias problem shows that bias correction factors depend on the probability distribution of instantaneous rainrate and on the average thickness of the rain layer.
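    The underestimation described above follows from averaging a nonlinear, saturating brightness-temperature response over a variable rain field (Jensen's inequality). A minimal sketch using a hypothetical saturating Tb-rainrate relation, not the paper's radiative transfer model:

```python
# Toy beam-filling bias: retrieving rainrate from the FOV-average brightness
# temperature of a concave (saturating) Tb(R) relation underestimates the
# true FOV-average rainrate. The Tb(R) curve below is a made-up example.
import math
import random

random.seed(0)

def tb(rain):
    """Hypothetical saturating brightness temperature (K) vs rainrate."""
    return 280.0 - 120.0 * math.exp(-0.1 * rain)

def retrieve(tb_obs):
    """Exact inverse of tb(): rainrate implied by one brightness temperature."""
    return -10.0 * math.log((280.0 - tb_obs) / 120.0)

# Sub-pixel rainrates within one field of view (exponentially distributed)
rains = [random.expovariate(1 / 10.0) for _ in range(100000)]
true_mean = sum(rains) / len(rains)

# The sensor sees only the FOV-average brightness temperature
tb_mean = sum(tb(r) for r in rains) / len(rains)
retrieved = retrieve(tb_mean)   # systematically below true_mean
```

    Because tb() is concave, the average brightness temperature lies below the brightness temperature of the average rainrate, so the retrieval is biased low, which is exactly the bias the correction factors in this entry address.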

  7. Data Assimilation to Extract Soil Moisture Information From SMAP Observations

    NASA Technical Reports Server (NTRS)

    Kolassa, J.; Reichle, R. H.; Liu, Q.; Alemohammad, S. H.; Gentine, P.

    2017-01-01

    Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, they can be used to reduce the need for localized bias correction techniques typically implemented in data assimilation (DA) systems that tend to remove some of the independent information provided by satellite observations. Here, we use a statistical neural network (NN) algorithm to retrieve SMAP (Soil Moisture Active Passive) surface soil moisture estimates in the climatology of the NASA Catchment land surface model. Assimilating these estimates without additional bias correction is found to significantly reduce the model error and increase the temporal correlation against SMAP CalVal in situ observations over the contiguous United States. A comparison with assimilation experiments using traditional bias correction techniques shows that the NN approach better retains the independent information provided by the SMAP observations and thus leads to larger model skill improvements during the assimilation. A comparison with the SMAP Level 4 product shows that the NN approach is able to provide comparable skill improvements and thus represents a viable assimilation approach.

  8. Use of regional climate model output for hydrologic simulations

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.; Wilby, R.L.; Gutowski, W.J.; Leavesley, G.H.; Pan, Z.; Arritt, R.W.; Takle, E.S.

    2002-01-01

Daily precipitation and maximum and minimum temperature time series from a regional climate model (RegCM2), configured with the continental United States as its domain and run at approximately 52-km spatial resolution, were used as input to a distributed hydrologic model for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; east fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily datasets of precipitation and maximum and minimum temperature were developed from measured data for each basin. These datasets included precipitation and temperature data for all stations (hereafter All-Sta) located within the area of the RegCM2 output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and the All-Sta data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and All-Sta-based simulations of runoff show little skill on a daily basis [Nash-Sutcliffe (NS) values range from 0.05 to 0.37 for RegCM2 and from -0.08 to 0.65 for All-Sta]. When the precipitation and temperature biases are corrected in the RegCM2 output and All-Sta data (Bias-RegCM2 and Bias-All, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins (NS values range from 0.41 to 0.66 for Bias-RegCM2 and from 0.60 to 0.76 for Bias-All). In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09), whereas Bias-All simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that measured data at the coarse resolution of the RegCM2 output can be made appropriate for basin-scale modeling through bias correction (essentially a magnitude correction). 
However, RegCM2 output, even when bias corrected, does not contain the day-to-day variability present in the All-Sta dataset that is necessary for basin-scale modeling. Future work is warranted to identify the causes for systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
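    The Nash-Sutcliffe (NS) skill score quoted throughout this entry is simple to compute: NS = 1 is a perfect simulation, NS = 0 means the simulation is no better than predicting the observed mean, and negative values are worse than the mean. A minimal implementation:

```python
# Nash-Sutcliffe efficiency: 1 - (sum of squared errors) / (variance of obs).
def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(nash_sutcliffe(obs, obs))        # 1.0: perfect simulation
print(nash_sutcliffe(obs, [3.0] * 5))  # 0.0: no better than the mean
```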

  9. Bias correction by use of errors-in-variables regression models in studies with K-X-ray fluorescence bone lead measurements.

    PubMed

    Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard

    2011-01-01

In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimates can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables (EIV) regression allows for correction of bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models with those obtained by standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of the OLS estimates. In conclusion, EIV is a better alternative than OLS for estimating the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
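    For a single error-prone predictor with classical additive error, the OLS slope is attenuated by the reliability coefficient λ = var(true)/var(observed), and dividing the naive slope by λ undoes the bias. A toy simulation of this effect (our illustration with an assumed, known error variance, not the authors' KXRF-based procedure):

```python
# Attenuation from classical measurement error and its reliability-based
# correction. The error SD of 0.5 is assumed known here by construction.
import random

random.seed(1)

n, beta = 50000, 2.0
x_true = [random.gauss(0, 1) for _ in range(n)]
x_obs = [x + random.gauss(0, 0.5) for x in x_true]   # error-prone exposure
y = [beta * x + random.gauss(0, 1) for x in x_true]

def ols_slope(xs, ys):
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

naive = ols_slope(x_obs, y)            # biased toward zero
reliability = 1.0 / (1.0 + 0.5 ** 2)   # var(true) / var(observed) = 0.8
corrected = naive / reliability        # approximately beta
```

    In practice the reliability is not known and must be estimated, which is where the instrument-reported uncertainties in this study come in.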

  10. Temperature effects on pitfall catches of epigeal arthropods: a model and method for bias correction.

    PubMed

    Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael

    2013-02-01

Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity - which is affected by climatic factors - affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and applications: The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data. 
Correcting for temperature-related trapping bias is straightforward and enables population estimates to be more comparable. It may thus improve data interpretation in ecological, conservation and monitoring studies, and assist in better management and conservation of habitats and ecosystem services. Nevertheless, field ecologists should remain vigilant for other sources of bias.
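    The exponential model above implies catch(T) = catch(T_ref) · exp(r · (T − T_ref)), so an observed catch can be standardised to a reference temperature by dividing out the temperature factor. A sketch using the paper's mean estimate r = 0.0863 per °C of maximum temperature; the 15 °C reference is our arbitrary choice:

```python
# Standardise a pitfall catch to a reference temperature under the
# exponential (Q10-type) activity model described in the abstract.
import math

def standardise_catch(catch, temp_c, r=0.0863, ref_temp_c=15.0):
    """Rescale an observed catch to what it would be at ref_temp_c."""
    return catch * math.exp(-r * (temp_c - ref_temp_c))

# A catch taken on a warm day is adjusted downward, a cold day upward
print(standardise_catch(100, 25.0))
print(standardise_catch(100, 5.0))
```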

  11. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    PubMed

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2018-04-15

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
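    The SIMEX idea can be sketched on an ordinary linear slope (the paper extends it to Cox-model hazard ratios): add progressively more simulated measurement error, watch the estimate degrade, and extrapolate back to the no-error case. In this toy, with classical error of known variance, 1/slope happens to be linear in the error-inflation factor, so a linear extrapolation of the reciprocal to λ = −1 suffices; general SIMEX fits a quadratic extrapolant:

```python
# Toy SIMEX: simulation step inflates the error variance by (1 + lam);
# extrapolation step evaluates the fitted trend at lam = -1 (no error).
import random

random.seed(2)

n, beta, sigma_u = 5000, 1.5, 0.6
x = [random.gauss(0, 1) for _ in range(n)]
w = [xi + random.gauss(0, sigma_u) for xi in x]   # error-prone covariate
y = [beta * xi + random.gauss(0, 0.5) for xi in x]

def slope(xs, ys):
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / sum((a - mx) ** 2 for a in xs)

# Simulation step: re-estimate with extra error of variance lam * sigma_u^2
lams = [0.0, 0.5, 1.0, 1.5, 2.0]
est = []
for lam in lams:
    reps = [slope([wi + random.gauss(0, sigma_u * lam ** 0.5) for wi in w], y)
            for _ in range(20)]                   # average over replicates
    est.append(sum(reps) / len(reps))

# Extrapolation step: 1/estimate is linear in lam for classical attenuation,
# so fit a line to the reciprocals and evaluate at lam = -1
inv = [1.0 / e for e in est]
mlam, minv = sum(lams) / len(lams), sum(inv) / len(inv)
b1 = sum((l - mlam) * (v - minv) for l, v in zip(lams, inv)) \
     / sum((l - mlam) ** 2 for l in lams)
b0 = minv - b1 * mlam
simex_est = 1.0 / (b0 - b1)   # extrapolated to lam = -1, near beta
naive_est = est[0]            # attenuated toward zero
```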

  12. Revisiting the Logan plot to account for non-negligible blood volume in brain tissue.

    PubMed

    Schain, Martin; Fazio, Patrik; Mrzljak, Ladislav; Amini, Nahid; Al-Tawil, Nabil; Fitzer-Attas, Cheryl; Bronzova, Juliana; Landwehrmeyer, Bernhard; Sampaio, Christina; Halldin, Christer; Varrone, Andrea

    2017-08-18

Reference tissue-based quantification of brain PET data does not typically include correction for signal originating from blood vessels, which is known to result in biased outcome measures. The extent of the bias depends on the amount of radioactivity in the blood vessels. In this study, we revisit the well-established Logan plot and derive alternative formulations that provide estimates of distribution volume ratios (DVRs) corrected for the signal originating from the vasculature. New expressions for the Logan plot based on the arterial input function and on reference tissue were derived, which included explicit terms for whole-blood radioactivity. The new methods were evaluated using PET data acquired with [11C]raclopride and [18F]MNI-659. The two-tissue compartment model (2TCM), with which signal originating from blood can be explicitly modeled, was used as a gold standard. DVR values obtained for [11C]raclopride using either the blood-based or the reference tissue-based Logan plot were systematically underestimated compared to 2TCM, and for [18F]MNI-659 a proportionality bias was observed, i.e., the bias varied across regions. The biases disappeared when optimal blood-signal correction was used for the respective tracer, although for [18F]MNI-659 a small but systematic overestimation of DVR was still observed. The new method appears to remove the bias introduced by the absence of correction for blood volume in regular graphical analysis and can be considered in clinical studies. Further studies are, however, required to derive a generic mapping between plasma and whole-blood radioactivity levels.

  13. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    NASA Astrophysics Data System (ADS)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data is confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM to include both vegetation height and SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of Google Earth platform and Remote Sensing imagery; and (c) removing the positive biases of the raised segment in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of four different DEMs and favorable results have been obtained on the corrected DEM.
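    Step (c) above amounts to enforcing flow connectivity along each extracted river profile. A minimal sketch (our simplification, not the authors' exact slope-based rule) lowers any raised segment to the running upstream minimum, so that bed elevation never increases in the downstream direction:

```python
# Hydraulically condition a river long-profile: clip raised (positive-bias)
# segments so elevations are non-increasing from upstream to downstream.
def hydraulically_condition(profile):
    """profile: bed elevations ordered upstream -> downstream."""
    out = []
    running_min = float("inf")
    for z in profile:
        running_min = min(running_min, z)   # lowest elevation seen so far
        out.append(running_min)
    return out

print(hydraulically_condition([10.0, 9.0, 9.5, 8.0, 8.4, 7.0]))
# -> [10.0, 9.0, 9.0, 8.0, 8.0, 7.0]: the raised points 9.5 and 8.4 are clipped
```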

  14. Data-Adaptive Bias-Reduced Doubly Robust Estimation.

    PubMed

    Vermeulen, Karel; Vansteelandt, Stijn

    2016-05-01

    Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
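    The canonical doubly robust estimator for a mean with missing outcomes is the augmented inverse-probability-weighted (AIPW) form, which combines an IPW term with an outcome-model augmentation; the article's contribution concerns how the nuisance models are fitted, which this sketch does not reproduce:

```python
# AIPW estimate of E[Y] with missing outcomes: IPW term plus augmentation.
def aipw_mean(y, r, pi, m):
    """y: outcomes (any placeholder where missing), r: 1 if observed,
    pi: working propensity P(observed | X), m: working outcome model E[Y | X]."""
    terms = [ri * yi / p - (ri - p) / p * mi
             for yi, ri, p, mi in zip(y, r, pi, m)]
    return sum(terms) / len(terms)

# If the outcome model is exactly right, the augmentation cancels the
# weighting and the full-data mean (2.5) is recovered whatever the
# propensities are -- one half of the double robustness.
y_full = [1.0, 2.0, 3.0, 4.0]
r = [1, 0, 1, 1]
y_obs = [yi if ri else 0.0 for yi, ri in zip(y_full, r)]
est = aipw_mean(y_obs, r, [0.5, 0.9, 0.7, 0.4], y_full)
```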

  15. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

    PubMed

    Carnegie, Nicole Bohme

    2011-04-15

    The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
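    The model-based bootstrap bias correction follows a generic pattern: simulate many datasets from the fitted model, re-estimate on each, take the average shift as the bias, and subtract it. A sketch on a deliberately biased variance estimator (our toy, not the HIV incidence model):

```python
# Parametric (model-based) bootstrap bias correction on the divide-by-n
# variance estimator, whose downward bias the bootstrap can estimate.
import random

random.seed(3)

def var_mle(xs):
    """Biased variance estimate: divides by n instead of n - 1."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

data = [random.gauss(0, 1) for _ in range(20)]
est = var_mle(data)

# Simulate from the fitted model, re-estimate, and subtract estimated bias
boot = []
for _ in range(2000):
    sim = [random.gauss(0, est ** 0.5) for _ in range(len(data))]
    boot.append(var_mle(sim))
bias = sum(boot) / len(boot) - est
corrected = est - bias   # shifts est upward by roughly est / n
```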

  16. Extracting muon momentum scale corrections for hadron collider experiments

    NASA Astrophysics Data System (ADS)

    Bodek, A.; van Dyne, A.; Han, J. Y.; Sakumoto, W.; Strelnikov, A.

    2012-10-01

We present a simple method for the extraction of corrections for bias in the measurement of the momentum of muons in hadron collider experiments. Such bias can originate from a variety of sources, such as detector misalignment, software reconstruction bias, and uncertainties in the magnetic field. The two-step method uses the mean ⟨1/p_T^μ⟩ for muons from Z→μμ decays to determine the momentum scale corrections in bins of charge, η, and φ. In the second step, the corrections are tuned using the average invariant mass ⟨M_μμ⟩ of Z→μμ events in the same bins of charge, η, and φ. The forward-backward asymmetry of Z/γ*→μμ pairs as a function of μ+μ- mass, and the φ distribution of Z bosons in the Collins-Soper frame, are used to ascertain that the corrections remove the bias in the momentum measurements for positively versus negatively charged muons. By taking the sum and difference of the momentum scale corrections for positive and negative muons, we isolate additive corrections to 1/p_T^μ that may originate from misalignments, and multiplicative corrections that may originate from mis-modeling of the magnetic field (∫B·dL). This method has recently been used in the CDF experiment at Fermilab and in the CMS experiment at the Large Hadron Collider at CERN.

  17. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    PubMed

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and it is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Process-based evaluation of the ÖKS15 Austrian climate scenarios: First results

    NASA Astrophysics Data System (ADS)

    Mendlik, Thomas; Truhetz, Heimo; Jury, Martin; Maraun, Douglas

    2017-04-01

The climate scenarios for Austria from the ÖKS15 project consist of 13 downscaled and bias-corrected RCMs from the EURO-CORDEX project. This dataset is intended for the broad public and is now available at the central national archive for climate data (CCCA Data Center). Because of this wide public outreach, it is essential to objectively assess the limitations of this dataset and to publish them in a form that can also be understood by a non-scientific audience. Even though systematic climatological biases have been accounted for by the Scaled-Distribution-Mapping (SDM) bias-correction method, it is not guaranteed that the model biases have been removed for the right reasons. If climate scenarios do not get the patterns of synoptic variability right, biases will still prevail in certain weather patterns. Ultimately this will have consequences for the projected climate change signals. In this study we derive typical weather types in the Alpine region based on mean sea level pressure patterns from ERA-Interim data and check the occurrence of these synoptic phenomena in EURO-CORDEX data and their corresponding driving GCMs. Based on these weather patterns we analyze the remaining biases of the downscaled and bias-corrected scenarios. We argue that such a process-based evaluation is not only necessary from a scientific point of view, but can also help the broader public to understand the limitations of downscaled climate scenarios, as model errors can be interpreted in terms of everyday observable weather.

  19. Contributions of different bias-correction methods and reference meteorological forcing data sets to uncertainty in projected temperature and precipitation extremes

    NASA Astrophysics Data System (ADS)

    Iizumi, Toshichika; Takikawa, Hiroki; Hirabayashi, Yukiko; Hanasaki, Naota; Nishimori, Motoki

    2017-08-01

The use of different bias-correction methods and global retrospective meteorological forcing data sets as the reference climatology in the bias correction of general circulation model (GCM) daily data is a known source of uncertainty in projected climate extremes and their impacts. Despite their importance, limited attention has been given to these uncertainty sources. We compare 27 projected temperature and precipitation indices over 22 regions of the world (including the global land area) in the near (2021-2060) and distant future (2061-2100), calculated using four Representative Concentration Pathways (RCPs), five GCMs, two bias-correction methods, and three reference forcing data sets. To widen the variety of forcing data sets, we developed a new forcing data set, S14FD, and incorporated it into this study. The results show that S14FD is more accurate than the other forcing data sets in representing the observed temperature and precipitation extremes in recent decades (1961-2000 and 1979-2008). The use of different bias-correction methods and forcing data sets contributes more to the total uncertainty in the projected precipitation index values, in both the near and distant future, than the use of different GCMs and RCPs. However, the GCM appears to be the dominant uncertainty source for projected temperature index values in the near future, and the RCP is the dominant source in the distant future. Our findings encourage climate risk assessments, especially those related to precipitation extremes, to employ multiple bias-correction methods and forcing data sets in addition to using different GCMs and RCPs.
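    Of the bias-correction methods compared in studies like this one, empirical quantile mapping is the common baseline: each model value is replaced by the observed value at the same empirical quantile of the historical distributions. A minimal sketch (operational implementations interpolate between quantiles and treat tails more carefully):

```python
# Empirical quantile mapping: map a model value through the historical
# model -> observed quantile relation.
def quantile_map(model_hist, obs_hist, value):
    sm = sorted(model_hist)
    so = sorted(obs_hist)
    rank = sum(1 for v in sm if v <= value)     # empirical model quantile
    idx = max(0, min(len(so) - 1, rank - 1))
    return so[idx]

model_hist = [float(v) for v in range(1, 11)]   # model biased low by 2
obs_hist = [float(v) for v in range(3, 13)]
print(quantile_map(model_hist, obs_hist, 5.0))  # -> 7.0: bias of 2 removed
```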

  20. Calibration transfer of a Raman spectroscopic quantification method for the assessment of liquid detergent compositions from at-line laboratory to in-line industrial scale.

    PubMed

    Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T

    2018-03-01

Calibration transfer or standardisation aims at creating a uniform spectral response on different spectroscopic instruments or under varying conditions, without requiring a full recalibration for each situation. In the current study, this strategy is applied to construct at-line multivariate calibration models and subsequently employ them in-line in a continuous industrial production line, using the same spectrometer. First, quantitative multivariate models are constructed at-line at laboratory scale for predicting the concentration of two main ingredients in hard surface cleaners. By regressing the Raman spectra of a set of small-scale calibration samples against their reference concentration values, partial least squares (PLS) models are developed to quantify the surfactant levels in the liquid detergent compositions under investigation. After evaluating the models' performance with a set of independent validation samples, a univariate slope/bias correction is applied with a view to transporting these at-line calibration models to an in-line manufacturing set-up. This standardisation technique allows a fast and easy transfer of the PLS regression models by simply correcting the model predictions on the in-line set-up, without adjusting anything in the original multivariate calibration models. An extensive statistical analysis is performed to assess the predictive quality of the transferred regression models. Before and after transfer, the R2 and RMSEP of both models are compared to evaluate whether their magnitudes are similar. T-tests are then performed to investigate whether the slope and intercept of the transferred regression line are statistically indistinguishable from 1 and 0, respectively. Furthermore, it is checked that no significant bias remains. F-tests are executed as well, to assess the linearity of the transfer regression line and to investigate the statistical coincidence of the transfer and validation regression lines. 
Finally, a paired t-test is performed to compare the original at-line model to the slope/bias corrected in-line model, using interval hypotheses. It is shown that the calibration models of Surfactant 1 and Surfactant 2 yield satisfactory in-line predictions after slope/bias correction. While Surfactant 1 passes seven out of eight statistical tests, the recommended validation parameters are 100% successful for Surfactant 2. It is hence concluded that the proposed strategy for transferring at-line calibration models to an in-line industrial environment via a univariate slope/bias correction of the predicted values offers a successful standardisation approach. Copyright © 2017 Elsevier B.V. All rights reserved.
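    The univariate slope/bias correction itself is just a linear regression of reference values on the model's predictions for a set of transfer samples; subsequent predictions are then passed through the fitted line. A sketch with made-up transfer data:

```python
# Fit a slope/bias correction from transfer samples (reference regressed on
# predicted), then apply it to correct further predictions.
def fit_slope_bias(predicted, reference):
    n = len(predicted)
    mp = sum(predicted) / n
    mr = sum(reference) / n
    slope = sum((p - mp) * (r - mr) for p, r in zip(predicted, reference)) \
        / sum((p - mp) ** 2 for p in predicted)
    bias = mr - slope * mp
    return slope, bias

# Hypothetical transfer set: the in-line set-up reads 10% high plus 0.5 offset
pred = [11.5, 22.5, 33.5, 44.5]
ref = [10.0, 20.0, 30.0, 40.0]
slope, bias = fit_slope_bias(pred, ref)
corrected = [slope * p + bias for p in pred]   # matches ref
```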

  1. Consistent Transition of Salinity Retrievals From Aquarius to SMAP

    NASA Astrophysics Data System (ADS)

    Mears, C. A.; Meissner, T.; Wentz, F. J.; Manaster, A.

    2017-12-01

The Aquarius Version 5.0 release in late 2017 has achieved an excellent level of accuracy and significantly mitigated most of the regional and seasonal biases that had been observed in prior releases. The SMAP NASA/RSS Version 2.0 release does not quite yet reach that level of accuracy. Our presentation discusses the steps that need to be undertaken in the upcoming V3.0 of the SMAP salinity retrieval algorithm to achieve a seamless transition between the salinity products from the two instruments. We also discuss where fundamental differences between the sensors make it difficult to reach complete consistency. In the Aquarius V4.0 and earlier releases, comparisons with ARGO floats revealed small fresh biases at low latitudes and larger, seasonally varying salty biases at high latitudes. These biases have been traced back to inaccuracies in the models used for correcting the absorption by atmospheric oxygen and the wind-induced roughness. The geophysical models were changed in Aquarius V5.0, which resulted in a significant reduction of these biases. The upcoming SMAP V3.0 release will implement the same geophysical model. In deriving the changes to the geophysical model, monthly ARGO analyzed fields from Scripps are now used consistently as the reference salinity for both the Aquarius V5.0 and the upcoming SMAP V3.0 releases; earlier versions had used HYCOM as the reference salinity field. The development of the Aquarius V5.0 algorithm has already benefited strongly from the full 360° look capability of SMAP, which aided in deriving the correction for the reflected galaxy, a strong spurious signal for both sensors. Consistent corrections for the galactic signal are now used for both Aquarius and SMAP. It is also important to filter out rain when developing the GMF and when validating the satellite salinities against in-situ measurements, in order to avoid mismatches due to salinity stratification in the upper ocean layer. 
One major difference between Aquarius and SMAP is the emissive SMAP mesh antenna; to correct for it, an accurate thermal model for the physical temperature of the SMAP antenna needs to be developed.

  2. Correcting surface solar radiation of two data assimilation systems against FLUXNET observations in North America

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Lee, Xuhui; Liu, Shoudong

    2013-09-01

    Solar radiation at the Earth's surface is an important driver of meteorological and ecological processes. The objective of this study is to evaluate the accuracy of the reanalysis solar radiation produced by NARR (North American Regional Reanalysis) and MERRA (Modern-Era Retrospective Analysis for Research and Applications) against the FLUXNET measurements in North America. We found that both assimilation systems systematically overestimated the surface solar radiation flux on the monthly and annual scale, with an average bias error of +37.2 W m-2 for NARR and of +20.2 W m-2 for MERRA. The bias errors were larger under cloudy skies than under clear skies. A post-reanalysis algorithm consisting of empirical relationships between model bias, a clearness index, and site elevation was proposed to correct the model errors. Results show that the algorithm can remove the systematic bias errors for both FLUXNET calibration sites (sites used to establish the algorithm) and independent validation sites. After correction, the average annual mean bias errors were reduced to +1.3 W m-2 for NARR and +2.7 W m-2 for MERRA. Applying the correction algorithm to the global domain of MERRA brought the global mean surface incoming shortwave radiation down by 17.3 W m-2 to 175.5 W m-2. Under the constraint of the energy balance, other radiation and energy balance terms at the Earth's surface, estimated from independent global data products, also support the need for a downward adjustment of the MERRA surface solar radiation.
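The correction idea above, bias modeled as an empirical function of a clearness index and site elevation, can be sketched as a least-squares fit. The functional form and all coefficients here are illustrative assumptions, not the paper's actual relationships:

```python
import numpy as np

def fit_bias_model(bias, clearness, elev_km):
    """Fit bias ~ a + b*clearness + c*elevation by ordinary least squares.
    The study's actual empirical relationships may differ; this is a sketch."""
    X = np.column_stack([np.ones_like(clearness), clearness, elev_km])
    coef, *_ = np.linalg.lstsq(X, bias, rcond=None)
    return coef

def correct_flux(flux, clearness, elev_km, coef):
    """Subtract the predicted bias from the reanalysis flux (W m-2)."""
    a, b, c = coef
    return flux - (a + b * clearness + c * elev_km)
```

The calibration sites would supply (bias, clearness, elevation) triples for the fit, after which the model is applied at independent sites.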

  3. A parametric approach for simultaneous bias correction and high-resolution downscaling of climate model rainfall

    NASA Astrophysics Data System (ADS)

    Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto; Marrocu, Marino

    2017-03-01

    Distribution mapping has been identified as the most efficient approach to bias-correct climate model rainfall, while reproducing its statistics at spatial and temporal resolutions suitable to run hydrologic models. Yet its implementation based on empirical distributions derived from control samples (referred to as nonparametric distribution mapping) makes the method's performance sensitive to sample length variations, the presence of outliers, and the spatial resolution of climate model results, and may lead to biases, especially in extreme rainfall estimation. To address these shortcomings, we propose a methodology for simultaneous bias correction and high-resolution downscaling of climate model rainfall products that uses: (a) a two-component theoretical distribution model (i.e., a generalized Pareto (GP) model for rainfall intensities above a specified threshold u*, and an exponential model for lower rain rates), and (b) proper interpolation of the corresponding distribution parameters on a user-defined high-resolution grid, using kriging for uncertain data. We assess the performance of the suggested parametric approach relative to the nonparametric one, using daily raingauge measurements from a dense network on the island of Sardinia (Italy), and rainfall data from four GCM/RCM model chains of the ENSEMBLES project. The obtained results shed light on the competitive advantages of the parametric approach, which proves more accurate and considerably less sensitive to the characteristics of the calibration period, independently of the GCM/RCM combination used. This is especially the case for extreme rainfall estimation, where the GP assumption allows for more accurate and robust estimates, also beyond the range of the available data.
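The two-component model can be sketched as follows: an exponential body below the threshold u*, a generalized Pareto tail above it, and quantile mapping through the fitted CDFs. This is a simplified sketch; the paper's fitting procedure and the kriging interpolation of parameters onto the high-resolution grid are omitted, and the exponential rate is estimated by a plain (untruncated) MLE:

```python
import numpy as np
from scipy import stats

class TwoComponentCDF:
    """Exponential below threshold u, generalized Pareto above (a sketch)."""
    def __init__(self, wet_rain, u):
        self.u = u
        self.p_u = np.mean(wet_rain > u)            # exceedance probability
        low = wet_rain[wet_rain <= u]
        self.lam = 1.0 / low.mean()                 # exponential rate (approx. MLE)
        exc = wet_rain[wet_rain > u] - u
        self.xi, _, self.sigma = stats.genpareto.fit(exc, floc=0)

    def cdf(self, x):
        x = np.asarray(x, float)
        below_u = 1.0 - np.exp(-self.lam * self.u)
        body = (1.0 - self.p_u) * (1.0 - np.exp(-self.lam * np.minimum(x, self.u))) / below_u
        tail = stats.genpareto.cdf(np.maximum(x - self.u, 0.0), self.xi, scale=self.sigma)
        return np.where(x <= self.u, body, (1.0 - self.p_u) + self.p_u * tail)

    def ppf(self, q):
        q = np.asarray(q, float)
        below_u = 1.0 - np.exp(-self.lam * self.u)
        q_low = np.clip(q / (1.0 - self.p_u), 0.0, 1.0) * below_u
        x_low = -np.log(1.0 - np.minimum(q_low, 1.0 - 1e-12)) / self.lam
        q_high = np.clip((q - (1.0 - self.p_u)) / self.p_u, 0.0, 1.0)
        x_high = self.u + stats.genpareto.ppf(q_high, self.xi, scale=self.sigma)
        return np.where(q <= 1.0 - self.p_u, x_low, x_high)

def parametric_qmap(model_rain, F_model, F_obs):
    """Map model rainfall through the model CDF into the observed distribution."""
    return F_obs.ppf(F_model.cdf(model_rain))
```

In the parametric approach, `F_model` is fitted to climate model rainfall and `F_obs` to raingauge data, so extremes are mapped through the GP tail rather than capped at the largest calibration value.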

  4. Isotopic fractionation studies of uranium and plutonium using porous ion emitters as thermal ionization mass spectrometry sources

    DOE PAGES

    Baruzzini, Matthew L.; Hall, Howard L.; Spencer, Khalil J.; ...

    2018-04-22

    Investigations of the isotope fractionation behaviors of plutonium and uranium reference standards were conducted employing platinum and rhenium (Pt/Re) porous ion emitter (PIE) sources, a relatively new thermal ionization mass spectrometry (TIMS) ion source strategy. The suitability of commonly employed, empirically developed mass bias correction laws (i.e., the Linear, Power, and Russell's laws) for correcting such isotope ratio data was also determined. Corrected plutonium isotope ratio data, regardless of mass bias correction strategy, were statistically identical to the certificate values; however, the isotope fractionation behavior of plutonium under the adopted experimental conditions was determined to be best described by the Power law. Finally, the fractionation behavior of uranium, under the analytical conditions described herein, is also most suitably modeled using the Power law, though Russell's law and the Linear law for mass bias correction rendered results that were identical, within uncertainty, to the certificate value.
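The three mass bias correction laws named above have standard closed forms. Sign and per-amu conventions for the fractionation factor vary between laboratories; the forms below are one common choice, and the calibration helper is our illustration, not the paper's procedure:

```python
def linear_law(r_meas, delta_m, eps):
    """Linear law: R_corr = R_meas * (1 + eps * delta_m)."""
    return r_meas * (1.0 + eps * delta_m)

def power_law(r_meas, delta_m, eps):
    """Power law: R_corr = R_meas * (1 + eps) ** delta_m."""
    return r_meas * (1.0 + eps) ** delta_m

def russell_law(r_meas, m_num, m_den, beta):
    """Russell's (exponential) law: R_corr = R_meas * (m_num / m_den) ** beta."""
    return r_meas * (m_num / m_den) ** beta

def eps_from_reference(r_meas_ref, r_cert, delta_m):
    """Per-amu factor for the power law, calibrated so that the measured
    reference ratio corrects exactly to the certificate value."""
    return (r_cert / r_meas_ref) ** (1.0 / delta_m) - 1.0
```

For small fractionation (eps << 1) and small mass differences, the three laws give nearly identical corrections, which is why all three can reproduce the certificate within uncertainty.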

  5. Isotopic fractionation studies of uranium and plutonium using porous ion emitters as thermal ionization mass spectrometry sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baruzzini, Matthew L.; Hall, Howard L.; Spencer, Khalil J.

    Investigations of the isotope fractionation behaviors of plutonium and uranium reference standards were conducted employing platinum and rhenium (Pt/Re) porous ion emitter (PIE) sources, a relatively new thermal ionization mass spectrometry (TIMS) ion source strategy. The suitability of commonly employed, empirically developed mass bias correction laws (i.e., the Linear, Power, and Russell's laws) for correcting such isotope ratio data was also determined. Corrected plutonium isotope ratio data, regardless of mass bias correction strategy, were statistically identical to the certificate values; however, the isotope fractionation behavior of plutonium under the adopted experimental conditions was determined to be best described by the Power law. Finally, the fractionation behavior of uranium, under the analytical conditions described herein, is also most suitably modeled using the Power law, though Russell's law and the Linear law for mass bias correction rendered results that were identical, within uncertainty, to the certificate value.

  6. A Bayesian approach to truncated data sets: An application to Malmquist bias in Supernova Cosmology

    NASA Astrophysics Data System (ADS)

    March, Marisa Cristina

    2018-01-01

    A problem commonly encountered in statistical analysis of data is that of truncated data sets. A truncated data set is one in which a number of data points are completely missing from a sample; this is in contrast to a censored sample, in which partial information is missing from some data points. In astrophysics this problem is commonly seen in a magnitude-limited survey, where the survey is incomplete at fainter magnitudes; that is, certain faint objects are simply not observed. The effect of this `missing data' is manifested as Malmquist bias and can result in biased parameter inference if it is not accounted for. In Frequentist methodologies the Malmquist bias is often corrected for by analysing many simulations and computing the appropriate correction factors. One problem with this methodology is that the corrections are model dependent. In this poster we derive a Bayesian methodology for accounting for truncated data sets in problems of parameter inference and model selection. We first show the methodology for a simple Gaussian linear model and then go on to show how to account for a truncated data set in the case of cosmological parameter inference with a magnitude-limited supernova Ia survey.
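For the Gaussian case, the likelihood treatment of truncation amounts to renormalizing each data point's density by its detection probability. A minimal maximum-likelihood sketch (the poster's full Bayesian model-selection machinery is not reproduced; function names are ours):

```python
import numpy as np
from scipy import stats, optimize

def truncated_loglike(params, x, x_max):
    """Log-likelihood of a Gaussian model observed only below x_max: each
    point's density is renormalized by the detection probability P(X < x_max)."""
    mu, sigma = params
    if sigma <= 0:
        return -np.inf
    return np.sum(stats.norm.logpdf(x, mu, sigma)
                  - stats.norm.logcdf(x_max, mu, sigma))

def fit_truncated(x, x_max):
    """Maximum-likelihood (mu, sigma) accounting for the truncation."""
    res = optimize.minimize(lambda p: -truncated_loglike(p, x, x_max),
                            x0=[x.mean(), x.std()], method="Nelder-Mead")
    return res.x
```

Ignoring the normalization term reproduces the Malmquist effect: the sample mean of the truncated data is pulled away from the true population mean, while the truncation-aware fit recovers it.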

  7. Evaluating the Utility of Satellite Soil Moisture Retrievals over Irrigated Areas and the Ability of Land Data Assimilation Methods to Correct for Unmodeled Processes

    NASA Technical Reports Server (NTRS)

    Kumar, S. V.; Peters-Lidard, C. D.; Santanello, J. A.; Reichle, R. H.; Draper, C. S.; Koster, R. D.; Nearing, G.; Jasinski, M. F.

    2015-01-01

    Earth's land surface is characterized by tremendous natural heterogeneity and human-engineered modifications, both of which are challenging to represent in land surface models. Satellite remote sensing is often the most practical and effective method to observe the land surface over large geographical areas. Agricultural irrigation is an important human-induced modification to natural land surface processes, as it is pervasive across the world and because of its significant influence on the regional and global water budgets. In this article, irrigation is used as an example of a human-engineered, often unmodeled land surface process, and the utility of satellite soil moisture retrievals over irrigated areas in the continental US is examined. Such retrievals are based on passive or active microwave observations from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), the Advanced Microwave Scanning Radiometer 2 (AMSR2), the Soil Moisture Ocean Salinity (SMOS) mission, WindSat and the Advanced Scatterometer (ASCAT). The analysis suggests that the skill of these retrievals for representing irrigation effects is mixed, with ASCAT-based products somewhat more skillful than SMOS and AMSR2 products. The article then examines the suitability of typical bias correction strategies in current land data assimilation systems when unmodeled processes dominate the bias between the model and the observations. Using a suite of synthetic experiments that includes bias correction strategies such as quantile mapping and trained forward modeling, it is demonstrated that the bias correction practices lead to the exclusion of the signals from unmodeled processes, if these processes are the major source of the biases. It is further shown that new methods are needed to preserve the observational information about unmodeled processes during data assimilation.
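The article's central caveat can be made concrete with the standard CDF-matching rescaling applied to retrievals before assimilation: mapping observations through climatological CDFs removes any systematic offset between observations and model, including a real irrigation signal the model lacks. A schematic empirical version (the numbers in the usage test are illustrative soil-moisture values):

```python
import numpy as np

def cdf_match(obs, obs_clim, model_clim):
    """Empirical quantile mapping: place each observation at its rank in the
    observation climatology, then read off the model climatology value at the
    same rank. Any mean offset between the climatologies is removed."""
    ranks = np.searchsorted(np.sort(obs_clim), obs) / float(len(obs_clim))
    return np.quantile(model_clim, np.clip(ranks, 0.0, 1.0))
```

If the retrieval climatology is wetter than the model climatology because of unmodeled irrigation, the mapped values lose exactly that wet signal, which is the information-exclusion effect the synthetic experiments demonstrate.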

  8. A parametric and non-parametric metamodeling approach for the bias-correction of Satellite Rainfall Estimates using rain gauge measurements. Cases of study: Magdalena Basin (Colombia), Imperial Basin (Chile) and Paraiba do Sul (Brazil).

    NASA Astrophysics Data System (ADS)

    Rebolledo Coy, M. A.; Villanueva, O. M. B.; Bartz-Beielstein, T.; Ribbe, L.

    2017-12-01

    Rainfall measurement plays an important role in the understanding and modeling of the water cycle. However, the assessment of data-scarce regions using common rain gauge information cannot be done in a straightforward way. Some of the main problems concerning rainfall assessment are the lack of a sufficiently dense grid of ground stations over extensive areas and the unstable spatial accuracy of Satellite Rainfall Estimates (SREs). Following previous works on SRE analysis and bias correction, we generate an ensemble model that corrects the bias error on a seasonal and yearly basis using six different state-of-the-art SREs (TRMM 3B42RT, TRMM 3B42v7, PERSIANN-CDR, CHIRPSv2, CMORPH and MSWEPv1.2) in a point-to-pixel approach for the studied period (2003-2015). Three basins are evaluated: the Magdalena in Colombia, the Imperial in Chile and the Paraiba do Sul in Brazil. Using Gaussian process regression and Bayesian robust regression, we model the behavior of the ground stations and evaluate the goodness-of-fit using the modified Kling-Gupta efficiency (KGE'). Following this evaluation, the models are re-fitted taking into account the error distribution at each point, and the corresponding KGE' is evaluated again. Both models were specified in the probabilistic language Stan. To improve the efficiency of the Gaussian model, a clustering of the data was implemented. We also compared the performance of both models in terms of uncertainty and stability against the raw input, concluding that both models represent the study areas better. The results show that the error displays an exponential behavior on days when precipitation was present; this allows the models to be corrected according to the observed rainfall values. The seasonal evaluations also show improved performance relative to the yearly evaluations.
The use of bias-corrected SREs for hydrologic purposes in data-scarce regions is highly recommended in order to merge the point values from the ground measurements with the spatial distribution of rainfall from the satellite estimates.
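The modified Kling-Gupta efficiency used here for goodness-of-fit combines correlation, a bias ratio, and a variability ratio based on coefficients of variation:

```python
import numpy as np

def kge_prime(sim, obs):
    """KGE' = 1 - sqrt((r-1)^2 + (beta-1)^2 + (gamma-1)^2), with
    r = Pearson correlation, beta = mean(sim)/mean(obs), and
    gamma = CV(sim)/CV(obs) (the Kling et al. 2012 modification)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    beta = sim.mean() / obs.mean()
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (beta - 1.0) ** 2 + (gamma - 1.0) ** 2)
```

A perfect match scores 1; using CV ratios rather than raw standard deviations keeps the bias and variability terms from being conflated.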

  9. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting.

    PubMed

    Khan, Tarik A; Friedensohn, Simon; Gorter de Vries, Arthur R; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T

    2016-03-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, enabling tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion, the intraclonal diversity index, which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence, but only when using MAF error- and bias-corrected Ig-seq data. The improved accuracy afforded by MAF has the potential to greatly advance Ig-seq and its utility in immunology and biotechnology.
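The core of UID-based correction can be illustrated with a toy collapse step: reads sharing a UID descend from one tagged molecule, so duplicates are collapsed (fixing abundance counts) and per-position majority voting removes PCR/sequencing errors. This is a simplified stand-in for the MAF pipeline, not its actual implementation:

```python
from collections import Counter, defaultdict

def umi_collapse(reads):
    """reads: list of (umi, sequence) pairs of equal-length sequences.
    Collapse PCR duplicates by UID and error-correct each molecule by
    per-position majority vote."""
    groups = defaultdict(list)
    for umi, seq in reads:
        groups[umi].append(seq)
    consensus = {}
    for umi, seqs in groups.items():
        # Majority base at each position across all reads of this molecule.
        consensus[umi] = "".join(
            Counter(col).most_common(1)[0][0] for col in zip(*seqs))
    return consensus  # one corrected sequence per original molecule
```

Counting unique UIDs per clone, rather than raw reads, is also the idea behind abundance metrics such as the intraclonal diversity index.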

  10. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting

    PubMed Central

    Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.

    2016-01-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, enabling tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion, the intraclonal diversity index, which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence, but only when using MAF error- and bias-corrected Ig-seq data. The improved accuracy afforded by MAF has the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518

  12. Recent Advances in the Salinity Retrieval Algorithms for Aquarius and SMAP

    NASA Astrophysics Data System (ADS)

    Meissner, T.; Wentz, F. J.

    2016-12-01

    Our presentation discusses the latest improvements in the salinity retrievals for both Aquarius and SMAP since the last releases. Aquarius V4.0 was released in June 2015 and SMAP V1.0 in November 2015. Upcoming releases are planned for SMAP (V2.0) in August 2016 and for Aquarius (V5.0) in late 2017. The full 360° look capability of SMAP makes it possible to take observations from the forward and backward looking directions at the same instant in time. This two-look capability strongly aids the salinity retrievals. One of the largest spurious contaminations in the salinity retrievals is caused by the galaxy that is reflected from the ocean surface. Because in most instances the reflected galaxy appears in only either the forward or the backward look, it is possible to determine its contribution by taking the difference of the measured SMAP brightness temperatures between the two looks. Our results suggest that the surface roughness that is used in the galactic correction needs to be increased and that the strength of some of the galactic sources needs to be slightly adjusted. The improved galaxy correction is being implemented in upcoming Aquarius and SMAP salinity releases and strongly aids the mitigation of residual zonal and temporal biases observed in both products. Another major cause of the observed zonal biases in SMAP is the emissive SMAP mesh antenna. In order to correct for it, the physical temperature of the antenna is needed. No direct measurements are available, only a thermal model. We discuss recent improvements in the correction for the emissive SMAP antenna and show how most of the zonal biases in V1.0 can be mitigated. Finally, we show that observed salty biases at higher Northern latitudes can be explained by inaccuracies in the model that is used in correcting for the absorption by atmospheric oxygen. These biases can be decreased by fine-tuning the parameters in the absorption model.

  13. An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method.

    PubMed

    Fu, Jingyang; Li, Guangyun; Wang, Li

    2018-04-27

    Unlike for GPS, GLONASS, GALILEO and BeiDou-3, it is confirmed that code multipath biases (CMBs), which originate from the satellite end and can exceed 1 m, are commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. In order to mitigate their adverse effects on precise absolute applications that use the code measurements, we propose in this paper an improved correction model to estimate the CMB. Unlike the traditional model, which considers the correction values to be orbit-type dependent (estimating two sets of values for IGSO and MEO, respectively) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate the corrections for each BDS IGSO and MEO satellite individually on the one hand, and use a denser elevation node separation of 5° to model the CMB variations on the other. Currently, institutions such as IGS-MGEX operate over 120 stations providing daily BDS observations. These large amounts of data provide adequate support to refine the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that, for satellites on the same orbit type, obvious differences can be found in the CMB at the same node and frequency. Results show that the new correction model can improve the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shorten the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improve the PPP positioning accuracy. With our improved correction model, the usage of WL ambiguities is increased from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR are shortened from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model.
In addition, both the traditional and improved CMB models perform better in these respects than a model that does not account for the CMB correction.
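The correction itself, estimated per satellite and frequency, is a piecewise-linear function of elevation sampled at 5° nodes; applying it is a simple interpolation. The node values below are placeholders, not the paper's estimates:

```python
import numpy as np

def cmb_correction(elev_deg, node_values, node_sep=5.0):
    """Piecewise-linear code multipath bias correction versus elevation.
    node_values[i] is the correction (m) at elevation i * node_sep degrees;
    in practice these are estimated per satellite and per frequency."""
    nodes = np.arange(len(node_values)) * node_sep
    return np.interp(elev_deg, nodes, node_values)

def corrected_pseudorange(p_obs, elev_deg, node_values):
    """Subtract the elevation-dependent CMB from an observed pseudorange (m)."""
    return p_obs - cmb_correction(elev_deg, node_values)
```

Halving the node separation from 10° to 5° doubles the number of `node_values` to estimate, which is why the dense MGEX network is needed to support per-satellite fits.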

  14. An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method

    PubMed Central

    Fu, Jingyang; Li, Guangyun; Wang, Li

    2018-01-01

    Unlike for GPS, GLONASS, GALILEO and BeiDou-3, it is confirmed that code multipath biases (CMBs), which originate from the satellite end and can exceed 1 m, are commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. In order to mitigate their adverse effects on precise absolute applications that use the code measurements, we propose in this paper an improved correction model to estimate the CMB. Unlike the traditional model, which considers the correction values to be orbit-type dependent (estimating two sets of values for IGSO and MEO, respectively) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate the corrections for each BDS IGSO and MEO satellite individually on the one hand, and use a denser elevation node separation of 5° to model the CMB variations on the other. Currently, institutions such as IGS-MGEX operate over 120 stations providing daily BDS observations. These large amounts of data provide adequate support to refine the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that, for satellites on the same orbit type, obvious differences can be found in the CMB at the same node and frequency. Results show that the new correction model can improve the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shorten the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improve the PPP positioning accuracy. With our improved correction model, the usage of WL ambiguities is increased from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR are shortened from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model.
In addition, both the traditional and improved CMB models perform better in these respects than a model that does not account for the CMB correction. PMID:29702559

  15. Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model

    NASA Astrophysics Data System (ADS)

    Ahlgren, K.; Li, X.

    2017-12-01

    Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors so that the data are consistent with a global geopotential model and airborne data in the corresponding wavelengths is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation that utilize radial basis functions and wavelet decomposition are investigated. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. Two techniques are used to assess the accuracy of each bias estimation method. First, each method is used to predict the bias for two high-quality (unbiased and high-accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013; Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that, which provides an absolute accuracy metric for each method. Second, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model.
The accuracy of each geoid model provides an additional metric to assess the performance of each bias estimation method. The geoid model accuracies are assessed using the two GSVS lines and GPS-leveling data across the United States.
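At its simplest, a per-survey bias estimate is a robust average misfit of the survey against the independent satellite/airborne reference evaluated at the survey points. The wavelet band-limiting that restricts the comparison to the wavelengths each dataset resolves is omitted in this sketch:

```python
import numpy as np

def survey_bias(survey_g, reference_g):
    """Single additive bias estimate for one gravity survey (mGal), taken as
    the median misfit against an independent reference field evaluated at the
    same points. The median resists outliers in historical surveys."""
    return float(np.median(survey_g - reference_g))

def debias(survey_g, reference_g):
    """Remove the estimated per-survey bias from the survey gravity values."""
    return survey_g - survey_bias(survey_g, reference_g)
```

In the actual method, the reference combines GRACE/GOCE, GRAV-D airborne data, and forward-modeled topography at matched wavelet levels, so only the wavelengths each source measures reliably enter the misfit.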

  16. Adaptable gene-specific dye bias correction for two-channel DNA microarrays.

    PubMed

    Margaritis, Thanasis; Lijnzaad, Philip; van Leenen, Dik; Bouwmeester, Diane; Kemmeren, Patrick; van Hooff, Sander R; Holstege, Frank C P

    2009-01-01

    DNA microarray technology is a powerful tool for monitoring gene expression or for finding the location of DNA-bound proteins. DNA microarrays can suffer from gene-specific dye bias (GSDB), causing some probes to be affected more by the dye than by the sample. This results in large measurement errors, which vary considerably for different probes and also across different hybridizations. GSDB is not corrected by conventional normalization and has been difficult to address systematically because of its variance. We show that GSDB is influenced by label incorporation efficiency, explaining the variation of GSDB across different hybridizations. A correction method (Gene- And Slide-Specific Correction, GASSCO) is presented, whereby sequence-specific corrections are modulated by the overall bias of individual hybridizations. GASSCO outperforms earlier methods and works well on a variety of publicly available datasets covering a range of platforms, organisms and applications, including ChIP on chip. A sequence-based model is also presented, which predicts which probes will suffer most from GSDB, useful for microarray probe design and correction of individual hybridizations. Software implementing the method is publicly available.
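The GASSCO idea, per-gene dye-bias estimates modulated by each hybridization's overall bias, reduces per slide to a one-parameter regression. This is a schematic of the modulation step only; variable names and the estimation of the per-gene profile are our assumptions, not the paper's implementation:

```python
import numpy as np

def slide_factor(M, gene_bias):
    """Least-squares scale s for one hybridization: M ~ s * gene_bias,
    where gene_bias holds the per-gene dye-bias profile."""
    return float(np.dot(M, gene_bias) / np.dot(gene_bias, gene_bias))

def dye_bias_correct(M, gene_bias):
    """Subtract the slide-modulated gene-specific dye bias from log-ratios M."""
    return M - slide_factor(M, gene_bias) * gene_bias
```

Slides with efficient label incorporation get a small `slide_factor` and hence a small correction, which is how one gene-level profile can serve hybridizations of varying overall bias.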

  17. Adaptable gene-specific dye bias correction for two-channel DNA microarrays

    PubMed Central

    Margaritis, Thanasis; Lijnzaad, Philip; van Leenen, Dik; Bouwmeester, Diane; Kemmeren, Patrick; van Hooff, Sander R; Holstege, Frank CP

    2009-01-01

    DNA microarray technology is a powerful tool for monitoring gene expression or for finding the location of DNA-bound proteins. DNA microarrays can suffer from gene-specific dye bias (GSDB), causing some probes to be affected more by the dye than by the sample. This results in large measurement errors, which vary considerably for different probes and also across different hybridizations. GSDB is not corrected by conventional normalization and has been difficult to address systematically because of its variance. We show that GSDB is influenced by label incorporation efficiency, explaining the variation of GSDB across different hybridizations. A correction method (Gene- And Slide-Specific Correction, GASSCO) is presented, whereby sequence-specific corrections are modulated by the overall bias of individual hybridizations. GASSCO outperforms earlier methods and works well on a variety of publically available datasets covering a range of platforms, organisms and applications, including ChIP on chip. A sequence-based model is also presented, which predicts which probes will suffer most from GSDB, useful for microarray probe design and correction of individual hybridizations. Software implementing the method is publicly available. PMID:19401678

  18. Ethnic Group Bias in Intelligence Test Items.

    ERIC Educational Resources Information Center

    Scheuneman, Janice

    In previous studies of ethnic group bias in intelligence test items, the question of bias has been confounded with ability differences between the ethnic group samples compared. The present study is based on a conditional probability model in which an unbiased item is defined as one where the probability of a correct response to an item is the…

  19. Bias assessment of lower and middle tropospheric CO2 concentrations of GOSAT/TANSO-FTS TIR version 1 product

    NASA Astrophysics Data System (ADS)

    Saitoh, Naoko; Kimoto, Shuhei; Sugimura, Ryo; Imasu, Ryoichi; Shiomi, Kei; Kuze, Akihiko; Niwa, Yosuke; Machida, Toshinobu; Sawa, Yousuke; Matsueda, Hidekazu

    2017-10-01

    CO2 observations in the free troposphere can be useful for constraining CO2 source and sink estimates at the surface since they represent CO2 concentrations away from point source emissions. The thermal infrared (TIR) band of the Thermal and Near Infrared Sensor for Carbon Observation (TANSO) Fourier transform spectrometer (FTS) on board the Greenhouse Gases Observing Satellite (GOSAT) has been observing global CO2 concentrations in the free troposphere for about 8 years and thus could provide a dataset with which to evaluate the vertical transport of CO2 from the surface to the upper atmosphere. This study evaluated biases in the TIR version 1 (V1) CO2 product in the lower troposphere (LT) and the middle troposphere (MT) (736-287 hPa), on the basis of comparisons with CO2 profiles obtained over airports using Continuous CO2 Measuring Equipment (CME) in the Comprehensive Observation Network for Trace gases by AIrLiner (CONTRAIL) project. Bias-correction values are presented for TIR CO2 data for each pressure layer in the LT and MT regions during each season and in each latitude band: 40-20° S, 20° S-20° N, 20-40° N, and 40-60° N. TIR V1 CO2 data had consistent negative biases of 1-1.5 % compared with CME CO2 data in the LT and MT regions, with the largest negative biases at 541-398 hPa, partly due to the use of 10 µm CO2 absorption band in conjunction with 15 and 9 µm absorption bands in the V1 retrieval algorithm. Global comparisons between TIR CO2 data to which the bias-correction values were applied and CO2 data simulated by a transport model based on the Nonhydrostatic ICosahedral Atmospheric Model (NICAM-TM) confirmed the validity of the bias-correction values evaluated over airports in limited areas. 
In low latitudes in the upper MT region (398-287 hPa), however, TIR CO2 data in northern summer were overcorrected by these bias-correction values; this is because the bias-correction values were determined using comparisons mainly over airports in Southeast Asia, where CO2 concentrations in the upper atmosphere display relatively large variations due to strong updrafts.
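Applying a relative (percent) bias correction of the kind tabulated per layer, season, and latitude band amounts to dividing out the bias factor. The numbers in the usage test are illustrative, not the study's published values:

```python
def apply_bias_correction(co2_ppm, bias_percent):
    """Undo a relative bias: if the retrieval is biased by bias_percent,
    retrieved = true * (1 + bias_percent / 100), so divide by that factor.
    A negative bias_percent (e.g. the 1-1.5% negative biases found for
    TIR V1) therefore raises the corrected value."""
    return co2_ppm / (1.0 + bias_percent / 100.0)
```

In practice the correction value would be looked up by pressure layer, season, and latitude band before being applied, which is why a lookup keyed to airport comparison sites can overcorrect regions those sites do not represent.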

  20. Model Development for MODIS Thermal Band Electronic Crosstalk

    NASA Technical Reports Server (NTRS)

    Chang, Tiejun; Wu, Aisheng; Geng, Xu; Li, Yonghonh; Brinkman, Jake; Keller, Graziela; Xiong, Xiaoxiong

    2016-01-01

    MODerate-resolution Imaging Spectroradiometer (MODIS) has 36 bands, among them 16 thermal emissive bands covering a wavelength range from 3.8 to 14.4 µm. After 16 years of on-orbit operation, the electronic crosstalk of a few Terra MODIS thermal emissive bands has developed substantial issues that cause biases in the Earth view (EV) brightness temperature measurements and surface feature contamination. The crosstalk effects on band 27, with center wavelength at 6.7 µm, and band 29, at 8.5 µm, have increased significantly in recent years, affecting downstream products such as water vapor and cloud mask. The crosstalk effect is evident in the near-monthly scheduled lunar measurements, from which the crosstalk coefficients can be derived. The development of an alternative approach is very helpful for independent verification. In this work, a physical model was developed to assess the crosstalk impact on calibration as well as on Earth view brightness temperature retrieval. This model was applied to Terra MODIS band 29 empirically to correct the Earth brightness temperature measurements. In the model development, the detector's nonlinear response is considered. The impact of the electronic crosstalk is assessed in two steps. The first step consists of determining the impact on calibration using the on-board blackbody (BB). Due to the detector's nonlinear response and large background signal, both linear and nonlinear coefficients are affected by the crosstalk from sending bands. The second step is to calculate the effects on the Earth view brightness temperature retrieval. The effects include those from affected calibration coefficients and the contamination of Earth view measurements. The model links the measurement bias with the crosstalk coefficients, detector nonlinearity, and the ratio of Earth measurements between the sending and receiving bands. The correction of the electronic crosstalk can be implemented empirically from the processed bias at different brightness temperatures.
The implementation can be done through two approaches. In the first, as part of routine calibration assessment for the thermal infrared bands, trending over selected Earth scenes is processed for all the detectors in a band and the band-averaged bias is derived at a given time. In this case, the correction of an affected band can be made by regressing the model against the band-averaged bias, after which corrections for detector differences are applied. The second approach requires trending for individual detectors, and the bias for each detector is used for regression with the model. A test using the first approach was made for Terra MODIS band 29, with the biases derived from long-term trending of brightness temperature over ocean and Dome-C.
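The linear part of such a crosstalk correction can be sketched as follows. The coefficient values and band pairings below are hypothetical (the real coefficients come from the lunar measurements), and the detector-nonlinearity handling of the full model is omitted.

```python
import numpy as np

def correct_crosstalk(dn_receiving, dn_sending, coeffs):
    """Subtract linear crosstalk contamination from a receiving-band signal.

    dn_receiving : measured digital counts of the receiving band (e.g. band 29)
    dn_sending   : array of shape (n_senders, n_samples), sending-band counts
    coeffs       : one crosstalk coefficient per sending band (hypothetical
                   values here; the real ones are derived from lunar data)
    """
    contamination = np.asarray(coeffs) @ np.asarray(dn_sending)
    return np.asarray(dn_receiving) - contamination

dn29 = np.array([1000.0, 1010.0])
senders = np.array([[500.0, 505.0],   # e.g. a neighbouring sending band
                    [480.0, 470.0]])  # e.g. another sending band
clean = correct_crosstalk(dn29, senders, coeffs=[0.002, 0.001])
```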

  1. Comparing multilayer brain networks between groups: Introducing graph metrics and recommendations.

    PubMed

    Mandke, Kanad; Meier, Jil; Brookes, Matthew J; O'Dea, Reuben D; Van Mieghem, Piet; Stam, Cornelis J; Hillebrand, Arjan; Tewarie, Prejaas

    2018-02-01

    There is an increasing awareness of the advantages of multi-modal neuroimaging. Networks obtained from different modalities are usually treated in isolation, which, however, contradicts accumulating evidence that these networks show non-trivial interdependencies. Even networks obtained from a single modality, such as frequency-band-specific functional networks measured with magnetoencephalography (MEG), are often treated independently. Here, we discuss how a multilayer network framework allows for integration of multiple networks into a single network description and how graph metrics can be applied to quantify multilayer network organisation for group comparison. We analyse how well-known biases for single layer networks, such as effects of group differences in link density and/or average connectivity, influence multilayer networks, and we compare four schemes that aim to correct for such biases: the minimum spanning tree (MST), effective graph resistance cost minimisation, efficiency cost optimisation (ECO) and a normalisation scheme based on singular value decomposition (SVD). These schemes can be applied to the layers independently or to the multilayer network as a whole. For correction applied to whole multilayer networks, only the SVD showed sufficient bias correction. For correction applied to individual layers, three schemes (ECO, MST, SVD) could correct for biases. By using generative models as well as empirical MEG and functional magnetic resonance imaging (fMRI) data, we further demonstrated that all schemes were sensitive to changes in network topology when the original networks were perturbed. In conclusion, uncorrected multilayer network analysis leads to biases. These biases may differ between centres and studies and could consequently lead to unreproducible results, in a similar manner as for single layer networks. We therefore recommend using correction schemes prior to multilayer network analysis for group comparisons. 
Copyright © 2017 Elsevier Inc. All rights reserved.
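As an illustration of the MST idea for a single layer, here is a minimal Kruskal-style maximum spanning tree on a dense weight matrix: each layer is reduced to its N-1 strongest links, which equalises link density across groups. This is a sketch only; empirical pipelines typically use dedicated network toolboxes.

```python
def maximum_spanning_tree(weights):
    """Kruskal's algorithm on a dense symmetric weight matrix.

    Returns the edge list (i, j, w) of the maximum spanning tree: the
    backbone of the layer's strongest connections, with exactly N-1 edges
    for a connected N-node network regardless of original link density."""
    n = len(weights)
    edges = sorted(((weights[i][j], i, j)
                    for i in range(n) for j in range(i + 1, n)
                    if weights[i][j] > 0), reverse=True)
    parent = list(range(n))

    def find(x):
        # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # keep the edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j, w))
    return tree
```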

  2. A Statistical Bias Correction Tool for Generating Climate Change Scenarios in Indonesia based on CMIP5 Datasets

    NASA Astrophysics Data System (ADS)

    Faqih, A.

    2017-03-01

    Providing information regarding future climate scenarios is very important in climate change studies. Climate scenarios can be used as basic information to support adaptation and mitigation studies. In order to deliver future climate scenarios over a specific region, baseline and projection data from the outputs of global climate models (GCMs) are needed. However, due to its coarse resolution, the data have to be downscaled and bias corrected in order to obtain scenario data with better spatial resolution that match the characteristics of the observed data. Generating this downscaled data is often difficult for scientists who do not have a specific background, experience, and skill in dealing with the complex data from the GCM outputs. In this regard, it is necessary to develop a tool that simplifies the downscaling process in order to help scientists, especially in Indonesia, generate future climate scenario data for their climate change-related studies. In this paper, we introduce a tool called “Statistical Bias Correction for Climate Scenarios (SiBiaS)”. The tool is specially designed to facilitate the use of CMIP5 GCM data outputs and process their statistical bias corrections relative to reference data from observations. It is prepared to support capacity building in climate modeling in Indonesia as part of the Indonesia 3rd National Communication (TNC) project activities.
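A minimal sketch of the kind of statistical bias correction such a tool performs is empirical quantile mapping against reference observations. The quantile resolution is an assumption here, and an operational tool would add wet-day thresholds, seasonal windows, and spatial handling.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: replace each model value by the observed
    value at the same quantile of the historical distributions."""
    quantiles = np.linspace(0.0, 1.0, 101)   # assumed resolution
    model_q = np.quantile(model_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    # Find each value's quantile in the model climatology, then read off
    # the observed value at that quantile.
    q_of_x = np.interp(model_future, model_q, quantiles)
    return np.interp(q_of_x, quantiles, obs_q)
```

For a model with a constant +5 offset relative to observations, this mapping removes the offset exactly; for more realistic distributional errors it corrects each quantile separately.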

  3. Corrections for the effects of significant wave height and attitude on Geosat radar altimeter measurements

    NASA Technical Reports Server (NTRS)

    Hayne, G. S.; Hancock, D. W., III

    1990-01-01

    Range estimates from a radar altimeter have biases which are a function of the significant wave height (SWH) and the satellite attitude angle (AA). Based on results of prelaunch Geosat modeling and simulation, a correction for SWH and AA was already applied to the sea-surface height estimates from Geosat's production data processing. By fitting a detailed model radar return waveform to Geosat waveform sampler data, it is possible to provide independent estimates of the height bias, the SWH, and the AA. The waveform fitting has been carried out for 10-sec averages of Geosat waveform sampler data over a wide range of SWH and AA values. The results confirm that the Geosat sea-surface-height correction is good to well within the original dm-level specification, but that an additional height correction can be made at the level of several cm.

  4. Augmented Hebbian reweighting accounts for accuracy and induced bias in perceptual learning with reverse feedback

    PubMed Central

    Liu, Jiajuan; Dosher, Barbara Anne; Lu, Zhong-Lin

    2015-01-01

    Using an asymmetrical set of vernier stimuli (−15″, −10″, −5″, +10″, +15″) together with reverse feedback on the small subthreshold offset stimulus (−5″) induces response bias in performance (Aberg & Herzog, 2012; Herzog, Eward, Hermens, & Fahle, 2006; Herzog & Fahle, 1999). These conditions are of interest for testing models of perceptual learning because the world does not always present balanced stimulus frequencies or accurate feedback. Here we provide a comprehensive model for the complex set of asymmetric training results using the augmented Hebbian reweighting model (Liu, Dosher, & Lu, 2014; Petrov, Dosher, & Lu, 2005, 2006) and the multilocation integrated reweighting theory (Dosher, Jeter, Liu, & Lu, 2013). The augmented Hebbian learning algorithm incorporates trial-by-trial feedback, when present, as another input to the decision unit and uses the observer's internal response to update the weights otherwise; block feedback alters the weights on bias correction (Liu et al., 2014). Asymmetric training with reversed feedback incorporates biases into the weights between representation and decision. The model correctly predicts the basic induction effect, its dependence on trial-by-trial feedback, and the specificity of bias to stimulus orientation and spatial location, extending the range of augmented Hebbian reweighting accounts of perceptual learning. PMID:26418382

  5. Augmented Hebbian reweighting accounts for accuracy and induced bias in perceptual learning with reverse feedback.

    PubMed

    Liu, Jiajuan; Dosher, Barbara Anne; Lu, Zhong-Lin

    2015-01-01

    Using an asymmetrical set of vernier stimuli (-15″, -10″, -5″, +10″, +15″) together with reverse feedback on the small subthreshold offset stimulus (-5″) induces response bias in performance (Aberg & Herzog, 2012; Herzog, Eward, Hermens, & Fahle, 2006; Herzog & Fahle, 1999). These conditions are of interest for testing models of perceptual learning because the world does not always present balanced stimulus frequencies or accurate feedback. Here we provide a comprehensive model for the complex set of asymmetric training results using the augmented Hebbian reweighting model (Liu, Dosher, & Lu, 2014; Petrov, Dosher, & Lu, 2005, 2006) and the multilocation integrated reweighting theory (Dosher, Jeter, Liu, & Lu, 2013). The augmented Hebbian learning algorithm incorporates trial-by-trial feedback, when present, as another input to the decision unit and uses the observer's internal response to update the weights otherwise; block feedback alters the weights on bias correction (Liu et al., 2014). Asymmetric training with reversed feedback incorporates biases into the weights between representation and decision. The model correctly predicts the basic induction effect, its dependence on trial-by-trial feedback, and the specificity of bias to stimulus orientation and spatial location, extending the range of augmented Hebbian reweighting accounts of perceptual learning.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steigies, C. T.; Barjatya, A.

    Langmuir probes are standard instruments for plasma density measurements on many sounding rockets. These probes can be operated in swept-bias as well as in fixed-bias modes. In swept-bias Langmuir probes, contamination effects are frequently visible as a hysteresis between consecutive up and down voltage ramps. This hysteresis, if not corrected, leads to poorly determined plasma densities and temperatures. With a properly chosen sweep function, the contamination parameters can be determined from the measurements and correct plasma parameters can then be determined. In this paper, we study the contamination effects on fixed-bias Langmuir probes, where no hysteresis-type effect is seen in the data. Even though the contamination is not evident from the measurements, it does affect the plasma density fluctuation spectrum as measured by the fixed-bias Langmuir probe. We model the contamination as a simple resistor-capacitor circuit between the probe surface and the plasma. We find that measurements of small-scale plasma fluctuations (meter to sub-meter scale) along a rocket trajectory are not affected, but the measured amplitude of large-scale plasma density variation (tens of meters or larger) is attenuated. From the model calculations, we determine the amplitude and cross-over frequency of the contamination effect on fixed-bias probes for different contamination parameters. The model results also show that a fixed-bias probe operating in the ion saturation region is affected less by contamination than a fixed-bias probe operating in the electron saturation region.
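The qualitative behaviour of such a resistor-capacitor contamination model can be sketched as a first-order high-pass magnitude response: fluctuations well above the cross-over frequency pass unattenuated, while slower, large-scale variations are damped. The R and C values below are illustrative, not fitted contamination parameters.

```python
import math

def contamination_gain(freq_hz, r_ohm, c_farad):
    """Magnitude response of a first-order RC high-pass filter.

    Signals well above the cross-over frequency f_c = 1/(2*pi*R*C) pass
    with gain near 1; signals well below f_c are strongly attenuated,
    mirroring the attenuation of large-scale density variations."""
    f_c = 1.0 / (2.0 * math.pi * r_ohm * c_farad)
    x = freq_hz / f_c
    return x / math.sqrt(1.0 + x * x)

# With R = 1 MOhm and C = 1 uF, f_c is about 0.16 Hz: a 10 Hz (small-scale)
# fluctuation is essentially untouched, a 0.01 Hz (large-scale) one is damped.
```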

  7. On the importance of appropriate precipitation gauge catch correction for hydrological modelling at mid to high latitudes

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.

    2012-11-01

    Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). 
We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
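The difference between a climatological-mean and a time-space variable correction comes down to how the correction factor is computed. A toy per-day factor driven by wind speed and temperature might look like the following; the coefficients are hypothetical placeholders, not the operational Danish values.

```python
def catch_correction_factor(wind_speed, temperature_c):
    """Illustrative daily gauge catch-correction factor.

    Solid precipitation (cold days) is under-caught far more than rain,
    and the deficit grows with wind speed. Coefficients are hypothetical."""
    if temperature_c < 0.0:              # snow: strong wind-driven under-catch
        return 1.0 + 0.10 * wind_speed
    return 1.0 + 0.02 * wind_speed       # rain: modest under-catch

def corrected_precip(gauge_mm, wind_speed, temperature_c):
    """Scale a daily gauge reading by its catch-correction factor."""
    return gauge_mm * catch_correction_factor(wind_speed, temperature_c)
```

A monthly-mean (HMM-style) scheme would instead apply one fixed factor per calendar month, which cannot track day-to-day shifts between rain and snow.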

  8. Estimation and correction of different flavors of surface observation biases in ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Lorente-Plazas, Raquel; Hacker, Josua P.; Collins, Nancy; Lee, Jared A.

    2017-04-01

    The impact of assimilating surface observations has been shown in several publications to improve weather prediction inside the boundary layer as well as the flow aloft. However, the assimilation of surface observations is often far from optimal due to the presence of both model and observation biases. The sources of these biases can be diverse: an instrumental offset, errors associated with comparing point-based observations to grid-cell averages, etc. To overcome this challenge, a method was developed using the ensemble Kalman filter. The approach consists of representing each observation bias as a parameter. These bias parameters are added to the forward operator and they extend the state vector. As opposed to the observation bias estimation approaches most common in operational systems (e.g. for satellite radiances), the state vector and parameters are simultaneously updated by applying the Kalman filter equations to the augmented state. The method to estimate and correct the observation bias is evaluated using observing system simulation experiments (OSSEs) with the Weather Research and Forecasting (WRF) model. OSSEs are constructed for the conventional observation network including radiosondes, aircraft observations, atmospheric motion vectors, and surface observations. Three different kinds of biases are added to 2-meter temperature for synthetic METARs. From the simplest to the most sophisticated, the imposed biases are: (1) a spatially invariant bias, (2) a spatially varying bias proportional to topographic height differences between the model and the observations, and (3) a bias that is proportional to the temperature. The target region, characterized by complex terrain, is the western U.S. on a domain with 30-km grid spacing. Observations are assimilated every 3 hours using an 80-member ensemble during September 2012. Results demonstrate that the approach is able to estimate and correct the bias when it is spatially invariant (experiment 1). 
The more complex bias structures in experiments (2) and (3) are more difficult to estimate, but still possible. Estimating the parameter in experiments with unbiased observations results in spatial and temporal parameter variability about zero, and establishes a threshold on the accuracy of the parameter in further experiments. When the observations are biased, the mean parameter value is close to the true bias, but the temporal and spatial variability in the parameter estimates is similar to that obtained when estimating a zero bias in the observations. The distributions are related to other errors in the forecasts, indicating that the parameters are absorbing some of the forecast error from other sources. In this presentation we elucidate the reasons for the resulting parameter estimates and their variability.
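The augmented-state idea can be sketched with a scalar toy problem. This is a generic perturbed-observation EnKF, not the actual WRF configuration, and H(x, b) = x + b is an assumed forward operator: the bias parameter b rides along in the state vector and is updated by the same Kalman equations as the state x.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_augmented_update(ens_state, ens_bias, obs, obs_err_var):
    """One perturbed-observation EnKF analysis step on the augmented
    state [x, b], with forward operator H(x, b) = x + b (scalar sketch)."""
    z = np.vstack([ens_state, ens_bias])          # augmented ensemble (2, N)
    hx = ens_state + ens_bias                     # predicted observations
    # sample covariance between augmented state and predicted observations
    cov_zh = ((z - z.mean(axis=1, keepdims=True)) @ (hx - hx.mean())) / (hx.size - 1)
    var_h = hx.var(ddof=1)
    gain = cov_zh / (var_h + obs_err_var)         # Kalman gain, one per variable
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_err_var), hx.size)
    z_new = z + gain[:, None] * (obs_pert - hx)
    return z_new[0], z_new[1]
```

With a truth of x = 10 observed through a +2 instrument bias, repeated updates pull the bias ensemble mean toward 2 while the posterior predicted observation approaches the observed value.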

  9. Estimation and Correction of bias of long-term simulated climate data from Global Circulation Models (GCMs)

    NASA Astrophysics Data System (ADS)

    Mehan, S.; Gitau, M. W.

    2017-12-01

    Global circulation models (GCMs) are often used to simulate long-term climate data for use in hydrologic studies. However, some bias (the difference between simulated values and observed data) has been observed, especially when simulating precipitation events. The bias is especially evident with respect to simulating dry and wet days. This is because GCMs tend to underestimate large precipitation events, with the associated precipitation amounts being distributed to some dry days, thus leading to a larger number of wet days each with some amount of rainfall. The accuracy of precipitation simulations impacts the accuracy of other simulated components such as flow and water quality. It is, thus, very important to correct the bias associated with precipitation before it is used for any modeling applications. This study aims to correct the bias specifically associated with precipitation events, with a focus on the Western Lake Erie Basin (WLEB). Analytical, statistical, and extreme event analyses for three different stations (Adrian, MI; Norwalk, OH; and Fort Wayne, IN) in the WLEB were carried out to quantify the bias. Findings indicated that GCMs overestimated the wet sequences and underestimated dry day probabilities. The numbers of wet sequences simulated by nine GCMs each from two different open sources were 310-678 (Fort Wayne, IN); 318-600 (Adrian, MI); and 346-638 (Norwalk, OH), compared with 166, 150, and 180 in the observations, respectively. Predicted conditional probabilities of a dry day followed by a wet day (P(D|W)) ranged between 0.16-0.42 (Fort Wayne, IN); 0.29-0.41 (Adrian, MI); and 0.13-0.40 (Norwalk, OH) from the different GCMs, compared to 0.52 (Fort Wayne, IN and Norwalk, OH) and 0.54 (Adrian, MI) from the observed climate data. There was a difference of 0-8.5% between the distribution of simulated climate values and observed climate data for precipitation and temperature for all three stations (Cohen's d effect size < 0.2). 
Further work involves the use of Stochastic Weather Generators to correct the conditional probabilities and better capture the dry and wet events for use in the hydrologic and water resources modeling.
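The dry-to-wet transition probability discussed above can be estimated directly from a daily precipitation series. A minimal sketch, with an assumed wet-day threshold:

```python
def transition_prob(precip, wet_threshold=0.1):
    """Probability that a dry day is followed by a wet day, estimated from
    a daily precipitation series (wet_threshold in mm is an assumed cut-off)."""
    wet = [p >= wet_threshold for p in precip]
    dry_days = 0
    dry_then_wet = 0
    for today, tomorrow in zip(wet, wet[1:]):
        if not today:
            dry_days += 1
            if tomorrow:
                dry_then_wet += 1
    # undefined if the series contains no dry day with a following day
    return dry_then_wet / dry_days if dry_days else float("nan")
```

Comparing this statistic between GCM output and station records is one way to quantify the wet-sequence overestimation reported above.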

  10. Innovative Liner Concepts: Experiments and Impedance Modeling of Liners Including the Effect of Bias Flow

    NASA Technical Reports Server (NTRS)

    Kelly, Jeff; Betts, Juan Fernando; Fuller, Chris

    2000-01-01

    The normal impedance of perforated-plate acoustic liners, including the effect of bias flow, was studied. Two impedance models were developed by modeling the internal flows of perforate orifices as infinite tubes, with the inclusion of end corrections to handle finite length effects. These models assumed incompressible and compressible flows, respectively, between the far field and the perforate orifice. The incompressible model was used to predict impedance results for perforated plates with percent open areas ranging from 5% to 15%. The predicted resistance results showed better agreement with experiments for the higher percent open area samples. The agreement also tended to deteriorate as bias flow was increased. For perforated plates with percent open areas ranging from 1% to 5%, the compressible model was used to predict impedance results. The model predictions were closer to the experimental resistance results for the 2% to 3% open area samples. The predictions tended to deteriorate as bias flow was increased. The reactance results were well predicted by the models for the higher percent open areas, but deteriorated as the percent open area was lowered (5%) and bias flow was increased. The incompressible model was then fitted to the experimental database, using an optimization routine that found the optimal set of multiplication coefficients for the non-dimensional groups, minimizing the least-squares slope error between predictions and experiments. The result of the fit indicated that terms not associated with bias flow required a greater degree of correction than the terms associated with bias flow. The fitted model improved agreement with experiments by nearly 15% for the low percent open area (5%) samples when compared to the unfitted model. The fitted and unfitted models performed equally well for the higher percent open areas (10% and 15%).

  11. A model-based correction for outcome reporting bias in meta-analysis.

    PubMed

    Copas, John; Dwan, Kerry; Kirkham, Jamie; Williamson, Paula

    2014-04-01

    It is often suspected (or known) that outcomes published in medical trials are selectively reported. A systematic review for a particular outcome of interest can only include studies where that outcome was reported, and so may omit, for example, a study that has considered several outcome measures but only reports those giving significant results. Using the methodology of the Outcome Reporting Bias (ORB) in Trials study (Kirkham and others, 2010, The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews, British Medical Journal 340, c365), we suggest a likelihood-based model for estimating the effect of ORB on confidence intervals and p-values in meta-analysis. Correcting for bias has the effect of moving estimated treatment effects toward the null, and hence toward more cautious assessments of significance. The bias can be very substantial, sometimes sufficient to completely overturn previous claims of significance. We re-analyze two contrasting examples, and derive a simple fixed-effects approximation that can be used to give an initial estimate of the effect of ORB in practice.
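For context, the uncorrected fixed-effects estimate that such a correction starts from is standard inverse-variance pooling. This is a generic sketch of that baseline, not the paper's ORB-adjusted likelihood.

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of study-level estimates.

    Each study is weighted by 1/SE^2; the pooled standard error is
    sqrt(1 / sum of weights). An ORB adjustment would further account
    for studies presumed measured but unreported."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se
```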

  12. Estimating the price elasticity of beer: meta-analysis of data with heterogeneity, dependence, and publication bias.

    PubMed

    Nelson, Jon P

    2014-01-01

    Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting the number of estimates per study, author-restricted samples, and author-specific variables. Publication bias is addressed using funnel graphs, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, roughly 50% less elastic than values commonly used in alcohol tax policy simulations. Copyright © 2013 Elsevier B.V. All rights reserved.
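Egger's intercept model mentioned above regresses the standardized effect (estimate/SE) on precision (1/SE); a non-zero intercept suggests small-study or publication bias. A self-contained ordinary-least-squares sketch (illustrative only, not the paper's weighted specification):

```python
def egger_intercept(estimates, std_errors):
    """Intercept of the OLS regression of estimate/SE on 1/SE (Egger's test).

    Under no small-study bias the intercept should be near zero; a
    significantly non-zero value flags asymmetry in the funnel plot."""
    y = [e / s for e, s in zip(estimates, std_errors)]   # standardized effects
    x = [1.0 / s for s in std_errors]                    # precisions
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return ybar - slope * xbar
```

In practice the intercept's standard error (and a t-test) would accompany this point estimate.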

  13. Quality Controlled Radiosonde Profile from MC3E

    DOE Data Explorer

    Toto, Tami; Jensen, Michael

    2014-11-13

    The sonde-adjust VAP produces data that correct documented biases in radiosonde humidity measurements. Unique fields contained within this datastream include smoothed original relative humidity, dry-bias-corrected relative humidity, and final corrected relative humidity. The smoothed RH field refines the relative humidity from integers - the resolution of the instrument - to fractions of a percent. This profile is then used to calculate the dry-bias-corrected field. The final correction fixes a time-lag problem and uses the dry-bias field as input into the algorithm. In addition to dry bias, solar heating is another correction that is encompassed in the final corrected relative humidity field. Additional corrections were made to soundings at the extended facility sites (S0*) as necessary: erroneous surface elevation was corrected (up through the rest of the sounding) for S03, S04, and S05, and erroneous surface pressure was corrected at Chanute (S02).

  14. Bias-corrected diagnostic performance of the naked-eye single-tube red-cell osmotic fragility test (NESTROFT): an effective screening tool for beta-thalassemia.

    PubMed

    Mamtani, Manju; Jawahirani, Anil; Das, Kishor; Rughwani, Vinky; Kulkarni, Hemant

    2006-08-01

    It is being increasingly recognized that a majority of the countries in the thalassemia belt need a cost-effective screening program as the first step towards control of thalassemia. Although the naked-eye single-tube red-cell osmotic fragility test (NESTROFT) has been considered a very effective screening tool for the beta-thalassemia trait, assessment of its diagnostic performance has been affected by reference-test and verification bias. Here, we set out to provide estimates of the sensitivity and specificity of NESTROFT corrected for these potential biases. We conducted a cross-sectional diagnostic test evaluation study using data from 1563 subjects from Central India with a high prevalence of beta-thalassemia. We used latent class modelling, after ensuring its validity, to account for the reference-test bias, and global sensitivity analysis to control the verification bias. We also compared the results of latent class modelling with those of five discriminant indexes. We observed that across a range of cut-offs for the mean corpuscular volume (MCV) and the hemoglobin A2 (HbA2) concentration, the average sensitivity and specificity of NESTROFT obtained from latent class modelling were 99.8% and 83.7%, respectively. These estimates were comparable to those characterizing the diagnostic performance of HbA2, which is considered by many as the reference test to detect beta-thalassemia. After correction for the verification bias, these estimates were 93.4% and 97.2%, respectively. Combined with the inexpensive and quick disposition of NESTROFT, these results strongly support its candidature as a screening tool, especially in resource-poor and high-prevalence settings.

  15. Correction of gene expression data: Performance-dependency on inter-replicate and inter-treatment biases.

    PubMed

    Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren

    2014-10-20

    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both of the bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Assimilation of ASCAT near-surface soil moisture into the SIM hydrological model over France

    NASA Astrophysics Data System (ADS)

    Draper, C.; Mahfouf, J.-F.; Calvet, J.-C.; Martin, E.; Wagner, W.

    2011-12-01

    This study examines whether the assimilation of remotely sensed near-surface soil moisture observations might benefit an operational hydrological model, specifically Météo-France's SAFRAN-ISBA-MODCOU (SIM) model. Soil moisture data derived from ASCAT backscatter observations are assimilated into SIM using a Simplified Extended Kalman Filter (SEKF) over 3.5 years. The benefit of the assimilation is tested by comparison to a delayed cut-off version of SIM, in which the land surface is forced with more accurate atmospheric analyses, due to the availability of additional atmospheric observations after the near-real time data cut-off. However, comparing the near-real time and delayed cut-off SIM models revealed that the main difference between them is a dry bias in the near-real time precipitation forcing, which resulted in a dry bias in the root-zone soil moisture and associated surface moisture flux forecasts. While assimilating the ASCAT data did reduce the root-zone soil moisture dry bias (by nearly 50%), this was more likely due to a bias within the SEKF, than due to the assimilation having accurately responded to the precipitation errors. Several improvements to the assimilation are identified to address this, and a bias-aware strategy is suggested for explicitly correcting the model bias. However, in this experiment the moisture added by the SEKF was quickly lost from the model surface due to the enhanced surface fluxes (particularly drainage) induced by the wetter soil moisture states. Consequently, by the end of each winter, during which frozen conditions prevent the ASCAT data from being assimilated, the model land surface had returned to its original (dry-biased) climate. This highlights that it would be more effective to address the precipitation bias directly, than to correct it by constraining the model soil moisture through data assimilation.

  18. Power spectrum precision for redshift space distortions

    NASA Astrophysics Data System (ADS)

    Linder, Eric V.; Samsing, Johan

    2013-02-01

Redshift space distortions in galaxy clustering offer a promising technique for probing the growth rate of structure and testing dark energy properties and gravity. We consider to what accuracy they need to be modeled in order not to unduly bias cosmological conclusions. Fitting for nonlinear and redshift space corrections to the linear theory real space density power spectrum in bins in wavemode, we analyze both the effect of marginalizing over these corrections and the bias due to not correcting them fully. While naively subpercent accuracy is required to avoid bias in the unmarginalized case, in the fitting approach the Kwan-Lewis-Linder reconstruction function for redshift space distortions is found to be accurately self-calibrated, with little degradation in dark energy and gravity parameter estimation for a next-generation galaxy redshift survey such as BigBOSS.

  19. Comparing State SAT Scores: Problems, Biases, and Corrections.

    ERIC Educational Resources Information Center

    Gohmann, Stephen F.

    1988-01-01

    One method to correct for selection bias in comparing Scholastic Aptitude Test (SAT) scores among states is presented, which is a modification of J. J. Heckman's Selection Bias Correction (1976, 1979). Empirical results suggest that sample selection bias is present in SAT score regressions. (SLD)
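The two-step logic behind Heckman-style selection corrections can be demonstrated with a small simulation. This is a generic sketch of the classic estimator, not Gohmann's specific modification; for clarity the selection index is taken as known, where a real application would estimate it with a first-stage probit, and all numbers are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 50_000

# Synthetic SAT-style setting: outcome y = 2*x + e, but y is observed only
# when x + z + u > 0 (self-selection); corr(u, e) = 0.8 biases naive OLS.
x = rng.standard_normal(n)
z = rng.standard_normal(n)
ue = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=n)
u, e = ue[:, 0], ue[:, 1]

index = x + z                  # selection index (known here for clarity;
sel = index + u > 0            # in practice fitted with a first-stage probit)
y = 2.0 * x + e

xs, ys, ws = x[sel], y[sel], index[sel]
ones = np.ones(xs.size)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS on the selected sample: slope biased away from 2.0
b_naive = ols(np.column_stack([ones, xs]), ys)[1]

# Step 2: add the inverse Mills ratio as a regressor to absorb E[e | selected]
mills = norm.pdf(ws) / norm.cdf(ws)
b_corr = ols(np.column_stack([ones, xs, mills]), ys)[1]

print(f"naive slope {b_naive:.3f}, corrected slope {b_corr:.3f}, truth 2.0")
```

With 50,000 draws the naive slope is visibly attenuated while the Mills-ratio regression recovers the true coefficient to within sampling error.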

  20. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high-confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL; Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.

  1. Correction of distortions in distressed mothers' ratings of their preschool children's psychopathology.

    PubMed

    Müller, Jörg M; Furniss, Tilman

    2013-11-30

The often-reported low agreement about child psychopathology between multiple informants has led to various suggestions about how to address discrepant ratings. Among the factors discussed as potentially lowering agreement are informant credibility, reliability, and psychopathology, the last of which is of interest in this paper. We tested three different models, namely the accuracy model, the distortion model, and an integrated so-called combined model, which conceptualize parental ratings used to assess child psychopathology. The data comprise ratings of child psychopathology from multiple informants (mother, therapist and kindergarten teacher) and ratings of maternal psychopathology. The children were patients in a preschool psychiatry unit (N=247). The results from structural equation modeling show that maternal ratings of child psychopathology were biased by maternal psychopathology (distortion model). On this statistical background, we suggest a method to adjust biased maternal ratings. We illustrate the maternal bias by comparing the ratings of mothers to expert ratings (combined kindergarten teacher and therapist ratings) and show that the correction equation increases the agreement between maternal and expert ratings. We conclude that this approach may help to reduce misclassification of preschool children as 'clinical' on the basis of biased maternal ratings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
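Air-mass-dependent radiance bias correction of this general kind is commonly implemented as a linear regression of observed-minus-background (O−B) departures on a few air-mass predictors, whose prediction is then subtracted. The sketch below is a generic illustration with synthetic numbers, not the authors' actual TOVS scheme; the predictors and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical air-mass predictors (e.g. two layer thicknesses, in km);
# the values below are illustrative only.
p1 = rng.normal(8.7, 0.2, n)
p2 = rng.normal(11.5, 0.3, n)

# Simulated O-B brightness-temperature departures: an air-mass-dependent
# systematic error plus random instrument/representativeness noise (K)
bias = 0.5 + 1.2 * (p1 - 8.7) - 0.8 * (p2 - 11.5)
omb = bias + rng.normal(0.0, 0.3, n)

# Fit bias = b0 + b1*p1 + b2*p2 by least squares, then subtract the
# predicted bias from the departures ("tuning" the radiances)
X = np.column_stack([np.ones(n), p1, p2])
beta = np.linalg.lstsq(X, omb, rcond=None)[0]
corrected = omb - X @ beta

print(f"O-B mean {omb.mean():.3f} -> {corrected.mean():.3f}, "
      f"std {omb.std():.3f} -> {corrected.std():.3f}")
```

The corrected departures are unbiased by construction, and their spread shrinks because the air-mass-dependent component has been removed.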

  3. A groundwater data assimilation application study in the Heihe mid-reach

    NASA Astrophysics Data System (ADS)

    Ragettli, S.; Marti, B. S.; Wolfgang, K.; Li, N.

    2017-12-01

The present work focuses on modelling of the groundwater flow in the mid-reach of the endorheic river Heihe in the Zhangye oasis (Gansu province) in arid north-west China. In order to optimise the water resources management in the oasis, reliable forecasts of groundwater level development under different management options and environmental boundary conditions have to be produced. To this end, groundwater flow is modelled with Modflow and coupled to an Ensemble Kalman Filter programmed in Matlab. The model is updated with monthly time steps, featuring perturbed boundary conditions to account for uncertainty in model forcing. Constant biases between model and observations have been corrected prior to updating and compared to model runs without bias correction. Different options for data assimilation (states and/or parameters), updating frequency, and measures against filter inbreeding (damping factor, covariance inflation, spatial localization) have been tested against each other. Results show a high dependency of the Ensemble Kalman Filter performance on the selection of observations for data assimilation. For the present regional model, bias correction is necessary for a good filter performance. A combination of spatial localization and covariance inflation is further advisable to reduce filter inbreeding problems. Best performance is achieved if parameter updates are not large, an indication of good prior model calibration. For this groundwater system with constant or only slowly changing parameter values, asynchronous updating of parameter values once every five years (with data of the past five years) combined with synchronous updating of the groundwater levels is better suited than synchronous updating of both groundwater levels and parameters at every time step with a damping factor. The filter is not able to correct time lags of signals.
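Several of the ingredients mentioned above (a perturbed-observation ensemble Kalman update, multiplicative covariance inflation, and a damping factor) can be sketched for a single analysis step on a scalar groundwater head. The numbers are illustrative and not taken from the Heihe model.

```python
import numpy as np

rng = np.random.default_rng(2)
ne = 100                                  # ensemble size

truth = 3.0                               # true groundwater head (m)
obs_err = 0.5
y = truth + rng.normal(0.0, obs_err)      # one head observation

# Prior ensemble, biased high; a constant bias would be removed beforehand
ens = rng.normal(5.0, 1.0, ne)

# Multiplicative covariance inflation to counter filter inbreeding
infl = 1.1
ens = ens.mean() + infl * (ens - ens.mean())

# Kalman gain for a direct observation of the state
P = ens.var(ddof=1)
K = P / (P + obs_err**2)

# Perturbed-observation update with a damping factor on the increment
damp = 0.7
y_pert = y + rng.normal(0.0, obs_err, ne)
ens = ens + damp * K * (y_pert - ens)

print(f"analysis mean {ens.mean():.2f} (prior 5.0, truth {truth})")
```

The analysis mean moves from the biased prior toward the observation, and the ensemble spread tightens; damping deliberately leaves part of the increment unapplied to stabilise the filter over repeated cycles.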

  4. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    PubMed

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
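The sampling-error half of the taxonomy can be illustrated with a toy simulation: regressing a group-level outcome on observed group means attenuates the contextual effect by the reliability of the mean, and dividing by that reliability recovers it. This is a minimal sketch of the idea, not the authors' latent-variable estimator, and the variance components are taken as known rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(5)
G, n_j = 2000, 5               # number of groups, individuals per group
tau2, sig2 = 1.0, 4.0          # between-group and within-group variance
beta = 1.0                     # true contextual effect

mu = rng.normal(0.0, np.sqrt(tau2), G)           # latent group means (L2)
x = mu[:, None] + rng.normal(0.0, np.sqrt(sig2), (G, n_j))
xbar = x.mean(axis=1)                            # observed group means
y = beta * mu + rng.normal(0.0, 0.5, G)          # group-level outcome

# Naive regression of y on the observed group mean: attenuated slope
b_naive = np.cov(xbar, y)[0, 1] / np.var(xbar, ddof=1)

# Reliability of the observed group mean (ICC-style correction for
# sampling error); known here, estimated from data in practice
lam = tau2 / (tau2 + sig2 / n_j)
b_corr = b_naive / lam

print(f"naive {b_naive:.3f}, corrected {b_corr:.3f}, truth {beta}")
```

With only five individuals per group the reliability is 0.56, so the naive slope is roughly halved; the correction restores it, at the cost of larger variance when groups are few, which mirrors the accuracy-bias trade-off discussed in the abstract.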

  5. Implications of CO Bias for Ozone and Methane Lifetime in a CCM

    NASA Technical Reports Server (NTRS)

    Strode, Sarah; Duncan, Bryan Neal; Yegorova, Elena; Douglass, Anne

    2013-01-01

    A low bias in carbon monoxide compared to observations at high latitudes is a common feature of chemistry climate models. CO bias can both indicate and contribute to a bias in modeled OH and methane lifetime. This study examines possible causes of CO bias in the ACCMIP simulation of the GEOSCCM, and considers how attributing the CO bias to uncertainty in CO emissions versus biases in other constituents impacts the relationship between CO bias and methane lifetime. We use a simplified model of CO tagged by source with specified OH to quantify the sensitivity of the CO bias to changes in CO emissions or OH concentration, comparing the modeled CO to surface and MOPITT observations. The simplified model shows that decreasing OH in the northern hemisphere removes most of the global mean and inter-hemispheric bias in surface CO. We then use results from this analysis to explore how adjusting CO sources in the CCM impacts the concentrations of ozone, OH and methane. The CCM simulation also exhibits biases in ozone and water vapor compared to observations. We use a parameterized CO-OH-CH4 model that takes ozone and water vapor as inputs to the parameterization to examine whether correcting water and ozone biases can alter OH enough to remove the CO bias. Through this analysis, we aim to better quantify the relationship between CO bias and model biases in ozone concentrations and methane lifetime.

  6. Atlas-based analysis of cardiac shape and function: correction of regional shape bias due to imaging protocol for population studies.

    PubMed

    Medrano-Gracia, Pau; Cowan, Brett R; Bluemke, David A; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Suinesiaputra, Avan; Young, Alistair A

    2013-09-13

    Cardiovascular imaging studies generate a wealth of data which is typically used only for individual study endpoints. By pooling data from multiple sources, quantitative comparisons can be made of regional wall motion abnormalities between different cohorts, enabling reuse of valuable data. Atlas-based analysis provides precise quantification of shape and motion differences between disease groups and normal subjects. However, subtle shape differences may arise due to differences in imaging protocol between studies. A mathematical model describing regional wall motion and shape was used to establish a coordinate system registered to the cardiac anatomy. The atlas was applied to data contributed to the Cardiac Atlas Project from two independent studies which used different imaging protocols: steady state free precession (SSFP) and gradient recalled echo (GRE) cardiovascular magnetic resonance (CMR). Shape bias due to imaging protocol was corrected using an atlas-based transformation which was generated from a set of 46 volunteers who were imaged with both protocols. Shape bias between GRE and SSFP was regionally variable, and was effectively removed using the atlas-based transformation. Global mass and volume bias was also corrected by this method. Regional shape differences between cohorts were more statistically significant after removing regional artifacts due to imaging protocol bias. Bias arising from imaging protocol can be both global and regional in nature, and is effectively corrected using an atlas-based transformation, enabling direct comparison of regional wall motion abnormalities between cohorts acquired in separate studies.

  7. Normalization, bias correction, and peak calling for ChIP-seq

    PubMed Central

    Diaz, Aaron; Park, Kiyoub; Lim, Daniel A.; Song, Jun S.

    2012-01-01

    Next-generation sequencing is rapidly transforming our ability to profile the transcriptional, genetic, and epigenetic states of a cell. In particular, sequencing DNA from the immunoprecipitation of protein-DNA complexes (ChIP-seq) and methylated DNA (MeDIP-seq) can reveal the locations of protein binding sites and epigenetic modifications. These approaches contain numerous biases which may significantly influence the interpretation of the resulting data. Rigorous computational methods for detecting and removing such biases are still lacking. Also, multi-sample normalization still remains an important open problem. This theoretical paper systematically characterizes the biases and properties of ChIP-seq data by comparing 62 separate publicly available datasets, using rigorous statistical models and signal processing techniques. Statistical methods for separating ChIP-seq signal from background noise, as well as correcting enrichment test statistics for sequence-dependent and sonication biases, are presented. Our method effectively separates reads into signal and background components prior to normalization, improving the signal-to-noise ratio. Moreover, most peak callers currently use a generic null model which suffers from low specificity at the sensitivity level requisite for detecting subtle, but true, ChIP enrichment. The proposed method of determining a cell type-specific null model, which accounts for cell type-specific biases, is shown to be capable of achieving a lower false discovery rate at a given significance threshold than current methods. PMID:22499706

  8. Climate Impacts of CALIPSO-Guided Corrections to Black Carbon Aerosol Vertical Distributions in a Global Climate Model

    NASA Astrophysics Data System (ADS)

    Kovilakam, Mahesh; Mahajan, Salil; Saravanan, R.; Chang, Ping

    2017-10-01

We alleviate the bias in the tropospheric vertical distribution of black carbon aerosols (BC) in the Community Atmosphere Model (CAM4) using Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)-derived vertical profiles. A suite of sensitivity experiments is conducted with 1x, 5x, and 10x the present-day model-estimated BC concentration climatology, with (corrected, CC) and without (uncorrected, UC) the CALIPSO-corrected BC vertical distribution. The globally averaged top-of-the-atmosphere radiative flux perturbation of the CC experiments is ˜8-50% smaller compared to the uncorrected (UC) BC experiments, largely due to an increase in low-level clouds. The global average surface temperature increases, the global average precipitation decreases, and the ITCZ moves northward with the increase in BC radiative forcing, irrespective of the vertical distribution of BC. Further, tropical expansion metrics for the poleward extent of the Northern Hemisphere Hadley cell (HC) indicate that simulated HC expansion is not sensitive to existing model biases in BC vertical distribution.

  9. Wrong, but useful: regional species distribution models may not be improved by range-wide data under biased sampling.

    PubMed

    El-Gabbas, Ahmed; Dormann, Carsten F

    2018-02-01

Species distribution modeling (SDM) is an essential method in ecology and conservation. SDMs are often calibrated within one country's borders, typically along a limited environmental gradient with biased and incomplete data, making the quality of these models questionable. In this study, we evaluated how adequate national presence-only data are for calibrating regional SDMs. We trained SDMs for Egyptian bat species at two different scales: only within Egypt and at a species-specific global extent. We used two modeling algorithms: Maxent and elastic net, both under the point-process modeling framework. For each modeling algorithm, we measured the congruence of the predictions of global and regional models for Egypt, assuming that the lower the congruence, the lower the appropriateness of the Egyptian dataset to describe the species' niche. We inspected the effect of incorporating predictions from global models as an additional predictor ("prior") in regional models, and quantified the improvement in terms of AUC and the congruence between regional models run with and without priors. Moreover, we analyzed predictive performance improvements after correction for sampling bias at both scales. On average, predictions from global and regional models in Egypt only weakly concur. Collectively, the use of priors did not lead to much improvement: similar AUC and high congruence between regional models calibrated with and without priors. Correction for sampling bias led to higher model performance, whatever prior was used, making the contribution of priors even less pronounced. Under biased and incomplete sampling, the use of global bat data did not improve regional model performance. Without enough bias-free regional data, we cannot objectively identify the actual improvement of regional models after incorporating information from the global niche. However, we still believe in great potential for global model predictions to guide future surveys and improve regional sampling in data-poor regions.

  10. Climate downscaling over South America for 1971-2000: application in SMAP rainfall-runoff model for Grande River Basin

    NASA Astrophysics Data System (ADS)

    da Silva, Felipe das Neves Roque; Alves, José Luis Drummond; Cataldi, Marcio

    2018-03-01

This paper aims to validate inflow simulations concerning the present-day climate at Água Vermelha Hydroelectric Plant (AVHP, located on the Grande River Basin) based on the Soil Moisture Accounting Procedure (SMAP) hydrological model. In order to provide rainfall data to the SMAP model, the RegCM regional climate model was used with boundary conditions from the MIROC model. Initially, the present-day climate simulation performed by the RegCM model was analyzed. It was found that, in terms of rainfall, the model was able to simulate the main patterns observed over South America. A bias correction technique was also used and proved essential to reduce errors in the rainfall simulation. Comparison between rainfall simulations from RegCM and MIROC showed improvements when the dynamical downscaling was performed. Then, SMAP, a rainfall-runoff hydrological model, was used to simulate inflows at Água Vermelha. After calibration with observed rainfall, SMAP simulations were evaluated in two periods different from the one used in calibration. During calibration, SMAP captured the inflow variability observed at AVHP. During the validation periods, the hydrological model obtained better results and statistics with observed rainfall. In spite of some discrepancies, the use of simulated rainfall without bias correction captured the interannual flow variability. However, bias removal in the rainfall simulated by RegCM brought significant improvements to the simulation of natural inflows performed by SMAP: not only did the curve of simulated inflow become more similar to the observed inflow, but the statistics also improved. Improvements were likewise noticed in the inflow simulation when the rainfall was provided by the regional climate model rather than the global model. In general, the results obtained so far show that there is added value in rainfall when the regional climate model is compared to the global climate model, and that data from regional models must be bias-corrected in order to improve their results.

  11. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    PubMed

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
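The non-spatial backbone of SIMEX is easy to demonstrate: add extra measurement error at increasing multiples λ of the known error variance, track how the estimated health-effect slope degrades, and extrapolate the trend back to λ = −1 (no error). The sketch below shows classical, non-spatial SIMEX with a known error SD on synthetic data, not the paper's spatial variant.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
beta = 1.0
sig_u = 0.5                    # known measurement-error SD (SIMEX prerequisite)

x = rng.standard_normal(n)               # true exposure
w = x + rng.normal(0.0, sig_u, n)        # error-prone measured exposure
y = beta * x + rng.normal(0.0, 0.5, n)   # health outcome

def slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w, ddof=1)

b_naive = slope(w, y)                    # attenuated towards zero

# Simulation step: add extra error at multiples lam of the error
# variance and average the refitted slope over B replicates
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 50
means = [np.mean([slope(w + rng.normal(0.0, np.sqrt(lam) * sig_u, n), y)
                  for _ in range(B)]) for lam in lams]

# Extrapolation step: quadratic fit in lam, evaluated at lam = -1
coef = np.polyfit(lams, means, 2)
b_simex = np.polyval(coef, -1.0)

print(f"naive {b_naive:.3f}, SIMEX {b_simex:.3f}, truth {beta}")
```

The quadratic extrapolant is only an approximation to the true attenuation curve, which is why SIMEX removes most but not all of the bias; the paper's spatial variant replaces the independent added noise with spatially structured perturbations.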

  12. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    PubMed Central

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455

  13. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    PubMed

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  14. The Midlatitude Continental Convective Clouds Experiment (MC3E) sounding network: operations, processing and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, M. P.; Toto, T.; Troyan, D.

    2015-01-01

The Midlatitude Continental Convective Clouds Experiment (MC3E) took place during the spring of 2011 centered in north-central Oklahoma, USA. The main goal of this field campaign was to capture the dynamical and microphysical characteristics of precipitating convective systems in the US Central Plains. A major component of the campaign was a six-site radiosonde array designed to capture the large-scale variability of the atmospheric state with the intent of deriving model forcing data sets. Over the course of the 46-day MC3E campaign, a total of 1362 radiosondes were launched from the enhanced sonde network. This manuscript provides details on the instrumentation used as part of the sounding array, the data processing activities including quality checks and humidity bias corrections, and an analysis of the impacts of bias correction and algorithm assumptions on the determination of convective levels and indices. It is found that corrections for known radiosonde humidity biases and assumptions regarding the characteristics of the surface convective parcel result in significant differences in the derived values of convective levels and indices in many soundings. In addition, the impact of including the humidity corrections and quality controls on the thermodynamic profiles that are used in the derivation of a large-scale model forcing data set is investigated. The results show a significant impact on the derived large-scale vertical velocity field, illustrating the importance of addressing these humidity biases.

  15. The Midlatitude Continental Convective Clouds Experiment (MC3E) sounding network: operations, processing and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, M. P.; Toto, T.; Troyan, D.

    2015-01-27

The Midlatitude Continental Convective Clouds Experiment (MC3E) took place during the spring of 2011 centered in north-central Oklahoma, USA. The main goal of this field campaign was to capture the dynamical and microphysical characteristics of precipitating convective systems in the US Central Plains. A major component of the campaign was a six-site radiosonde array designed to capture the large-scale variability of the atmospheric state with the intent of deriving model forcing data sets. Over the course of the 46-day MC3E campaign, a total of 1362 radiosondes were launched from the enhanced sonde network. This manuscript provides details on the instrumentation used as part of the sounding array, the data processing activities including quality checks and humidity bias corrections, and an analysis of the impacts of bias correction and algorithm assumptions on the determination of convective levels and indices. It is found that corrections for known radiosonde humidity biases and assumptions regarding the characteristics of the surface convective parcel result in significant differences in the derived values of convective levels and indices in many soundings. In addition, the impact of including the humidity corrections and quality controls on the thermodynamic profiles that are used in the derivation of a large-scale model forcing data set is investigated. The results show a significant impact on the derived large-scale vertical velocity field, illustrating the importance of addressing these humidity biases.

  16. A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.

    PubMed

    Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping

    2017-03-01

Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods, which require bias-corrected MRI, we present a high-order, L0-regularized variational model for bias correction and brain extraction. The model is composed of a data fitting term, a piecewise constant regularization and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN on a number of metrics. With its high accuracy and efficiency, the proposed method can facilitate automatic processing of large-scale brain studies.

  17. Bias-correction and Spatial Disaggregation for Climate Change Impact Assessments at a basin scale

    NASA Astrophysics Data System (ADS)

    Nyunt, Cho; Koike, Toshio; Yamamoto, Akio; Nemoto, Toshihoro; Kitsuregawa, Masaru

    2013-04-01

    Basin-scale climate change impact studies mainly rely on general circulation models (GCMs) and the related emission scenarios. Realistic and reliable GCM data are crucial for national-scale or basin-scale impact and vulnerability assessments aimed at building a safe society under climate change. However, GCMs fail to simulate regional climate features due to imprecise parameterization schemes in atmospheric physics and their coarse resolution. This study describes how to exclude unsatisfactory GCMs with respect to the focused basin, how to minimize the biases of GCM precipitation through statistical bias correction, and how to apply a spatial disaggregation scheme, a kind of downscaling, within a basin. GCM rejection uses the regional climate features of the seasonal evolution as a benchmark and mainly depends on the spatial correlation and root mean square error of precipitation and atmospheric variables over the target region. The Global Precipitation Climatology Project (GPCP) and the Japanese 25-year Reanalysis Project (JRA-25) are used as references in evaluating the spatial pattern and error of each GCM. The statistical bias-correction scheme addresses three main flaws of GCM precipitation: too many low-intensity drizzle days with no dry days, underestimation of heavy rainfall, and misrepresented inter-annual variability of the local climate. Biases in heavy rainfall are corrected by fitting a generalized Pareto distribution (GPD) to a peak-over-threshold series. The rain-day frequency error is fixed by rank-order statistics, and the seasonal variation problem is addressed by fitting a gamma distribution in each month to the in-situ stations and the corresponding GCM grids. By applying the proposed bias-correction technique to all in-situ stations and their respective GCM grids, an easy and effective downscaling process for impact studies at the basin scale is accomplished. 
The applicability of the proposed method has been examined in basins across various climate regions worldwide, and the scheme controls the biases well in all of them. The bias-corrected and downscaled GCM precipitation is then ready for use in the Water and Energy Budget based Distributed Hydrological Model (WEB-DHM) to analyse the streamflow change or water availability of a target basin under climate change in the near future. Furthermore, it can support interdisciplinary studies of drought, flood, food, health and other impacts. In summary, an effective and comprehensive statistical bias-correction method was established to bridge the gap from GCM scale to basin scale without difficulty. This gap filling also supports sound river-management decisions by providing more reliable information for building a resilient society.
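The correction scheme described above combines a rank-order fix for the drizzle/dry-day frequency, monthly gamma-distribution mapping, and a GPD tail for heavy rainfall. The first two components can be sketched in Python as follows; this is a hypothetical minimal illustration (the GPD tail correction is omitted for brevity), and all function and variable names are ours, not the authors'.

```python
import numpy as np
from scipy import stats

def gamma_quantile_map(gcm, obs, wet_threshold=0.1):
    """Sketch of a two-part precipitation bias correction.

    1) Drizzle fix: choose a GCM threshold so the simulated wet-day
       frequency matches the observed one (rank-order reasoning).
    2) Wet-day amounts: gamma-to-gamma quantile mapping,
       x_corrected = F_obs^-1(F_gcm(x)).
    """
    gcm = np.asarray(gcm, float)
    obs = np.asarray(obs, float)
    # 1) match wet-day frequency
    obs_wet_frac = np.mean(obs > wet_threshold)
    gcm_thresh = np.quantile(gcm, 1.0 - obs_wet_frac)
    wet = gcm > gcm_thresh
    # 2) fit gamma distributions to wet-day amounts (location fixed at 0)
    a_o, _, s_o = stats.gamma.fit(obs[obs > wet_threshold], floc=0)
    a_g, _, s_g = stats.gamma.fit(gcm[wet], floc=0)
    # 3) quantile mapping for wet days, zeros elsewhere
    corrected = np.zeros_like(gcm)
    p = stats.gamma.cdf(gcm[wet], a_g, scale=s_g)
    corrected[wet] = stats.gamma.ppf(np.clip(p, 1e-9, 1 - 1e-9), a_o, scale=s_o)
    return corrected

# synthetic demo: drizzly GCM vs. an observed series with real dry days
rng = np.random.default_rng(0)
obs = np.where(rng.random(5000) < 0.4, rng.gamma(2.0, 5.0, 5000), 0.0)
gcm = rng.gamma(0.8, 2.0, 5000)   # too many drizzle days, weak extremes
corrected = gamma_quantile_map(gcm, obs)
```

In an operational version each calendar month would be corrected separately, with a GPD refit above a high threshold for the heavy-rainfall tail.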

  18. A novel method for correcting scanline-observational bias of discontinuity orientation

    PubMed Central

    Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong

    2016-01-01

    Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249

  19. Robust mislabel logistic regression without modeling mislabel probabilities.

    PubMed

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term; that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
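The automatic down-weighting idea can be illustrated by fitting logistic regression through a γ-cross-entropy objective (the Fujisawa-Eguchi style of minimum γ-divergence estimation): observations assigned small model probability, such as suspected mislabels, contribute little to the fit. This is our own minimal, hypothetical sketch, not the authors' code or their exact estimating equation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def gamma_logistic_fit(X, y, gamma=0.5):
    """Fit logistic regression by minimizing a gamma-cross-entropy loss.

    Sketch only: the per-observation contribution f(y_i|x_i)^gamma is
    normalized by the label-sum term, which is what down-weights points
    the model finds implausible (e.g. mislabels) without modeling the
    mislabel mechanism itself.
    """
    X = np.column_stack([np.ones(len(X)), X])  # add intercept

    def loss(beta):
        p = expit(X @ beta)
        f = np.where(y == 1, p, 1 - p)                  # f(y_i | x_i)
        norm = p**(1 + gamma) + (1 - p)**(1 + gamma)    # sum over both labels
        return -np.mean(f**gamma / norm**(gamma / (1 + gamma))) / gamma

    res = minimize(loss, np.zeros(X.shape[1]), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})
    return res.x

# demo: strong positive effect, with some clear positives flipped to 0
rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
y = (rng.random(n) < expit(3.0 * x)).astype(float)
flip = (x > 1.2) & (rng.random(n) < 0.5)   # mislabel some obvious positives
y[flip] = 0.0
beta_hat = gamma_logistic_fit(x.reshape(-1, 1), y, gamma=0.5)
```

Despite the mislabels, the fitted slope should remain strongly positive; a plain maximum-likelihood fit would be pulled harder toward zero by the flipped points.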

  20. Model development for MODIS thermal band electronic cross-talk

    NASA Astrophysics Data System (ADS)

    Chang, Tiejun; Wu, Aisheng; Geng, Xu; Li, Yonghong; Brinkmann, Jake; Keller, Graziela; Xiong, Xiaoxiong (Jack)

    2016-10-01

    The MODerate-resolution Imaging Spectroradiometer (MODIS) has 36 bands, among them 16 thermal emissive bands covering wavelengths from 3.8 to 14.4 μm. After 16 years of on-orbit operation, the electronic crosstalk of a few Terra MODIS thermal emissive bands has developed substantial issues, causing biases in the EV brightness temperature measurements and surface feature contamination. The crosstalk effects on band 27, with center wavelength at 6.7 μm, and band 29, at 8.5 μm, have increased significantly in recent years, affecting downstream products such as water vapor and cloud mask. The crosstalk can be observed from the nearly monthly scheduled lunar measurements, from which the crosstalk coefficients can be derived. However, most MODIS thermal bands saturate at lunar surface temperatures, and the development of an alternative approach is very helpful for verification. In this work, a physical model was developed to assess the crosstalk impact on calibration as well as on Earth view brightness temperature retrieval. This model was applied empirically to Terra MODIS band 29 for correction of Earth brightness temperature measurements. The model development takes the detector nonlinear response into account. The impacts of the electronic crosstalk are assessed in two steps. The first step determines the impact on calibration using the on-board blackbody (BB): due to the detector nonlinear response and the large background signal, both linear and nonlinear calibration coefficients are affected by the crosstalk from the sending bands, and the crosstalk impact on the calibration coefficients is calculated. The second step calculates the effects on the Earth view brightness temperature retrieval, which include both the affected calibration coefficients and the contamination of the Earth view measurements themselves. The model links the measurement bias with the crosstalk coefficients, the detector nonlinearity, and the ratio of Earth measurements between the sending and receiving bands. 
The correction of the electronic crosstalk can be implemented empirically from the processed bias at different brightness temperatures, through two approaches. In the first, as in the routine calibration assessment for thermal infrared bands, trending over selected Earth scenes is processed for all the detectors in a band and the band-averaged bias is derived for a given period; the correction of an affected band is then made by regressing the model against the band-averaged bias, after which corrections for detector-to-detector differences are applied. The second approach requires trending for individual detectors, and the bias for each detector is used for regression with the model. A test using the first approach was made for Terra MODIS band 29, with the biases derived from long-term trending of sea surface temperature and Dome-C surface temperature.
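The first-order effect the model describes, contamination of a receiving band by coefficient-weighted signals from sending bands, suggests an equally simple first-order removal. A hypothetical linear sketch that deliberately ignores the propagation through the nonlinear calibration coefficients discussed above:

```python
import numpy as np

def correct_crosstalk(dn_recv, dn_send, coeffs):
    """First-order electronic crosstalk removal.

    dn_recv : (n,) digital numbers of the receiving band
    dn_send : (k, n) digital numbers of the k sending bands
    coeffs  : (k,) crosstalk coefficients (e.g. derived from lunar views)

    dn_true ~= dn_recv - sum_k c_k * dn_send_k   (linear approximation)
    """
    dn_send = np.atleast_2d(np.asarray(dn_send, float))
    coeffs = np.asarray(coeffs, float)
    return np.asarray(dn_recv, float) - coeffs @ dn_send

# two samples, two sending bands with small leakage coefficients
out = correct_crosstalk([10.0, 20.0], [[1.0, 2.0], [3.0, 4.0]], [0.1, 0.01])
```

A full implementation would then re-run the nonlinear calibration with crosstalk-adjusted blackbody measurements, which is the part the paper's model handles.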

  1. A new approach to correct for absorbing aerosols in OMI UV

    NASA Astrophysics Data System (ADS)

    Arola, A.; Kazadzis, S.; Lindfors, A.; Krotkov, N.; Kujanpää, J.; Tamminen, J.; Bais, A.; di Sarra, A.; Villaplana, J. M.; Brogniez, C.; Siani, A. M.; Janouch, M.; Weihs, P.; Webb, A.; Koskela, T.; Kouremeti, N.; Meloni, D.; Buchard, V.; Auriol, F.; Ialongo, I.; Staneck, M.; Simic, S.; Smedley, A.; Kinne, S.

    2009-11-01

    Several validation studies of surface UV irradiance based on the Ozone Monitoring Instrument (OMI) satellite data have shown a high correlation with ground-based measurements but a positive bias in many locations. The main part of the bias can be attributed to the boundary layer aerosol absorption that is not accounted for in the current satellite UV algorithms. To correct for this shortfall, a post-correction procedure was applied, based on global climatological fields of aerosol absorption optical depth. These fields were obtained by using global aerosol optical depth and aerosol single scattering albedo data assembled by combining global aerosol model data and ground-based aerosol measurements from AERONET. The resulting improvements in the satellite-based surface UV irradiance were evaluated by comparing satellite and ground-based spectral irradiances at various European UV monitoring sites. The results generally showed a significantly reduced bias by 5-20%, a lower variability, and an unchanged, high correlation coefficient.
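The post-correction applied above is multiplicative in form. As a hedged sketch: the factor 1/(1 + k·AAOD) with k ≈ 3, and the function name, are our assumptions about the shape of such a climatological correction, not the published algorithm, which derives the attenuation from the aerosol absorption optical depth fields via radiative transfer.

```python
def aerosol_corrected_uv(uv_satellite, aaod, k=3.0):
    """Reduce satellite-derived surface UV for absorbing boundary-layer
    aerosols not seen by the retrieval.

    uv_satellite : uncorrected surface UV irradiance
    aaod         : climatological aerosol absorption optical depth
    k            : assumed sensitivity parameter (k ~= 3 is illustrative)
    """
    return uv_satellite / (1.0 + k * aaod)

# an AAOD of 0.05 trims the satellite estimate by roughly 13%
corrected = aerosol_corrected_uv(100.0, 0.05)
```

The correction grows with AAOD, consistent with the larger biases reported at polluted sites.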

  2. Correcting bias in the rational polynomial coefficients of satellite imagery using thin-plate smoothing splines

    NASA Astrophysics Data System (ADS)

    Shen, Xiang; Liu, Bin; Li, Qing-Quan

    2017-03-01

    The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. 
In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates that the new method can be more effective at removing systematic biases in vendor-supplied RPCs.
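The core of the TPS correction can be sketched with SciPy's `RBFInterpolator`, which implements thin-plate-spline interpolation with an affine polynomial part, fitted here to the GCP residuals in image space. The residual-field formulation and all names are our assumptions for illustration, not the paper's exact model.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_rpc_bias_correction(img_xy_rpc, img_xy_gcp, smoothing=0.0):
    """Fit a thin-plate-spline bias correction in image space.

    img_xy_rpc : (n, 2) image coordinates of GCPs projected by vendor RPCs
    img_xy_gcp : (n, 2) measured image coordinates of the same GCPs
    Returns a callable mapping RPC-projected coordinates to corrected ones.
    """
    img_xy_rpc = np.asarray(img_xy_rpc, float)
    residual = np.asarray(img_xy_gcp, float) - img_xy_rpc
    tps = RBFInterpolator(img_xy_rpc, residual,
                          kernel="thin_plate_spline", smoothing=smoothing)

    def correct(xy):
        xy = np.asarray(xy, float)
        return xy + tps(xy)

    return correct

# synthetic check: an affine bias field is recovered exactly at the GCPs
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1000.0, (30, 2))
truth = pts + np.column_stack([2.0 + 0.01 * pts[:, 0],
                               -1.0 + 0.005 * pts[:, 1]])
correct = fit_rpc_bias_correction(pts, truth)
```

With `smoothing=0` the spline interpolates the GCP residuals exactly; a small positive `smoothing` trades that for robustness to GCP measurement noise, mirroring the interpolation-versus-regularization choice discussed in the abstract.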

  3. Improving evapotranspiration processes in distributed hydrological models using Remote Sensing derived ET products.

    NASA Astrophysics Data System (ADS)

    Abitew, T. A.; van Griensven, A.; Bauwens, W.

    2015-12-01

    Evapotranspiration (ET) is the main process in hydrology (on average around 60% of the water balance), though it has not received as much attention in the evaluation and calibration of hydrological models. In this study, Remote Sensing (RS) derived ET is used to improve the spatially distributed representation of ET in SWAT model applications in the upper Mara basin (Kenya) and the Blue Nile basin (Ethiopia). The RS-derived ET data are obtained from recently compiled global datasets (continuous monthly data at 1 km resolution from the MOD16NBI, SSEBop, ALEXI and CMRSET models) and from regionally applied Energy Balance Models (for several cloud-free days). The RS ET data are used in three ways: 1) to evaluate spatially distributed evapotranspiration model results; 2) to calibrate the evapotranspiration processes in the hydrological model; and 3) to bias-correct the evapotranspiration in the hydrological model during simulation, after changing the SWAT code. An inter-comparison of the RS ET products shows that at present there is a significant bias between them, but at the same time an agreement on the spatial variability of ET. The ensemble mean of the different ET products seems the most realistic estimation and was used further in this study. The results show that: 1) the spatially mapped evapotranspiration of the hydrological models shows clear differences when compared to RS-derived evapotranspiration (low correlations); 
in particular, evapotranspiration in forested areas is strongly underestimated compared to other land covers; 2) calibration improves the correlations between the RS and hydrological model results to some extent; and 3) bias corrections are efficient in producing (seasonal or annual) evapotranspiration maps from hydrological models that are very similar to the patterns obtained from RS data. Though the bias correction is very efficient, it is advised to improve the model results by better representing the ET processes through improved plant/crop computations, improved agricultural management practices, or improved meteorological data.

  4. Using historical and projected future climate model simulations as drivers of agricultural and biological models (Invited)

    NASA Astrophysics Data System (ADS)

    Stefanova, L. B.

    2013-12-01

    Climate model evaluation is frequently performed as a first step in analyzing climate change simulations. Atmospheric scientists are accustomed to evaluating climate models through the assessment of model climatology and biases, the models' representation of large-scale modes of variability (such as ENSO, PDO, AMO, etc.) and the relationship between these modes and local variability (e.g. the connection between ENSO and wintertime precipitation in the Southeast US). While these provide valuable information about the fidelity of historical and projected climate model simulations from an atmospheric scientist's point of view, the application of climate model data to fields such as agriculture, ecology and biology may require additional analyses focused on the particular application's requirements and sensitivities. Typically, historical climate simulations are used to determine a mapping between the model and observed climate, either through a simple (additive for temperature or multiplicative for precipitation) or a more sophisticated (such as quantile matching) bias correction on a monthly or seasonal time scale. Plants, animals and humans, however, are not directly affected by monthly or seasonal means. To assess the impact of projected climate change on living organisms and related industries (e.g. agriculture, forestry, conservation, utilities, etc.), derivative measures such as heating degree-days (HDD), cooling degree-days (CDD), growing degree-days (GDD), accumulated chill hours (ACH), and wet season onset (WSO) and duration (WSD), among others, are frequently useful. We will present a comparison of the projected changes in such derivative measures calculated by applying: (a) the traditional temperature/precipitation bias correction described above versus (b) a bias correction based on the mapping between the historical model and observed derivative measures themselves. 
In addition, we will present and discuss examples of various application-based climate model evaluations, such as: (a) agricultural crop yield estimates and (b) species population viability estimates modeled using observed climate data vs. historical climate simulations.
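Derivative measures such as HDD, CDD and GDD are simple accumulations over daily mean temperatures. A minimal sketch; the base temperatures (18 °C for HDD/CDD, 10 °C for GDD) are common illustrative conventions, not values taken from this talk:

```python
import numpy as np

def degree_days(tmean, base=18.0, gdd_base=10.0):
    """Heating, cooling and growing degree-days from daily mean temperature.

    HDD accumulates how far each day falls below `base`, CDD how far it
    rises above it, and GDD how far it rises above `gdd_base`.
    """
    t = np.asarray(tmean, float)
    hdd = np.sum(np.maximum(base - t, 0.0))
    cdd = np.sum(np.maximum(t - base, 0.0))
    gdd = np.sum(np.maximum(t - gdd_base, 0.0))
    return hdd, cdd, gdd

# two example days: one slightly cool (16 C), one warm (20 C)
hdd, cdd, gdd = degree_days([16.0, 20.0])
```

Bias-correcting such accumulations directly, option (b) in the abstract, can give different projected changes than first bias-correcting daily temperature and then accumulating, because the measures are nonlinear in temperature.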

  5. Factors controlling the Indian summer monsoon onset in a coupled model

    NASA Astrophysics Data System (ADS)

    Prodhomme, Chloé; Terray, Pascal; Masson, Sébastien; Izumo, Takeshi

    2013-04-01

    The observed Indian Summer Monsoon (ISM) onset occurs between 30 May and 2 June, with a standard deviation of 8 to 9 days, according to the estimates. The relationship between interannual variability of the ISM onset and SSTs (Sea Surface Temperature) remains controversial. The role of Indian Ocean SSTs remains unclear: some studies have shown a driving role, while others suggest a passive relation between Indian Ocean SSTs and the ISM. The intrinsic impact of ENSO (El Niño-Southern Oscillation) is also difficult to estimate from observations alone. Finally, the predictability of the ISM onset remains drastically limited by the inability of both forced and coupled models to reproduce a realistic onset date. To measure the ISM onset objectively, different methods have been developed based on rainfall or dynamical indices (Ananthakrishnan and Soman, 1988; Wang and Ho, 2002; Joseph et al., 2006). In this study we use the Tropospheric Temperature Gradient (TTG), the difference between the tropospheric temperature in a northern and a southern box over the Indian region (Xavier et al., 2007). This index measures the dynamical strength of the monsoon and provides a stable and precise onset date consistent with rainfall estimates. In the SINTEX-F2 coupled model, the ISM onset measured with the TTG is delayed by approximately 10 days, whereas it is 6 days early in the atmosphere-only (ECHAM) model. The 16-day lag between the atmosphere-only and coupled runs suggests a crucial role of the coupling, and especially of SST biases, in the delayed onset. With the help of several sensitivity experiments, this study tries to identify the key regions influencing the ISM onset. Many studies have shown a strong impact of Arabian Sea and Indian Ocean SSTs on the ISM onset. 
Nevertheless, correcting the SSTs, based on AVHRR, in the tropical Indian Ocean only slightly corrects the delayed onset in the coupled model, which suggests an impact of SSTs in other regions on the ISM onset. During May and June, the main tropical SST biases in the coupled model are a strong warm bias in the Atlantic Ocean and a warm bias in the tropical Pacific Ocean, except along the equator around 140°W-100°W, where there is a cold tongue bias. The correction of the warm bias in the Atlantic Ocean slightly improves the onset date. Conversely, the correction of SST biases in the tropical and equatorial Pacific Oceans advances the onset date by 12 and 10 days, respectively, compared to the control coupled run. This result suggests that, at least in this model, the ISM onset is mainly controlled by Pacific Ocean SSTs. Even if ENSO has an impact on the onset date, it does not explain the delay, which is related to the biased SST mean state in the Pacific Ocean.

  6. The Impact of Atmospheric Modeling Errors on GRACE Estimates of Mass Loss in Greenland and Antarctica

    NASA Astrophysics Data System (ADS)

    Hardy, Ryan A.; Nerem, R. Steven; Wiese, David N.

    2017-12-01

    Systematic errors in Gravity Recovery and Climate Experiment (GRACE) monthly mass estimates over the Greenland and Antarctic ice sheets can originate from low-frequency biases in the European Centre for Medium-Range Weather Forecasts (ECMWF) Operational Analysis model, the atmospheric component of the Atmospheric and Ocean Dealiasing Level-1B (AOD1B) product used to forward model atmospheric and ocean gravity signals in GRACE processing. These biases are revealed in differences in surface pressure between the ECMWF Operational Analysis model, state-of-the-art reanalyses, and in situ surface pressure measurements. While some of these errors are attributable to well-understood discrete model changes and have published corrections, we examine errors these corrections do not address. We compare multiple models and in situ data in Antarctica and Greenland to determine which models have the most skill relative to monthly averages of the dealiasing model. We also evaluate linear combinations of these models and synthetic pressure fields generated from direct interpolation of pressure observations. These models consistently reveal drifts in the dealiasing model that cause the acceleration of Antarctica's mass loss between April 2002 and August 2016 to be underestimated by approximately 4 Gt yr-2. We find similar results after attempting to solve the inverse problem, recovering pressure biases directly from the GRACE Jet Propulsion Laboratory RL05.1 M mascon solutions. Over Greenland, we find a 2 Gt yr-1 bias in mass trend. While our analysis focuses on errors in Release 05 of AOD1B, we also evaluate the new AOD1B RL06 product. We find that this new product mitigates some of the aforementioned biases.

  7. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield

    PubMed Central

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-01-01

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two set approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723
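The second step, quantile mapping of the satellite data within one hydroclimatic area, can be sketched with a generic empirical formulation; this is a textbook version for illustration, not the authors' exact implementation.

```python
import numpy as np

def quantile_map(sat, gauge, values=None):
    """Empirical quantile mapping within one hydroclimatic area.

    sat, gauge : calibration samples pooled over the area (satellite
                 pixels and rain gauges with the same temporal behavior)
    values     : data to correct (defaults to the satellite sample)

    Each value is assigned its empirical non-exceedance probability in
    the satellite sample, then mapped through the gauge quantile function.
    """
    sat = np.sort(np.asarray(sat, float))
    gauge = np.asarray(gauge, float)
    if values is None:
        values = sat
    probs = np.searchsorted(sat, np.asarray(values, float),
                            side="right") / len(sat)
    return np.quantile(gauge, np.clip(probs, 0.0, 1.0))

# demo: satellite sample is a biased, stretched copy of the gauge sample
gauge = np.arange(1.0, 101.0)
sat = 2.0 * gauge + 5.0
corrected = quantile_map(sat, gauge, values=sat)
```

Restricting `sat` and `gauge` to a single hydroclimatic area is what keeps the mapping from mixing rainfall regimes with different temporal characteristics, which is the point of the classification step.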

  8. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    PubMed

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two set approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.

  9. On the distortion of elevation dependent warming signals by quantile mapping

    NASA Astrophysics Data System (ADS)

    Jury, Martin W.; Mendlik, Thomas; Maraun, Douglas

    2017-04-01

    Elevation dependent warming (EDW), the amplification of warming under climate change with elevation, is likely to accelerate changes in e.g. cryospheric and hydrological systems. Responsible for EDW is a mixture of processes including the snow albedo feedback, cloud formation and the location of aerosols. The degree to which these processes are incorporated varies across state-of-the-art climate models. In a recent study we were preparing bias-corrected model output of CMIP5 GCMs and CORDEX RCMs over the Himalayan region for the glacier modelling community. In a first attempt we used quantile mapping (QM) to generate this data. A beforehand model evaluation showed that more than two-thirds of the 49 included climate models were able to reproduce positive trend differences between areas of higher and lower elevations in winter, clearly visible in all five of the observational datasets used. Regrettably, we noticed that the height-dependent trend signals provided by the models were distorted, most of the time in the direction of less EDW, sometimes even reversing EDW signals present in the models before the bias correction. As a consequence, we refrained from using quantile mapping for our task, since EDW is an important factor influencing the climate at high altitudes in the nearer and more distant future, and used a climate-change-signal preserving bias correction approach instead. Here we present our findings on the distortion of the EDW temperature change signal by QM and discuss the influence of QM on different statistical properties as well as their modifications.

  10. Travel cost demand model based river recreation benefit estimates with on-site and household surveys: Comparative results and a correction procedure

    NASA Astrophysics Data System (ADS)

    Loomis, John

    2003-04-01

    Past recreation studies have noted that on-site or visitor intercept surveys are subject to over-sampling of avid users (i.e., endogenous stratification) and have offered econometric solutions to correct for this. However, past papers do not estimate the empirical magnitude of the bias in benefit estimates with a real data set, nor do they compare the corrected estimates to benefit estimates derived from a population sample. This paper empirically examines the magnitude of the recreation benefits-per-trip bias by comparing estimates from an on-site river visitor intercept survey to a household survey. The difference in average benefits is quite large, with the on-site visitor survey yielding $24 per day trip, while the household survey yields $9.67 per day trip. A simple econometric correction for endogenous stratification in our count data model lowers the benefit estimate to $9.60 per day trip, a mean value nearly identical and not statistically different from the household survey estimate.
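The "simple econometric correction" has a famously compact form in a Poisson trip-demand model: under endogenous stratification and truncation, the on-site likelihood reduces to an ordinary Poisson regression on (trips − 1), a result usually attributed to Shaw (1988). A hypothetical minimal sketch of that idea (not the paper's actual estimation code):

```python
import numpy as np
from scipy.optimize import minimize

def fit_poisson(y, X):
    """Poisson regression by maximum likelihood (log link, with intercept)."""
    X = np.column_stack([np.ones(len(X)), X])

    def nll(b):
        eta = X @ b
        return np.sum(np.exp(eta) - y * eta)

    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

def fit_onsite_corrected(trips, X):
    """Correct endogenous stratification in an on-site (intercept) sample:
    fit an ordinary Poisson model to (trips - 1)."""
    return fit_poisson(np.asarray(trips, float) - 1.0, X)

# demo: on-site trips are 1 + Poisson(lambda), lambda = exp(1.0 - 0.2*cost),
# which is exactly the distribution an avidity-weighted intercept sample gives
rng = np.random.default_rng(0)
cost = rng.uniform(0.0, 10.0, 2000)
trips_onsite = 1 + rng.poisson(np.exp(1.0 - 0.2 * cost))
beta_hat = fit_onsite_corrected(trips_onsite, cost)
```

In the travel cost framework, consumer surplus per trip is then −1/β_cost (about 5 in this synthetic example), which is how a corrected slope translates into the per-day-trip benefit figures the abstract compares.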

  11. Generation of future potential scenarios in an Alpine Catchment by applying bias-correction techniques, delta-change approaches and stochastic Weather Generators at different spatial scale. Analysis of their influence on basic and drought statistics.

    NASA Astrophysics Data System (ADS)

    Collados-Lara, Antonio-Juan; Pulido-Velazquez, David; Pardo-Iguzquiza, Eulogio

    2017-04-01

    Assessing the impacts of potential future climate change scenarios on precipitation and temperature is essential to design adaptive strategies in water resources systems. The objective of this work is to analyze the possibilities of different statistical downscaling methods to generate potential future scenarios in an Alpine catchment from historical data and the available climate model simulations performed in the frame of the EU CORDEX project. The initial information employed to define these downscaling approaches consists of the historical climatic data (taken from the Spain02 project for the period 1971-2000, with a spatial resolution of 12.5 km) and the future series provided by climate models for the horizon period 2071-2100. We have used information from nine climate model simulations (obtained from five different Regional Climate Models (RCMs) nested within four different Global Climate Models (GCMs)) from the European CORDEX project. In our application we have focused on the Representative Concentration Pathway (RCP) 8.5 emissions scenario, the most unfavorable scenario considered in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). For each RCM we have generated future climate series for the period 2071-2100 by applying two different approaches, bias correction and delta change, and five different transformation techniques (first-moment correction; first- and second-moment correction; regression functions; quantile mapping using distribution-derived transformation; and quantile mapping using empirical quantiles) for both of them. Ensembles of the obtained series were proposed to obtain more representative potential future climate scenarios to be employed in studying potential impacts. 
In this work we propose a non-equifeasible combination of the future series, giving more weight to those coming from models (delta change approaches), or from combinations of models and techniques, that better approximate the basic and drought statistics of the historical data. A multi-objective analysis using basic statistics (mean, standard deviation and asymmetry coefficient) and drought statistics (duration, magnitude and intensity) has been performed to identify which models are better in terms of goodness of fit in reproducing the historical series. The drought statistics have been obtained from the Standard Precipitation Index (SPI) series using the theory of runs. This analysis allows us to discriminate the best RCM and, for the bias-correction method, the best combination of model and correction technique. We have also analyzed the possibilities of using different stochastic weather generators to approximate the basic and drought statistics of the historical series. These analyses have been performed in our case study in both a lumped and a distributed way, in order to assess sensitivity to the spatial scale. The statistics of the future temperature series obtained with the different ensemble options are quite homogeneous, but the precipitation shows a higher sensitivity to the adopted method and spatial scale. The global increments in the mean temperature values are 31.79 %, 31.79 %, 31.03 % and 31.74 % for the distributed bias-correction, distributed delta-change, lumped bias-correction and lumped delta-change ensembles respectively, and in the precipitation they are -25.48 %, -28.49 %, -26.42 % and -27.35 % respectively. Acknowledgments: This research work has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank the Spain02 and CORDEX projects for the data provided for this study, and the R package qmap.

  12. Practical Bias Correction in Aerial Surveys of Large Mammals: Validation of Hybrid Double-Observer with Sightability Method against Known Abundance of Feral Horse (Equus caballus) Populations

    PubMed Central

    2016-01-01

    Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models.
Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs. PMID:27139732

  13. Practical Bias Correction in Aerial Surveys of Large Mammals: Validation of Hybrid Double-Observer with Sightability Method against Known Abundance of Feral Horse (Equus caballus) Populations.

    PubMed

    Lubow, Bruce C; Ransom, Jason I

    2016-01-01

    Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models.
Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs.
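
A minimal sketch of the sightability side of such corrections is a Horvitz-Thompson-style estimator in which each detected group is weighted by the inverse of its modelled detection probability. The logistic sightability model and synthetic group sizes below are assumptions for illustration, not the authors' fitted covariate model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical survey: 120 horse groups of known size.
true_sizes = rng.integers(1, 12, size=120)

# Assumed sightability model: larger groups are easier to detect.
def p_sight(size):
    return 1.0 / (1.0 + np.exp(-(0.4 * size - 0.8)))

detected = rng.random(true_sizes.size) < p_sight(true_sizes)
seen = true_sizes[detected]

# Horvitz-Thompson-style correction: weight each detected group by 1/p.
naive = seen.sum()
n_hat = np.sum(seen / p_sight(seen))
print(naive, round(float(n_hat)), true_sizes.sum())
```

The naive count always underestimates when detection is imperfect, while the weighted estimate is unbiased whenever the sightability model is correct.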

  14. Comparative assessment of several post-processing methods for correcting evapotranspiration forecasts derived from TIGGE datasets.

    NASA Astrophysics Data System (ADS)

    Tian, D.; Medina, H.

    2017-12-01

    Post-processing of medium range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential of improving the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005) and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, and computed with the FAO-56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly far more effective than using raw multimodel forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
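
The "simple bias correction" among the compared techniques can be sketched as subtracting the mean training-period forecast error; the synthetic ETo series and the assumed +0.9 mm/day bias below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily ETo (mm/day): observations and a biased NWP-based forecast.
obs = 4.0 + 1.5 * np.sin(np.linspace(0, 6.28, 365)) + rng.normal(0, 0.4, 365)
fcst = obs + 0.9 + rng.normal(0, 0.5, 365)   # systematic +0.9 mm/day bias

train, test = slice(0, 300), slice(300, 365)

# Simple bias correction: subtract the mean training-period error.
bias = np.mean(fcst[train] - obs[train])
corrected = fcst[test] - bias

mae_raw = np.mean(np.abs(fcst[test] - obs[test]))
mae_cor = np.mean(np.abs(corrected - obs[test]))
print(round(bias, 2), round(mae_raw, 2), round(mae_cor, 2))
```

More elaborate schemes such as EMOS and BMA replace this single shift with a calibrated predictive distribution.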

  15. Correction factors for self-selection when evaluating screening programmes.

    PubMed

    Spix, Claudia; Berthold, Frank; Hero, Barbara; Michaelis, Jörg; Schilling, Freimut H

    2016-03-01

    In screening programmes there is recognized bias introduced through participant self-selection (the healthy screenee bias). Methods used to evaluate screening programmes include intention-to-screen, per-protocol, and the "post hoc" approach in which, after introducing screening for everyone, the only evaluation option is participants versus non-participants. All methods are prone to bias through self-selection. We present an overview of approaches to correct for this bias. We considered four methods to quantify and correct for self-selection bias. Simple calculations revealed that these corrections are actually all identical, and can be converted into each other. Based on this, correction factors for further situations and measures were derived. The application of these correction factors requires a number of assumptions. Using as an example the German Neuroblastoma Screening Study, no relevant reduction in mortality or stage 4 incidence due to screening was observed. The largest bias (in favour of screening) was observed when comparing participants with non-participants. Correcting for bias is particularly necessary when using the post hoc evaluation approach; however, in this situation not all required data are available. External data or further assumptions may be required for estimation. © The Author(s) 2015.

  16. Quantifying Selection Bias in National Institute of Health Stroke Scale Data Documented in an Acute Stroke Registry.

    PubMed

    Thompson, Michael P; Luo, Zhehui; Gardiner, Joseph; Burke, James F; Nickles, Adrienne; Reeves, Mathew J

    2016-05-01

    As a measure of stroke severity, the National Institutes of Health Stroke Scale (NIHSS) is an important predictor of patient- and hospital-level outcomes, yet is often undocumented. The purpose of this study is to quantify and correct for potential selection bias in observed NIHSS data. Data were obtained from the Michigan Stroke Registry and included 10 262 patients with ischemic stroke aged ≥65 years discharged from 23 hospitals from 2009 to 2012, of whom 74.6% had documented NIHSS scores. We estimated models predicting NIHSS documentation and NIHSS score and used the Heckman selection model to estimate a correlation coefficient (ρ) between the 2 model error terms, which quantifies the degree of selection bias in the documentation of NIHSS. The Heckman model found modest, but significant, selection bias (ρ=0.19; 95% confidence interval: 0.09, 0.29; P<0.001), indicating that as NIHSS score increased (ie, strokes were more severe), the probability of documentation also increased. We also estimated a selection bias-corrected population mean NIHSS score of 4.8, which was substantially lower than the observed mean NIHSS score of 7.4. Evidence of selection bias was also identified using hospital-level analysis, where increased NIHSS documentation was correlated with lower mean NIHSS scores (r=-0.39; P<0.001). We demonstrate modest, but important, selection bias in documented NIHSS data, which are missing more often in patients with less severe stroke. The population mean NIHSS score was overestimated by >2 points, which could significantly alter the risk profile of hospitals treating patients with ischemic stroke and subsequent hospital risk-adjusted outcomes. © 2016 American Heart Association, Inc.
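
The direction of the bias reported here can be reproduced with a few lines of simulation: when documentation probability rises with severity, the observed mean exceeds the population mean. The severity distribution and logistic documentation model below are invented stand-ins (chosen so the population mean is 4.8), not the registry data or the Heckman estimator itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cohort: latent NIHSS-like severity scores (right-skewed,
# population mean = 1.6 * 3.0 = 4.8).
severity = rng.gamma(shape=1.6, scale=3.0, size=100_000)

# Selection mechanism: more severe strokes are more likely to be documented,
# mimicking the positive error correlation (rho) the Heckman model estimates.
p_doc = 1.0 / (1.0 + np.exp(-(-1.2 + 0.2 * severity)))
documented = rng.random(severity.size) < p_doc

print(round(severity.mean(), 2))              # population mean
print(round(severity[documented].mean(), 2))  # observed (documented) mean, inflated
```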

  17. Rate equation model of laser induced bias in uranium isotope ratios measured by resonance ionization mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isselhardt, B. H.; Prussin, S. G.; Savina, M. R.

    2016-01-01

    Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process between uranium atoms and potential isobars without the aid of chemical purification and separation. The use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of the U-235/U-238 ratio to decrease laser-induced isotopic fractionation. In application, isotope standards are used to identify and correct bias in measured isotope ratios, but understanding laser-induced bias from first principles can improve the precision and accuracy of experimental measurements. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variations in laser parameters on the measured isotope ratio. The model uses atomic data and empirical descriptions of laser performance to estimate the laser-induced bias expected in experimental measurements of the U-235/U-238 ratio. Empirical corrections are also included to account for ionization processes that are difficult to calculate from first principles with the available atomic data. Development of this model has highlighted several important considerations for properly interpreting experimental results.

  18. Rate equation model of laser induced bias in uranium isotope ratios measured by resonance ionization mass spectrometry

    DOE PAGES

    Isselhardt, B. H.; Prussin, S. G.; Savina, M. R.; ...

    2015-12-07

    Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process between uranium atoms and potential isobars without the aid of chemical purification and separation. The use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of the 235U/238U ratio to decrease laser-induced isotopic fractionation. In application, isotope standards are used to identify and correct bias in measured isotope ratios, but understanding laser-induced bias from first principles can improve the precision and accuracy of experimental measurements. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variations in laser parameters on the measured isotope ratio. The model uses atomic data and empirical descriptions of laser performance to estimate the laser-induced bias expected in experimental measurements of the 235U/238U ratio. Empirical corrections are also included to account for ionization processes that are difficult to calculate from first principles with the available atomic data. As a result, development of this model has highlighted several important considerations for properly interpreting experimental results.

  19. Mixture models reveal multiple positional bias types in RNA-Seq data and lead to accurate transcript concentration estimates.

    PubMed

    Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat

    2017-05-01

    Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (rd. "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We further observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.
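
The training idea, estimating mixture parameters by Expectation Maximization, can be sketched with a generic two-component Gaussian mixture on synthetic 1-D data; this illustrates EM only, not Mix2's actual positional-bias distributions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 1-D data from two components (a stand-in for position profiles).
x = np.concatenate([rng.normal(0.3, 0.05, 400), rng.normal(0.7, 0.1, 600)])

# Minimal EM for a two-component Gaussian mixture.
w, mu, sd = np.array([0.5, 0.5]), np.array([0.2, 0.8]), np.array([0.2, 0.2])
for _ in range(200):
    # E-step: responsibility of each component for each point.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations.
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print(np.round(w, 2), np.round(mu, 2))
```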

  20. A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging

    PubMed Central

    Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.

    2014-01-01

    Purpose: Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods: Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results: As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion: The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
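
The subtraction-based idea, estimating the noise level from the voxel-wise difference of two independent acquisitions and removing the resulting bias from the squared signal, can be sketched as follows. The Gaussian-magnitude noise model and signal levels are simplifying assumptions for illustration, not the paper's exact treatment of Rician noise or its WLLS fit.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "voxel" signals: one true magnitude observed in two independent,
# equally noisy acquisitions (a simplification of Rician magnitude noise).
true = np.full(50_000, 2.5)   # low-SNR diffusion-weighted signal level
sigma = 1.0
m1 = np.abs(true + rng.normal(0, sigma, true.size))
m2 = np.abs(true + rng.normal(0, sigma, true.size))

# Voxel-wise, subtraction-based noise estimate from the two acquisitions.
sigma2_hat = np.var(m1 - m2) / 2.0

# Squared-signal bias removal: E[M^2] = A^2 + sigma^2 under this noise model.
s2_raw = (m1 * m1 + m2 * m2) / 2.0
s2_cor = s2_raw - sigma2_hat

print(round(float(np.sqrt(s2_raw.mean())), 3),
      round(float(np.sqrt(np.clip(s2_cor, 0, None).mean())), 3))
```

The uncorrected magnitude estimate is inflated by the noise floor, while the corrected one recovers the true level of 2.5 to within sampling error.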

  1. Implementation of Coupled Skin Temperature Analysis and Bias Correction in a Global Atmospheric Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Radakovich, Jon; Bosilovich, M.; Chern, Jiun-dar; daSilva, Arlindo

    2004-01-01

    The NASA/NCAR Finite Volume GCM (fvGCM) with the NCAR CLM (Community Land Model) version 2.0 was integrated into the NASA/GMAO Finite Volume Data Assimilation System (fvDAS). A new method was developed for coupled skin temperature assimilation and bias correction in which the analysis increment and bias correction term are passed into the CLM2 and treated as a forcing term in the solution to the energy balance. For our purposes, the fvDAS CLM2 was run at 1 deg. x 1.25 deg. horizontal resolution with 55 vertical levels. We assimilate the ISCCP-DX (30 km resolution) surface temperature product. The atmospheric analysis was performed 6-hourly, while the skin temperature analysis was performed 3-hourly. The bias correction term, which was updated at the analysis times, was added to the skin temperature tendency equation at every timestep. In this presentation, we focus on the validation of the surface energy budget at the in situ reference sites for the Coordinated Enhanced Observing Period (CEOP). We will concentrate on sites that include independent skin temperature measurements and complete energy budget observations for the month of July 2001. In addition, MODIS skin temperature will be used for validation. Several assimilations were conducted and preliminary results will be presented.

  2. Adaptive correction of ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

    2017-04-01

    Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy.
We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias-correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
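
The simpler of the comparison methods, a sequentially updated bias correction, can be sketched as a scalar Kalman filter that tracks the ensemble-mean bias as observations arrive; the synthetic temperature series and the process/observation noise variances below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic daily 2-meter temperature: truth, plus an ensemble-mean forecast
# with a slowly drifting systematic error.
days = 365
truth = 15 + 8 * np.sin(np.linspace(0, 2 * np.pi, days))
drift = 1.5 + 0.004 * np.arange(days)          # slowly growing bias
fcst = truth + drift + rng.normal(0, 0.8, days)

# Scalar Kalman filter tracking the forecast bias sequentially.
b_hat, p = 0.0, 1.0       # bias estimate and its variance
q, r = 0.01, 0.8 ** 2     # assumed process and observation noise variances
raw_err, cor_err = [], []
for t in range(days):
    cor_err.append(abs((fcst[t] - b_hat) - truth[t]))  # correct, then verify
    raw_err.append(abs(fcst[t] - truth[t]))
    # Update the bias estimate once the observation for day t arrives.
    p += q
    k = p / (p + r)
    b_hat += k * ((fcst[t] - truth[t]) - b_hat)
    p *= 1 - k

print(round(float(np.mean(raw_err)), 2), round(float(np.mean(cor_err)), 2))
```

Because the gain adapts as data accumulate, no long training archive is needed, matching the motivation given above for adaptive methods.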

  3. SU-F-I-80: Correction for Bias in a Channelized Hotelling Model Observer Caused by Temporally Variable Non-Stationary Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favazza, C; Fetterly, K

    2016-06-15

    Purpose: Application of a channelized Hotelling model observer (CHO) over a wide range of x-ray angiography detector target dose (DTD) levels demonstrated substantial bias for conditions yielding low detectability indices (d’), including low DTD and small test objects. The purpose of this work was to develop theory and methods to correct this bias. Methods: A hypothesis was developed wherein the measured detectability index (d’b) for a known test object is positively biased by temporally variable non-stationary noise in the images. Hotelling’s T2 test statistic provided the foundation for a mathematical theory which accounts for independent contributions to the measured d’b value from both the test object (d’o) and non-stationary noise (d’ns). Experimental methods were developed to directly estimate d’o by determining d’ns and subtracting it from d’b, in accordance with the theory. Specifically, d’ns was determined from two sets of images from which the traditional test object was withheld. This method was applied to angiography images with DTD levels in the range 0 to 240 nGy and for disk-shaped iodine-based contrast targets with diameters 0.5 to 4.0 mm. Results: Bias in d’ was evidenced by d’b values which exceeded those expected from a quantum-limited imaging system as object size and DTD decreased. d’ns increased with decreasing DTD, reaching a maximum of 2.6 for DTD = 0. Bias-corrected d’o estimates demonstrated sub-quantum-limited performance of the x-ray angiography system for low DTD. Findings demonstrated that the source of non-stationary noise was detector electronic readout noise. Conclusion: Theory and methods to estimate and correct bias in CHO measurements from temporally variable non-stationary noise were presented. The temporal non-stationary noise was shown to be due to electronic readout noise. This method facilitates accurate estimates of d’ values over a large range of object size and detector target dose.
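
One reading of the correction (an assumption here, since the abstract does not spell out the algebra) is that independent contributions to Hotelling's T2 add in quadrature, so the object term is recovered as d'o = sqrt(d'b^2 - d'ns^2); the numbers below are illustrative, with d'ns = 2.6 matching the maximum reported for DTD = 0.

```python
import numpy as np

def corrected_dprime(d_b, d_ns):
    """Remove the non-stationary-noise contribution, assuming the two
    independent detectability contributions add in quadrature."""
    return np.sqrt(np.maximum(np.asarray(d_b, dtype=float) ** 2 - d_ns ** 2, 0.0))

d_b = np.array([2.7, 3.5, 6.0])   # measured with the test object present
d_ns = 2.6                        # measured with the test object withheld
print(np.round(corrected_dprime(d_b, d_ns), 2))
```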

  4. Variability in Annual and Average Mass Changes in Antarctica from 2004 to 2009 using Satellite Laser Altimetry

    NASA Astrophysics Data System (ADS)

    Babonis, G. S.; Csatho, B. M.; Schenk, A. F.

    2016-12-01

    We present a new record of Antarctic ice thickness changes, reconstructed from ICESat laser altimetry observations from 2004-2009, at over 100,000 locations across the Antarctic Ice Sheet (AIS). This work generates elevation time series at ICESat groundtrack crossover regions on an observation-by-observation basis, with rigorous, quantified error estimates, using the SERAC approach (Schenk and Csatho, 2012). The results include average and annual elevation, volume and mass changes in Antarctica, fully corrected for glacial isostatic adjustment (GIA) and known intercampaign biases, and partitioned into contributions from surficial processes (e.g. firn densification) and ice dynamics. The modular flexibility of the SERAC framework allows for the assimilation of multiple ancillary datasets (e.g. GIA models, Intercampaign Bias Corrections, IBC) in a common framework, to calculate mass changes for several different combinations of GIA models and IBCs and to arrive at a measure of variability from these results. We are able to determine the effect these corrections have on annual and average volume and mass change calculations in Antarctica, and to explore how these differences vary between drainage basins and with elevation. As such, this contribution presents a method that complements, and is consistent with, the 2012 Ice sheet Mass Balance Inter-comparison Exercise (IMBIE) results (Shepherd 2012). Additionally, this work will contribute to the 2016 IMBIE, which seeks to reconcile ice sheet mass changes from different observations, including laser altimetry, using different methodologies and ancillary datasets, including GIA models, Firn Densification Models, and Intercampaign Bias Corrections.

  5. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  6. EM Bias-Correction for Ice Thickness and Surface Roughness Retrievals over Rough Deformed Sea Ice

    NASA Astrophysics Data System (ADS)

    Li, L.; Gaiser, P. W.; Allard, R.; Posey, P. G.; Hebert, D. A.; Richter-Menge, J.; Polashenski, C. M.

    2016-12-01

    Very rough ridged sea ice accounts for a significant percentage of the total ice area and an even larger percentage of the total ice volume. The commonly used radar altimeter surface detection techniques are empirical in nature and work well only over level/smooth sea ice. Rough sea ice surfaces can modify the return waveforms, resulting in significant electromagnetic (EM) bias in the estimated surface elevations, and thus large errors in the ice thickness retrievals. To understand and quantify such sea ice surface roughness effects, a combined EM rough surface and volume scattering model was developed to simulate radar returns from the rough sea ice `layer cake' structure. A waveform matching technique was also developed to fit observed waveforms to a physically-based waveform model and subsequently correct the roughness-induced EM bias in the estimated freeboard. This new EM Bias Corrected (EMBC) algorithm was able to better retrieve surface elevations and estimate the surface roughness parameter simultaneously. In situ data from multi-instrument airborne and ground campaigns were used to validate the ice thickness and surface roughness retrievals. For the surface roughness retrievals, we applied this EMBC algorithm to coincident LiDAR/radar measurements collected during a CryoSat-2 under-flight by the NASA IceBridge missions. Results show that not only does the waveform model fit the measured radar waveform very well, but also the roughness parameters derived independently from the LiDAR and radar data agree very well for both level and deformed sea ice. For sea ice thickness retrievals, validation based on in situ data from the coordinated CRREL/NRL field campaign demonstrates that the physically-based EMBC algorithm performs fundamentally better than the empirical algorithm over very rough deformed sea ice, suggesting that sea ice surface roughness effects can be modeled and corrected based solely on the radar return waveforms.

  7. Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field

    NASA Astrophysics Data System (ADS)

    Constable, C.; Johnson, C. L.

    2009-05-01

    We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the first question (temporal sampling) we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the TAF and its PSV. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K.P., C.G. Constable, and C.L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, DOI 10.1029/2005GC001181. Tauxe, L., & Yamazaki, 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.
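
The resampling correction can be illustrated by reweighting sites so their age distribution approximates a uniform target: sites from over-sampled epochs are drawn less often, sites from sparse epochs more often. The clustered age distribution and the age-dependent field quantity below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical site ages (Ma): sampling heavily clustered toward young flows.
ages = rng.beta(0.6, 2.5, 2000) * 5.0
# A field quantity that drifts with age (illustrative only).
value = 10 + 0.8 * ages + rng.normal(0, 0.5, ages.size)

# Resample sites toward a uniform age distribution over 0-5 Ma by
# weighting each site inversely to the occupancy of its age bin.
hist, edges = np.histogram(ages, bins=25, range=(0, 5))
w = 1.0 / hist[np.clip(np.digitize(ages, edges) - 1, 0, 24)]
w /= w.sum()
idx = rng.choice(ages.size, size=ages.size, replace=True, p=w)

print(round(float(value.mean()), 2), round(float(value[idx].mean()), 2))
```

The raw mean is dragged toward the over-sampled young ages; the resampled mean moves toward the value expected under uniform temporal coverage.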

  8. Calibrating the ECCO ocean general circulation model using Green's functions

    NASA Technical Reports Server (NTRS)

    Menemenlis, D.; Fu, L. L.; Lee, T.; Fukumori, I.

    2002-01-01

    Green's functions provide a simple, yet effective, method to test and calibrate general circulation model (GCM) parameterizations, to study and quantify model and data errors, to correct model biases and trends, and to blend estimates from different solutions and data products.
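
A toy version of the Green's function calibration: one baseline run plus one run per perturbed parameter yields a finite-difference sensitivity matrix, and the parameter corrections then follow from a linear least-squares solve. The two-parameter stand-in "model" below is an assumption for illustration, not the ECCO GCM.

```python
import numpy as np

rng = np.random.default_rng(9)

# Stand-in "GCM": maps 2 uncertain parameters to 30 observable diagnostics.
A = rng.normal(0, 1, (30, 2))
def run_model(params):
    return A @ params + 0.05 * params[0] * params[1]   # mildly nonlinear

truth = np.array([0.8, -0.5])
data = run_model(truth) + rng.normal(0, 0.05, 30)      # "observations"

# Green's function approach: baseline run plus one perturbed run per
# parameter gives the sensitivity matrix by finite differences.
base = run_model(np.zeros(2))
eps = 0.1
G = np.column_stack([(run_model(eps * np.eye(2)[i]) - base) / eps
                     for i in range(2)])
est, *_ = np.linalg.lstsq(G, data - base, rcond=None)
print(np.round(est, 2))
```

The estimated parameters recover the truth up to observation noise and the neglected nonlinearity, which is the sense in which the method "calibrates" the model.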

  9. A retrieval-based approach to eliminating hindsight bias.

    PubMed

    Van Boekel, Martin; Varma, Keisha; Varma, Sashank

    2017-03-01

    Individuals exhibit hindsight bias when they are unable to recall their original responses to novel questions after correct answers are provided to them. Prior studies have eliminated hindsight bias by modifying the conditions under which original judgments or correct answers are encoded. Here, we explored whether hindsight bias can be eliminated by manipulating the conditions that hold at retrieval. Our retrieval-based approach predicts that if the conditions at retrieval enable sufficient discrimination of memory representations of original judgments from memory representations of correct answers, then hindsight bias will be reduced or eliminated. Experiment 1 used the standard memory design to replicate the hindsight bias effect in middle-school students. Experiments 2 and 3 modified the retrieval phase of this design, instructing participants beforehand that they would be recalling both their original judgments and the correct answers. As predicted, this enabled participants to form compound retrieval cues that discriminated original judgment traces from correct answer traces, and eliminated hindsight bias. Experiment 4 found that when participants were not instructed beforehand that they would be making both recalls, they did not form discriminating retrieval cues, and hindsight bias returned. These experiments delineate the retrieval conditions that produce, and fail to produce, hindsight bias.

  10. Bias Correction for Assimilation of Retrieved AIRS Profiles of Temperature and Humidity

    NASA Technical Reports Server (NTRS)

    Blankenship, Clay; Zavodsky, Brad; Blackwell, William

    2014-01-01

    The Atmospheric Infrared Sounder (AIRS) is a hyperspectral radiometer aboard NASA's Aqua satellite designed to measure atmospheric profiles of temperature and humidity. AIRS retrievals are assimilated into the Weather Research and Forecasting (WRF) model over the North Pacific for some cases involving "atmospheric rivers". These events bring a large flux of water vapor to the west coast of North America and often lead to extreme precipitation in the coastal mountain ranges. An advantage of assimilating retrievals rather than radiances is that information in partly cloudy fields of view can be used. Two different Level 2 AIRS retrieval products are compared: the Version 6 AIRS Science Team standard retrievals and a neural net retrieval from MIT. Before assimilation, a bias correction is applied to adjust each layer of retrieved temperature and humidity so that the layer-mean values agree with a short-term model climatology. WRF runs assimilating each of the products are compared against each other and against a control run with no assimilation. This paper will describe the bias correction technique and results from forecasts evaluated by validation against a Total Precipitable Water (TPW) product from CIRA and against Global Forecast System (GFS) analyses.
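
    The layer-mean adjustment described above can be sketched in a few lines (a minimal illustration; array shapes and names are assumptions, not the operational code):

    ```python
    import numpy as np

    # Sketch of a layer-mean bias correction: each vertical layer of the
    # retrieved profiles is shifted by a constant so that its mean over the
    # sample matches a short-term model climatology.
    def layer_mean_bias_correction(retrievals, climatology_mean):
        """retrievals: (n_profiles, n_layers) retrieved temperature or humidity.
        climatology_mean: (n_layers,) layer means from the model climatology."""
        layer_bias = retrievals.mean(axis=0) - climatology_mean  # (n_layers,)
        return retrievals - layer_bias  # broadcast shift, one offset per layer

    profiles = np.array([[280.0, 250.0],
                         [284.0, 254.0]])      # two profiles, two layers (K)
    clim = np.array([281.0, 250.0])            # model climatology layer means
    corrected = layer_mean_bias_correction(profiles, clim)
    ```

    After correction the layer means equal the climatology by construction, while profile-to-profile variability is preserved.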

  11. Climate Impacts of CALIPSO-Guided Corrections to Black Carbon Aerosol Vertical Distributions in a Global Climate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovilakam, Mahesh; Mahajan, Salil; Saravanan, R.

    Here, we alleviate the bias in the tropospheric vertical distribution of black carbon aerosols (BC) in the Community Atmosphere Model (CAM4) using Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)-derived vertical profiles. A suite of sensitivity experiments is conducted with 1x, 5x, and 10x the present-day model-estimated BC concentration climatology, with (corrected, CC) and without (uncorrected, UC) the CALIPSO-corrected BC vertical distribution. The globally averaged top-of-the-atmosphere radiative flux perturbation of the CC experiments is ~8-50% smaller than that of the uncorrected (UC) BC experiments, largely due to an increase in low-level clouds. The global average surface temperature increases, the global average precipitation decreases, and the ITCZ moves northward with increasing BC radiative forcing, irrespective of the vertical distribution of BC. Further, tropical expansion metrics for the poleward extent of the Northern Hemisphere Hadley cell (HC) indicate that simulated HC expansion is not sensitive to existing model biases in the BC vertical distribution.

  12. Climate Impacts of CALIPSO-Guided Corrections to Black Carbon Aerosol Vertical Distributions in a Global Climate Model

    DOE PAGES

    Kovilakam, Mahesh; Mahajan, Salil; Saravanan, R.; ...

    2017-09-13

    Here, we alleviate the bias in the tropospheric vertical distribution of black carbon aerosols (BC) in the Community Atmosphere Model (CAM4) using Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)-derived vertical profiles. A suite of sensitivity experiments is conducted with 1x, 5x, and 10x the present-day model-estimated BC concentration climatology, with (corrected, CC) and without (uncorrected, UC) the CALIPSO-corrected BC vertical distribution. The globally averaged top-of-the-atmosphere radiative flux perturbation of the CC experiments is ~8-50% smaller than that of the uncorrected (UC) BC experiments, largely due to an increase in low-level clouds. The global average surface temperature increases, the global average precipitation decreases, and the ITCZ moves northward with increasing BC radiative forcing, irrespective of the vertical distribution of BC. Further, tropical expansion metrics for the poleward extent of the Northern Hemisphere Hadley cell (HC) indicate that simulated HC expansion is not sensitive to existing model biases in the BC vertical distribution.

  13. Simple statistical bias correction techniques greatly improve moderate resolution air quality forecast at station level

    NASA Astrophysics Data System (ADS)

    Curci, Gabriele; Falasca, Serena

    2017-04-01

    Deterministic air quality forecasting is routinely carried out at many local environmental agencies in Europe and throughout the world by means of Eulerian chemistry-transport models. The skill of these models in predicting the ground-level concentrations of relevant pollutants (ozone, nitrogen dioxide, particulate matter) a few days ahead has greatly improved in recent years, but it is not yet always compliant with the quality level required for decision making (e.g. the European Commission has set a maximum uncertainty of 50% on daily values of relevant pollutants). Post-processing of deterministic model output is thus still regarded as a useful tool to make the forecast more reliable. In this work, we test several bias correction techniques applied to a long-term dataset of air quality forecasts over Europe and Italy. We used the WRF-CHIMERE modelling system, which provides operational experimental chemical weather forecasts at CETEMPS (http://pumpkin.aquila.infn.it/forechem/), to simulate the years 2008-2012 at low resolution over Europe (0.5° x 0.5°) and moderate resolution over Italy (0.15° x 0.15°). We compared the simulated dataset with available observations from the European Environment Agency database (AirBase) and characterized model skill and compliance with EU legislation using the Delta tool from the FAIRMODE project (http://fairmode.jrc.ec.europa.eu/). The bias correction techniques adopted are, in order of complexity: (1) application of multiplicative factors calculated as the ratio of model-to-observed concentrations averaged over the previous days; (2) correction of the statistical distribution of model forecasts, in order to make it similar to that of the observations; (3) development and application of Model Output Statistics (MOS) regression equations. We illustrate the differences and advantages/disadvantages of the three approaches. All the methods are relatively easy to implement for other modelling systems.
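
    Method (1) above can be sketched in a few lines; the trailing-window length and the orientation of the ratio (observed over modelled, so the factor multiplies the forecast) are illustrative choices, not necessarily those of the study:

    ```python
    import numpy as np

    # Multiplicative bias correction sketch: scale each forecast by the ratio
    # of observed to modelled concentrations averaged over a trailing window.
    def ratio_bias_correction(forecast, past_obs, past_model, window=7):
        obs_mean = np.mean(past_obs[-window:])
        mod_mean = np.mean(past_model[-window:])
        factor = obs_mean / mod_mean       # multiplicative correction factor
        return forecast * factor

    # Toy example: the model systematically halves the true concentration.
    past_obs = np.array([40.0, 44.0, 42.0, 38.0, 41.0, 43.0, 40.0])
    past_mod = past_obs / 2.0
    corrected = ratio_bias_correction(30.0, past_obs, past_mod)
    ```

    With a constant multiplicative model error, the factor recovers the truth exactly; in practice the window length trades responsiveness against noise.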

  14. Evaluating the utility of dynamical downscaling in agricultural impacts projections

    PubMed Central

    Glotter, Michael; Elliott, Joshua; McInerney, David; Best, Neil; Foster, Ian; Moyer, Elisabeth J.

    2014-01-01

    Interest in estimating the potential socioeconomic costs of climate change has led to the increasing use of dynamical downscaling—nested modeling in which regional climate models (RCMs) are driven with general circulation model (GCM) output—to produce fine-spatial-scale climate projections for impacts assessments. We evaluate here whether this computationally intensive approach significantly alters projections of agricultural yield, one of the greatest concerns under climate change. Our results suggest that it does not. We simulate US maize yields under current and future CO2 concentrations with the widely used Decision Support System for Agrotechnology Transfer crop model, driven by a variety of climate inputs including two GCMs, each in turn downscaled by two RCMs. We find that no climate model output can reproduce yields driven by observed climate unless a bias correction is first applied. Once a bias correction is applied, GCM- and RCM-driven US maize yields are essentially indistinguishable in all scenarios (<10% discrepancy, equivalent to error from observations). Although RCMs correct some GCM biases related to fine-scale geographic features, errors in yield are dominated by broad-scale (100s of kilometers) GCM systematic errors that RCMs cannot compensate for. These results support previous suggestions that the benefits for impacts assessments of dynamically downscaling raw GCM output may not be sufficient to justify its computational demands. Progress on fidelity of yield projections may benefit more from continuing efforts to understand and minimize systematic error in underlying climate projections. PMID:24872455

  15. Assessing the Regional/Diurnal Bias between Satellite Retrievals and GEOS-5/MERRA Model Estimates of Land Surface Temperature

    NASA Astrophysics Data System (ADS)

    Scarino, B. R.; Smith, W. L., Jr.; Minnis, P.; Bedka, K. M.

    2017-12-01

    Atmospheric models rely on high-accuracy, high-resolution initial radiometric and surface conditions for better short-term meteorological forecasts, as well as for improved evaluation of global climate models. Continuous remote sensing of the Earth's energy budget, as conducted by the Clouds and the Earth's Radiant Energy System (CERES) project, allows for near-real-time evaluation of cloud and surface radiation properties. It is unfortunately common for there to be bias between atmospheric/surface radiation models and Earth observations. For example, satellite-observed surface skin temperature (Ts), an important parameter for characterizing the energy exchange at the ground/water-atmosphere interface, can be biased due to atmospheric adjustment assumptions and anisotropy effects. Similarly, models are potentially biased by errors in initial conditions and regional forcing assumptions, which can be mitigated through assimilation with true measurements. As such, when frequent, broad-coverage, and accurate retrievals of satellite Ts are available, important insights into model estimates of Ts can be gained. The Satellite ClOud and Radiation Property retrieval System (SatCORPS) employs a single-channel thermal-infrared method to produce anisotropy-corrected Ts over clear-sky land and ocean surfaces from data taken by geostationary Earth orbit (GEO) satellite imagers. Regional and diurnal changes in model land surface temperature (LST) performance can be assessed owing to the near-continuous measurements of LST offered by GEO satellites, which are accurate to within 0.2 K. A seasonal, hourly comparison of satellite-observed LST with the NASA Goddard Earth Observing System Version 5 (GEOS-5) and the Modern-Era Retrospective Analysis for Research and Applications (MERRA) LST estimates is conducted to reveal regional and diurnal biases.
    This assessment is an important first step for evaluating the effectiveness of Ts assimilation, as well as for determining the impact anisotropy correction has on observation-model bias, and is of critical importance for CERES.

  16. Development of an Empirical Local Magnitude Formula for Northern Oklahoma

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Karimi, S.; Moores, A. O.

    2015-12-01

    In this paper we focus on determining a local magnitude formula for northern Oklahoma that is unbiased with distance, by empirically constraining the attenuation properties within the region of interest based on the amplitudes of observed seismograms. For regional networks detecting events over several hundred kilometres, distance correction terms play an important role in determining the magnitude of an event. Standard distance correction terms, such as those of Hutton and Boore (1987), may have a significant bias with distance if applied in a region with different attenuation properties, resulting in incorrect magnitudes. We present data from a regional network of broadband seismometers installed in bedrock in northern Oklahoma. Events with magnitudes between 2.0 and 4.5, distributed evenly across the network, are considered. We find that existing models show a bias with respect to hypocentral distance. Observed amplitude measurements demonstrate that there is a significant Moho-bounce effect that mandates the use of a trilinear attenuation model in order to avoid bias in the distance correction terms. We present two different approaches to local magnitude calibration. The first maintains the classic definition of local magnitude as proposed by Richter. The second calibrates local magnitude so that it agrees with moment magnitude where a regional moment tensor can be computed. To this end, regional moment tensor solutions and moment magnitudes are computed for events with magnitude larger than 3.5 to allow calibration of local magnitude to moment magnitude. For both methods the new formula results in magnitudes systematically lower than previous values computed with Eaton's (1992) model. We compare the resulting magnitudes and discuss the benefits and drawbacks of each method. Our results highlight the importance of correct calibration of the distance correction terms for accurate local magnitude assessment in regional networks.
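
    A local magnitude with a trilinear distance correction has the general form ML = log10(A) + (-log A0)(r), where the slope of -log A0 changes at hinge distances (e.g. a flat segment across the Moho-bounce range). The hinge distances and slopes below are hypothetical placeholders, not the calibrated Oklahoma values:

    ```python
    import numpy as np

    # Illustrative trilinear -log A0 distance correction (r in km). Hinge
    # distances (70, 140 km) and slopes are hypothetical, for demonstration.
    def log_a0(r):
        if r <= 70.0:
            return 1.0 * np.log10(r / 100.0) + 3.0
        elif r <= 140.0:
            # flat segment across the Moho-bounce distance range
            return 1.0 * np.log10(70.0 / 100.0) + 3.0
        else:
            return 1.0 * np.log10(70.0 / 100.0) + 1.5 * np.log10(r / 140.0) + 3.0

    def local_magnitude(amplitude_mm, r_km):
        """Richter-style ML: log amplitude plus distance correction."""
        return np.log10(amplitude_mm) + log_a0(r_km)

    ml_ref = local_magnitude(1.0, 70.0)     # at the first hinge
    ml_near = local_magnitude(1.0, 100.0)   # inside the flat segment
    ```

    The flat middle segment means that, by design, the same amplitude observed anywhere in the Moho-bounce range yields the same magnitude, which is how the trilinear form removes the distance bias described above.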

  17. Australian snowpack in the NARCliM ensemble: evaluation, bias correction and future projections

    NASA Astrophysics Data System (ADS)

    Luca, Alejandro Di; Evans, Jason P.; Ji, Fei

    2017-10-01

    In this study we evaluate the ability of an ensemble of high-resolution Regional Climate Model simulations to represent snow cover characteristics over the Australian Alps, and go on to assess future projections of snowpack characteristics. Our results show that the ensemble presents a cold temperature bias and overestimates total precipitation, leading to a general overestimation of the snow cover compared with MODIS satellite data. We then produce a new set of snowpack characteristics by running a temperature-based snow melt/accumulation model forced by bias-corrected temperature and precipitation fields. While some positive snow cover biases remain, the bias-corrected (BC) dataset shows large improvements in the simulated total amounts, seasonality and spatial distribution of the snow cover compared with MODIS products. Both the raw and BC datasets are then used to assess future changes in snowpack characteristics. Both datasets show robust increases in near-surface temperatures and decreases in snowfall that lead to a substantial reduction of the snowpack over the Australian Alps: the snowpack decreases by about 15% by 2030 and 60% by 2070. While the BC data introduce large differences in the simulation of the present-climate snowpack, in relative terms future changes appear to be similar to those obtained using the raw data. Future temperature projections show a clear dependence on elevation through the snow-albedo feedback, which affects snowpack projections. Uncertainties in future projections of the snowpack are large in both datasets and are mainly dominated by the choice of the lateral boundary conditions.
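
    Temperature-based snow melt/accumulation models of the kind referred to above are often implemented as degree-day schemes; a minimal sketch with illustrative threshold and melt-factor values (not those of the study):

    ```python
    # Degree-day (temperature-index) snow model sketch: precipitation on
    # cold days accumulates as snow; on warm days the pack melts in
    # proportion to the temperature excess. Parameter values are illustrative.
    def degree_day_snowpack(temp_c, precip_mm, t_thresh=0.0, melt_factor=3.0):
        """Daily temperature (deg C) and precipitation (mm) -> daily SWE (mm)."""
        swe = 0.0
        series = []
        for t, p in zip(temp_c, precip_mm):
            if t <= t_thresh:
                swe += p                                            # snowfall
            else:
                swe = max(0.0, swe - melt_factor * (t - t_thresh))  # melt
            series.append(swe)
        return series

    swe = degree_day_snowpack([-2.0, -1.0, 2.0, 5.0], [10.0, 5.0, 0.0, 0.0])
    ```

    Feeding such a scheme bias-corrected rather than raw temperature and precipitation changes both the accumulation and the melt terms, which is why the BC snowpack in the study differs substantially from the raw one.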

  18. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small-field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions, and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived from small-field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter, performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived from calculations with a geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors, except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values, by 3.7% for a 1.1 cm diameter field and more for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.

  19. A non-parametric postprocessor for bias-correcting multi-model ensemble forecasts of hydrometeorological and hydrologic variables

    NASA Astrophysics Data System (ADS)

    Brown, James; Seo, Dong-Jun

    2010-05-01

    Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables, and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and other bias-correction techniques, are often highly cross-correlated, both within and between models. Thus, we propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors are used to provide maximum information content in terms of the total variance explained. 
The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results are presented. Extension to multimodel ensembles from the NCEP GFS and Short Range Ensemble Forecast (SREF) systems is also proposed.
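
    The threshold-based view of the forecast distribution used by ICK connects directly to the CRPS, which the abstract notes is effectively minimized when infinitely many thresholds are employed. A discrete-threshold approximation of the CRPS for an ensemble forecast might look like this (a sketch, not the authors' implementation):

    ```python
    import numpy as np

    # CRPS as a threshold integral: the squared difference between the
    # forecast cdf (here the empirical ensemble cdf) and the observation's
    # step function, integrated over a discrete threshold grid.
    def crps_from_thresholds(ensemble, obs, thresholds):
        ens = np.asarray(ensemble, dtype=float)
        cdf = (ens[None, :] <= thresholds[:, None]).mean(axis=1)  # forecast cdf
        step = (obs <= thresholds).astype(float)                  # obs indicator
        dz = np.gradient(thresholds)
        return np.sum((cdf - step) ** 2 * dz)

    th = np.linspace(-5.0, 5.0, 1001)
    perfect = crps_from_thresholds([0.0, 0.0, 0.0], 0.0, th)  # sharp and right
    spread = crps_from_thresholds([-1.0, 0.0, 1.0], 0.0, th)  # correct but diffuse
    ```

    A sharp, correct forecast scores zero; spread or bias increases the score, which is the quantity the indicator-threshold estimators in the abstract effectively minimize.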

  20. Can bias correction and statistical downscaling methods improve the skill of seasonal precipitation forecasts?

    NASA Astrophysics Data System (ADS)

    Manzanas, R.; Lucero, A.; Weisheimer, A.; Gutiérrez, J. M.

    2018-02-01

    Statistical downscaling methods are popular post-processing tools which are widely used in many sectors to adapt the coarse-resolution biased outputs from global climate simulations to the regional-to-local scale typically required by users. They range from simple and pragmatic Bias Correction (BC) methods, which directly adjust the model outputs of interest (e.g. precipitation) according to the available local observations, to more complex Perfect Prognosis (PP) ones, which indirectly derive local predictions (e.g. precipitation) from appropriate upper-air large-scale model variables (predictors). Statistical downscaling methods have been extensively used and critically assessed in climate change applications; however, their advantages and limitations in seasonal forecasting are not well understood yet. In particular, a key problem in this context is whether they serve to improve the forecast quality/skill of raw model outputs beyond the adjustment of their systematic biases. In this paper we analyze this issue by applying two state-of-the-art BC and two PP methods to downscale precipitation from a multimodel seasonal hindcast in a challenging tropical region, the Philippines. To properly assess the potential added value beyond the reduction of model biases, we consider two validation scores which are not sensitive to changes in the mean (correlation and reliability categories). Our results show that, whereas BC methods maintain or worsen the skill of the raw model forecasts, PP methods can yield significant skill improvement (worsening) in cases for which the large-scale predictor variables considered are better (worse) predicted by the model than precipitation. For instance, PP methods are found to increase (decrease) model reliability in nearly 40% of the stations considered in boreal summer (autumn). Therefore, the choice of a convenient downscaling approach (either BC or PP) depends on the region and the season.
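
    The role of the mean-insensitive validation scores mentioned above can be illustrated directly: a bias correction that only shifts the forecast mean leaves the correlation with observations unchanged, so any skill improvement must come from elsewhere. Synthetic data, for illustration only:

    ```python
    import numpy as np

    # Correlation is invariant under a mean-shift bias correction, so such a
    # correction cannot by itself improve this aspect of forecast skill.
    rng = np.random.default_rng(2)
    obs = rng.normal(size=200)
    raw = 0.6 * obs + rng.normal(scale=0.8, size=200) + 5.0  # skilful but biased
    bc = raw - raw.mean() + obs.mean()                       # mean-shift "BC"

    r_raw = np.corrcoef(raw, obs)[0, 1]
    r_bc = np.corrcoef(bc, obs)[0, 1]
    bias_raw = raw.mean() - obs.mean()
    bias_bc = bc.mean() - obs.mean()
    ```

    The correction removes the systematic bias entirely while the correlation stays put, matching the paper's finding that BC methods adjust biases without adding skill.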

  1. Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5

    NASA Astrophysics Data System (ADS)

    Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.

    2014-12-01

    MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not properly account for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets, as well as high-resolution retrievals in the NYC area, illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption involved in using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method to obtain bias correction of the MODIS AOD using multiple factors, including surface reflectivity at 2130 nm, sun-view geometrical factors and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes, the urban pixels constitute the largest deviations from the operational model.

  2. Measuring willingness to pay to improve municipal water in southeast Anatolia, Turkey

    NASA Astrophysics Data System (ADS)

    Bilgic, Abdulbaki

    2010-12-01

    Increasing demands for water and quality concerns have highlighted the importance of accounting for household perceptions before local municipalities rehabilitate existing water infrastructures and bring them into compliance. We compared different willingness-to-pay (WTP) estimates using household surveys in the southern Anatolian region of Turkey. Our study is the first of its kind in Turkey. Biases resulting from sample selection and the endogeneity of explanatory variables were corrected. When compared to a univariate probit model, correction of these biases was shown to result in statistically significant findings through moderate reductions in mean WTP.

  3. High-resolution dynamical downscaling of the future Alpine climate

    NASA Astrophysics Data System (ADS)

    Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph

    2017-04-01

    The Alpine region, and Switzerland in particular, is a challenging area for simulating and analysing Global Climate Model (GCM) results. This is mostly due to the combination of very complex topography and the still rather coarse horizontal resolution of current GCMs, at which many of the processes that drive the local weather and climate cannot be resolved. In our study, the Weather Research and Forecasting (WRF) model is used to dynamically downscale a GCM simulation to a resolution as high as 2 km x 2 km. WRF is driven by initial and boundary conditions produced with the Community Earth System Model (CESM) for the recent past (control run) and until 2100 using the RCP8.5 climate scenario (future run). The control run downscaled with WRF covers the period 1976-2005, while the future run investigates a 20-year slice simulated for 2080-2099. We compare the control WRF-CESM simulation to an observational product provided by MeteoSwiss and to an additional WRF simulation driven by the ERA-Interim reanalysis, to estimate the bias introduced by the extra modelling step of our framework. Several bias-correction methods are evaluated, including a quantile mapping technique, to ameliorate the bias in the control WRF-CESM simulation. In the next step of our study these corrections are applied to our future WRF-CESM run. The resulting downscaled and bias-corrected data are analysed for the properties of precipitation and wind speed in the future climate. Our special interest focuses on the absolute quantities simulated for these meteorological variables, as these are used to identify extreme events such as wind storms and situations that can lead to floods.
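
    Empirical quantile mapping, one of the bias-correction methods evaluated above, can be sketched as follows (a minimal version; the study's exact implementation may differ):

    ```python
    import numpy as np

    # Empirical quantile mapping sketch: each simulated value is mapped to
    # the observed value with the same empirical quantile in the
    # calibration period.
    def quantile_map(simulated, obs_cal, sim_cal):
        quantiles = np.linspace(0.0, 1.0, 101)
        sim_q = np.quantile(sim_cal, quantiles)
        obs_q = np.quantile(obs_cal, quantiles)
        return np.interp(simulated, sim_q, obs_q)

    # Toy example: the "model" distorts observations by a scale and offset.
    rng = np.random.default_rng(1)
    obs = rng.gamma(2.0, 2.0, size=5000)   # synthetic "observations"
    sim = obs * 0.5 + 1.0                  # model with scale/offset bias
    corrected = quantile_map(sim, obs, sim)
    ```

    Because the toy bias is a monotone (here affine) distortion, the mapping recovers the observed distribution essentially exactly; values beyond the calibration range would need an extrapolation rule, one of the known limitations of the approach.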

  4. Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.

    PubMed

    Ripple, Dean C; Hu, Zhishang

    2016-03-01

    Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.

  5. High-end climate change impact on European runoff and low flows – exploring the effects of forcing biases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papadimitriou, Lamprini V.; Koutroulis, Aristeidis G.; Grillakis, Manolis G.

    Climate models project a much more substantial warming than the 2 °C target under the more probable emission scenarios, making higher-end scenarios increasingly plausible. Freshwater availability under such conditions is a key issue of concern. In this study, an ensemble of Euro-CORDEX projections under RCP8.5 is used to assess the mean and low hydrological states under +4 °C of global warming for the European region. Five major European catchments were analysed in terms of future drought climatology, and the impact of +2 °C versus +4 °C global warming was investigated. The effect of bias correction of the climate model outputs, and of the observations used for this adjustment, was also quantified. Projections indicate an intensification of the water cycle at higher levels of warming. Even for areas where the average state may not be considerably affected, low flows are expected to reduce, leading to changes in the number of dry days and thus drought climatology. The identified increasing or decreasing runoff trends are substantially intensified when moving from +2 to +4 °C of global warming. Bias correction resulted in an improved representation of the historical hydrology. Moreover, it is also found that the selection of the observational data set used for the bias correction has an impact on the projected signal that can be of the same order of magnitude as the selection of the Global Climate Model (GCM).

  6. High-end climate change impact on European runoff and low flows – exploring the effects of forcing biases

    DOE PAGES

    Papadimitriou, Lamprini V.; Koutroulis, Aristeidis G.; Grillakis, Manolis G.; ...

    2016-05-10

    Climate models project a much more substantial warming than the 2 °C target under the more probable emission scenarios, making higher-end scenarios increasingly plausible. Freshwater availability under such conditions is a key issue of concern. In this study, an ensemble of Euro-CORDEX projections under RCP8.5 is used to assess the mean and low hydrological states under +4 °C of global warming for the European region. Five major European catchments were analysed in terms of future drought climatology, and the impact of +2 °C versus +4 °C global warming was investigated. The effect of bias correction of the climate model outputs, and of the observations used for this adjustment, was also quantified. Projections indicate an intensification of the water cycle at higher levels of warming. Even for areas where the average state may not be considerably affected, low flows are expected to reduce, leading to changes in the number of dry days and thus drought climatology. The identified increasing or decreasing runoff trends are substantially intensified when moving from +2 to +4 °C of global warming. Bias correction resulted in an improved representation of the historical hydrology. Moreover, it is also found that the selection of the observational data set used for the bias correction has an impact on the projected signal that can be of the same order of magnitude as the selection of the Global Climate Model (GCM).

  7. Direct estimation and correction of bias from temporally variable non-stationary noise in a channelized Hotelling model observer.

    PubMed

    Fetterly, Kenneth A; Favazza, Christopher P

    2016-08-07

    Channelized Hotelling model observer (CHO) methods were developed to assess performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame(-1) resulted in d′ estimates which were as much as 2.9x greater than expected of a quantum-limited system. Over-estimation of d′ was presumed to be a result of bias error due to temporally variable non-stationary noise. Statistical theory which allows for independent contributions of 'signal' from a test object (o) and temporally variable non-stationary noise (ns) was developed. The theory demonstrates that the biased d′ is the sum of the detectability indices associated with the test object (d′_o) and non-stationary noise (d′_ns). Given the nature of the imaging system and the experimental methods, d′_o cannot be directly determined independent of d′_ns. However, methods to estimate d′_ns independent of d′_o were developed. In accordance with the theory, d′_ns was subtracted from experimental estimates of d′, providing an unbiased estimate of d′_o. Estimates of d′_o exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide d′_o estimates which are accurate and precise for [Formula: see text]. Further, results demonstrated that the source of bias was detector electronic readout noise.
    In summary, this work presents theory and methods to test for the presence of bias in Hotelling model observers due to temporally variable non-stationary noise and correct this bias when the temporally variable non-stationary noise is independent and additive with respect to the test object signal.

  8. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
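The difference between a simple average and a bias-corrected weighted combination can be sketched on toy data. The inverse-error-variance weighting below is an illustrative choice, not necessarily the exact WAM weighting used in the paper, and all numbers are made up:

```python
import numpy as np

# Observed flows and three uncalibrated model simulations (toy data).
obs = np.array([10.0, 12.0, 8.0, 15.0, 11.0])
sims = np.array([
    [12.0, 15.0, 10.0, 17.0, 13.0],   # model A: wet bias
    [ 7.0,  9.0,  6.0, 12.0,  8.0],   # model B: dry bias
    [10.5, 12.0,  8.5, 15.5, 11.5],   # model C: nearly unbiased
])

# Simple Multi-model Average (SMA): equal weights, no bias correction.
sma = sims.mean(axis=0)

# Bias-corrected weighted combination: remove each member's mean bias,
# then weight members by inverse error variance against observations.
debiased = sims - (sims.mean(axis=1, keepdims=True) - obs.mean())
err_var = ((debiased - obs) ** 2).mean(axis=1)
w = (1.0 / err_var) / (1.0 / err_var).sum()
combined = (w[:, None] * debiased).sum(axis=0)

def rmse(x):
    return float(np.sqrt(((x - obs) ** 2).mean()))
```

On this toy example the bias-corrected combination has a lower RMSE than the SMA, consistent with the study's finding that combinations incorporating bias correction outperform simple averages.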

  9. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
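A bias-corrected, transformed-linear rating curve can be sketched on synthetic data. The retransformation bias correction shown here is Duan's nonparametric smearing estimator, one common choice (the paper does not specify which correction it used):

```python
import numpy as np

# Synthetic discharge Q and sediment concentration C following a power law
# C = 0.5 * Q**1.5 with multiplicative lognormal noise.
rng = np.random.default_rng(0)
Q = rng.uniform(5.0, 200.0, size=500)
C = 0.5 * Q**1.5 * np.exp(rng.normal(0.0, 0.4, size=500))

# Transformed-linear model: fit ln C = a + b ln Q by least squares.
b, a = np.polyfit(np.log(Q), np.log(C), 1)
resid = np.log(C) - (a + b * np.log(Q))

# Naive retransformation exp(a + b ln Q) underestimates the mean;
# Duan's smearing estimator (mean of exp(residuals)) corrects this bias.
smearing = np.exp(resid).mean()
C_naive = np.exp(a + b * np.log(Q))
C_corrected = smearing * C_naive
```

The smearing factor exceeds 1 whenever the log-space residuals have nonzero variance, which is exactly the retransformation bias the "bias-corrected, transformed-linear" model addresses.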

  10. Multivariate bias adjustment of high-dimensional climate simulations: the Rank Resampling for Distributions and Dependences (R2D2) bias correction

    NASA Astrophysics Data System (ADS)

    Vrac, Mathieu

    2018-06-01

    Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs into impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, and stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess a realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - that lets the climate model drive the temporal properties and their changes in time. R2D2 is applied on temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period. 
The results indicate that the 1d-BC basically reproduces the climate model multivariate properties, 2d-R2D2 is only satisfying in the inter-variable context, 1506d-R2D2 strongly improves inter-site properties and 3012d-R2D2 is able to account for both. Applications of the proposed R2D2 method to various climate datasets are relevant for many impact studies. Prospects for improvement are numerous, such as introducing stochasticity into the dependence itself, relaxing the stability assumption, and adjusting temporal properties while including more physics in the adjustment procedures.

  11. GOTHiC, a probabilistic model to resolve complex biases and to identify real interactions in Hi-C data.

    PubMed

    Mifsud, Borbala; Martincorena, Inigo; Darbo, Elodie; Sugar, Robert; Schoenfelder, Stefan; Fraser, Peter; Luscombe, Nicholas M

    2017-01-01

    Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).
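The binomial idea can be illustrated with a minimal sketch (this is not the GOTHiC implementation; the random-ligation null used here, p ≈ 2·r_i·r_j built from marginal coverages, is a common simplification, and the contact matrix is invented):

```python
import numpy as np
from scipy.stats import binom

# Toy symmetric Hi-C contact matrix: counts between 4 genomic bins.
counts = np.array([
    [ 0, 50,  5,  3],
    [50,  0,  8,  4],
    [ 5,  8,  0,  6],
    [ 3,  4,  6,  0],
])
N = counts.sum() // 2                      # total number of read pairs

# Relative "visibility" of each bin from its marginal coverage; under a
# random-ligation null, bins i and j pair with probability ~ 2 * r_i * r_j.
cov = counts.sum(axis=1)
r = cov / cov.sum()

pvals = {}
for i in range(counts.shape[0]):
    for j in range(i + 1, counts.shape[0]):
        p_null = 2.0 * r[i] * r[j]
        # Upper-tail binomial p-value: is this pair observed more often
        # than random ligation would predict?
        pvals[(i, j)] = binom.sf(counts[i, j] - 1, N, p_null)
```

The strongly enriched pair (0, 1) gets a tiny p-value, while pairs consistent with the null do not, which is the significance threshold the abstract describes.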

  12. Improving detection of copy-number variation by simultaneous bias correction and read-depth segmentation.

    PubMed

    Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei

    2013-02-01

    Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need to be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.
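GENSENG itself couples an HMM with negative-binomial regression; as a much simpler stand-in, the sketch below illustrates the confounder-correction step with a binned-median GC normalization of simulated read depths (all parameters are invented for illustration):

```python
import numpy as np

# Simulated window read depths confounded by GC content; a 3-copy CNV
# (copy ratio 1.5) sits in the last 200 of 2000 windows.
rng = np.random.default_rng(1)
gc = rng.uniform(0.3, 0.6, size=2000)
mu = 40.0 * np.exp(2.0 * (gc - 0.45))      # GC effect on expected depth
copy_num = np.ones(2000)
copy_num[-200:] = 1.5
depth = rng.poisson(mu * copy_num)

# Normalize each window by the median depth of its GC bin, computed from
# the presumed-diploid windows; the corrected ratio is ~1 without a CNV
# and ~1.5 inside the duplication.
bins = np.digitize(gc, np.linspace(0.3, 0.6, 11))
baseline = {b: np.median(depth[:-200][bins[:-200] == b]) for b in np.unique(bins)}
corrected = depth / np.array([baseline[b] for b in bins])
```

Without the GC correction, the depth signal of the duplication is partly masked by GC-driven depth variation; after correction the copy-ratio step stands out, which is the input a segmentation model (such as GENSENG's HMM) would then label.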

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro

    We propose a safeguard procedure for statistical inference that provides universal protection against mismodeling of the background. The method quantifies and incorporates the signal-like residuals of the background model into the likelihood function, using information available in a calibration dataset. This prevents possible false discovery claims that may arise through unknown mismodeling, and corrects the bias in limit setting created by overestimated or underestimated background. We demonstrate how the method removes the bias created by an incomplete background model using three realistic case studies.

  14. Automatic segmentation for brain MR images via a convex optimized segmentation and bias field correction coupled model.

    PubMed

    Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui

    2014-09-01

    Accurate segmentation of magnetic resonance (MR) images remains challenging, mainly due to intensity inhomogeneity, commonly known as the bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them handle the bias field with a separate pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that segments brain MR images while simultaneously correcting the bias field, even in images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of the initialization of the algorithm, we reconstructed the energy function to be convex and minimized it using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate bias of quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated on images acquired with a variety of imaging modalities, with promising results. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Novel measures of linkage disequilibrium that correct the bias due to population structure and relatedness.

    PubMed

    Mangin, B; Siberchicot, A; Nicolas, S; Doligez, A; This, P; Cierco-Ayrolles, C

    2012-03-01

    Among the several linkage disequilibrium measures known to capture different features of the non-independence between alleles at different loci, the most commonly used for diallelic loci is the r² measure. In the present study, we tackled the problem of the bias of the r² estimate, which results from the sample structure and/or the relatedness between genotyped individuals. We derived two novel linkage disequilibrium measures for diallelic loci that are both extensions of the usual r² measure. The first one, r²_S, uses the population structure matrix, which consists of information about the origins of each individual and the admixture proportions of each individual genome. The second one, r²_V, includes the kinship matrix into the calculation. These two corrections can be applied together in order to correct for both biases and are defined either on phased or unphased genotypes. We proved that these novel measures are linked to the power of association tests under the mixed linear model including structure and kinship corrections. We validated them on simulated data and applied them to real data sets collected on Vitis vinifera plants. Our results clearly showed the usefulness of the two corrected r² measures, which actually captured 'true' linkage disequilibrium unlike the usual r² measure.
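For reference, the usual (uncorrected) r² for diallelic loci can be computed from phased haplotypes in a few lines; the corrected measures r²_S and r²_V additionally require the population-structure and kinship matrices, which are not shown in this illustrative sketch:

```python
import numpy as np

# Phased haplotypes at two diallelic loci (rows = haplotypes, 0/1 alleles).
hap = np.array([
    [0, 0], [0, 0], [0, 0], [0, 1],
    [1, 1], [1, 1], [1, 0], [1, 1],
    [0, 0], [1, 1], [0, 0], [1, 1],
])

pA = hap[:, 0].mean()                 # allele-1 frequency at locus A
pB = hap[:, 1].mean()                 # allele-1 frequency at locus B
pAB = (hap[:, 0] * hap[:, 1]).mean()  # frequency of the 1-1 haplotype
D = pAB - pA * pB                     # linkage disequilibrium coefficient

r2 = D**2 / (pA * (1 - pA) * pB * (1 - pB))
```

In a structured or related sample, this plain r² is inflated, which is precisely the bias the r²_S and r²_V extensions are designed to remove.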

  16. On the importance of avoiding shortcuts in applying cognitive models to hierarchical data.

    PubMed

    Boehm, Udo; Marsman, Maarten; Matzke, Dora; Wagenmakers, Eric-Jan

    2018-06-12

    Psychological experiments often yield data that are hierarchically structured. A number of popular shortcut strategies in cognitive modeling do not properly accommodate this structure and can result in biased conclusions. To gauge the severity of these biases, we conducted a simulation study for a two-group experiment. We first considered a modeling strategy that ignores the hierarchical data structure. In line with theoretical results, our simulations showed that Bayesian and frequentist methods that rely on this strategy are biased towards the null hypothesis. Secondly, we considered a modeling strategy that takes a two-step approach by first obtaining participant-level estimates from a hierarchical cognitive model and subsequently using these estimates in a follow-up statistical test. Methods that rely on this strategy are biased towards the alternative hypothesis. Only hierarchical models of the multilevel data lead to correct conclusions. Our results are particularly relevant for the use of hierarchical Bayesian parameter estimates in cognitive modeling.

  17. An algebraic algorithm for nonuniformity correction in focal-plane arrays.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Hardie, Russell C

    2002-09-01

    A scene-based algorithm is developed to compensate for bias nonuniformity in focal-plane arrays. Nonuniformity can be extremely problematic, especially for mid- to far-infrared imaging systems. The technique is based on use of estimates of interframe subpixel shifts in an image sequence, in conjunction with a linear-interpolation model for the motion, to extract information on the bias nonuniformity algebraically. The performance of the proposed algorithm is analyzed by using real infrared and simulated data. One advantage of this technique is its simplicity; it requires relatively few frames to generate an effective correction matrix, thereby permitting the execution of frequent on-the-fly nonuniformity correction as drift occurs. Additionally, the performance is shown to exhibit considerable robustness with respect to lack of the common types of temporal and spatial irradiance diversity that are typically required by statistical scene-based nonuniformity correction techniques.

  18. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  19. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    NASA Astrophysics Data System (ADS)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
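The point-by-point correction idea can be sketched with scikit-learn's `GradientBoostingRegressor` as a stand-in for TreeNet's stochastic gradient boosting (the features, their effect on lidar error, and all numbers below are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic lidar/ground-truth pairs: lidar elevation error grows with a
# waveform-width feature (a proxy for vegetation height) plus noise.
rng = np.random.default_rng(2)
n = 1500
waveform_width = rng.uniform(0.0, 1.0, n)
dist_shore = rng.uniform(0.0, 100.0, n)
true_z = rng.uniform(0.5, 1.5, n)
lidar_z = true_z + 0.25 * waveform_width + rng.normal(0.0, 0.03, n)

# Learn a location-specific correction (predicted bias) from the features;
# subsample < 1 gives the "stochastic" part of gradient boosting.
X = np.column_stack([waveform_width, dist_shore, lidar_z])
model = GradientBoostingRegressor(subsample=0.7, random_state=0)
model.fit(X, lidar_z - true_z)
corrected_z = lidar_z - model.predict(X)

bias_before = float((lidar_z - true_z).mean())
bias_after = float((corrected_z - true_z).mean())
```

Unlike a single global or per-vegetation-class offset, the regression produces a different correction for each point, mirroring the paper's move away from constant correction factors.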

  20. The importance of topographically corrected null models for analyzing ecological point processes.

    PubMed

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.

  1. Calibration of GOES-derived solar radiation data using a distributed network of surface measurements in Florida, USA

    USGS Publications Warehouse

    Sumner, David M.; Pathak, Chandra S.; Mecikalski, John R.; Paech, Simon J.; Wu, Qinglong; Sangoyomi, Taiye; Babcock, Roger W.; Walton, Raymond

    2008-01-01

    Solar radiation data are critically important for the estimation of evapotranspiration. Analysis of visible-channel data derived from Geostationary Operational Environmental Satellites (GOES) using radiative transfer modeling has been used to produce spatially- and temporally-distributed datasets of solar radiation. An extensive network of (pyranometer) surface measurements of solar radiation in the State of Florida has allowed refined calibration of a GOES-derived daily integrated radiation data product. This refinement of radiation data allowed for corrections of satellite sensor drift, satellite generational change, and consideration of the highly-variable cloudy conditions that are typical of Florida. To aid in calibration of a GOES-derived radiation product, solar radiation data for the period 1995–2004 from 58 field stations that are located throughout the State were compiled. The GOES radiation product was calibrated by way of a three-step process: 1) comparison with ground-based pyranometer measurements on clear reference days, 2) correcting for a bias related to cloud cover, and 3) deriving month-by-month bias correction factors. Pre-calibration results indicated good model performance, with a station-averaged model error of 2.2 MJ m⁻² day⁻¹ (13 percent). Calibration reduced errors to 1.7 MJ m⁻² day⁻¹ (10 percent) and also removed time- and cloudiness-related biases. The final dataset has been used to produce Statewide evapotranspiration estimates.
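Step 3 of the calibration, month-by-month multiplicative bias factors, can be sketched on synthetic data (the seasonal bias shape and all numbers are invented; the real product also applies the clear-day and cloud-cover steps first):

```python
import numpy as np

# Daily satellite-estimated vs. pyranometer-measured radiation
# (MJ m^-2 day^-1), with a seasonally varying satellite over-estimate.
rng = np.random.default_rng(3)
months = np.tile(np.arange(1, 13), 30)        # toy record: 30 days per month
measured = (15.0 + 8.0 * np.sin(2 * np.pi * (months - 3) / 12)
            + rng.normal(0.0, 1.0, months.size))
satellite = (measured * (1.10 - 0.04 * np.cos(2 * np.pi * months / 12))
             + rng.normal(0.0, 0.5, months.size))

# Month-by-month multiplicative bias correction factors.
factors = {m: measured[months == m].sum() / satellite[months == m].sum()
           for m in range(1, 13)}
calibrated = satellite * np.array([factors[m] for m in months])

err_before = float(np.abs(satellite - measured).mean())
err_after = float(np.abs(calibrated - measured).mean())
```

Because the simulated satellite product over-estimates in every month, all twelve factors fall below 1, and applying them removes the seasonal component of the bias.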

  2. Verification of ECMWF System 4 for seasonal hydrological forecasting in a northern climate

    NASA Astrophysics Data System (ADS)

    Bazile, Rachel; Boucher, Marie-Amélie; Perreault, Luc; Leconte, Robert

    2017-11-01

    Hydropower production requires optimal dam and reservoir management to prevent flooding damage and avoid operation losses. In a northern climate, where spring freshet constitutes the main inflow volume, seasonal forecasts can help to establish a yearly strategy. Long-term hydrological forecasts often rely on past observations of streamflow or meteorological data. Another alternative is to use ensemble meteorological forecasts produced by climate models. In this paper, those produced by the ECMWF (European Centre for Medium-Range Weather Forecasts) System 4 are examined and bias is characterized. Bias correction, through the linear scaling method, improves the performance of the raw ensemble meteorological forecasts in terms of continuous ranked probability score (CRPS). Then, three seasonal ensemble hydrological forecasting systems are compared: (1) the climatology of simulated streamflow, (2) the ensemble hydrological forecasts based on climatology (ESP) and (3) the hydrological forecasts based on bias-corrected ensemble meteorological forecasts from System 4 (corr-DSP). Simulated streamflow computed using observed meteorological data is used as benchmark. Accounting for initial conditions is valuable even for long-term forecasts. ESP and corr-DSP both outperform the climatology of simulated streamflow for lead times from 1 to 5 months depending on the season and watershed. Integrating information about future meteorological conditions also improves monthly volume forecasts. For the 1-month lead time, a gain exists for almost all watersheds during winter, summer and fall. However, volume forecast performance for spring varies from one watershed to another. For most of them, the performance is close to the performance of ESP. For longer lead times, the CRPS skill score is mostly in favour of ESP, even if for many watersheds, ESP and corr-DSP have comparable skill. Corr-DSP appears quite reliable but, in some cases, under-dispersion or bias is observed. 
A more complex bias-correction method should be further investigated to remedy this weakness and take more advantage of the ensemble forecasts produced by the climate model. Overall, in this study, bias-corrected ensemble meteorological forecasts appear to be an interesting source of information for hydrological forecasting for lead times up to 1 month. They could also complement ESP for longer lead times.
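The linear scaling method used above is one of the simplest bias corrections: a single multiplicative factor (for precipitation) or additive shift (for temperature) computed over a calibration period. A minimal sketch with invented numbers:

```python
import numpy as np

# Calibration period: observed and forecast monthly precipitation (mm).
obs_cal = np.array([80., 95., 60., 110., 70., 85., 90., 65., 100., 75.])
fc_cal  = np.array([60., 70., 45.,  85., 55., 65., 70., 50.,  75., 55.])

# Linear scaling for precipitation: ratio of calibration-period means.
# (For temperature, an additive shift obs_cal.mean() - fc_cal.mean()
# would be used instead.)
scale = obs_cal.mean() / fc_cal.mean()

# Apply the factor to a new forecast ensemble (rows = members).
fc_new = np.array([[58., 72., 49.],
                   [64., 69., 52.]])
fc_corrected = fc_new * scale
```

By construction the corrected forecasts match the observed calibration mean, but the method leaves variance and dependence errors untouched, which is why the authors suggest a more complex correction could further help.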

  3. Assimilation of SMOS Retrievals in the Land Information System

    NASA Technical Reports Server (NTRS)

    Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.; Crosson, William L.

    2016-01-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite provides retrievals of soil moisture in the upper 5 cm with a 30-50 km resolution and a mission accuracy requirement of 0.04 cm³ cm⁻³. These observations can be used to improve land surface model soil moisture states through data assimilation. In this paper, SMOS soil moisture retrievals are assimilated into the Noah land surface model via an Ensemble Kalman Filter within the NASA Land Information System. Bias correction is implemented using Cumulative Distribution Function (CDF) matching, with points aggregated by either land cover or soil type to reduce sampling error in generating the CDFs. An experiment was run for the warm season of 2011 to test SMOS data assimilation and to compare assimilation methods. Verification of soil moisture analyses in the 0-10 cm upper layer and root zone (0-1 m) was conducted using in situ measurements from several observing networks in the central and southeastern United States. This experiment showed that SMOS data assimilation significantly increased the anomaly correlation of Noah soil moisture with station measurements from 0.45 to 0.57 in the 0-10 cm layer. Time series at specific stations demonstrate the ability of SMOS DA to increase the dynamic range of soil moisture in a manner consistent with station measurements. Among the bias correction methods, the correction based on soil type performed best at bias reduction but also reduced correlations. The vegetation-based correction did not produce any significant differences compared to using a simple uniform correction curve.
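CDF matching maps each model value to the observed value with the same empirical non-exceedance probability. A minimal empirical-quantile sketch on synthetic soil moisture values (the distributions and parameters are invented):

```python
import numpy as np

def cdf_match(model_vals, model_ref, obs_ref):
    """Map model values through the model ECDF into observed quantiles."""
    # Empirical CDF position of each value in the model reference sample,
    # then inverse-CDF lookup in the observation reference sample.
    q = np.searchsorted(np.sort(model_ref), model_vals, side="right")
    q = np.clip(q / len(model_ref), 0.0, 1.0)
    return np.quantile(obs_ref, q)

rng = np.random.default_rng(4)
obs_ref = rng.normal(0.25, 0.05, 3000)     # "true" soil moisture climatology
model_ref = rng.normal(0.35, 0.08, 3000)   # biased, over-dispersed retrievals

matched = cdf_match(model_ref, model_ref, obs_ref)
```

The abstract's aggregation choice (by land cover or soil type) corresponds to which points are pooled into `model_ref` and `obs_ref`: larger pools reduce sampling error in the CDFs at the cost of lumping dissimilar points together.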

  4. Assimilation of SMOS Retrievals in the Land Information System

    PubMed Central

    Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.; Crosson, William L.

    2018-01-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite provides retrievals of soil moisture in the upper 5 cm with a 30-50 km resolution and a mission accuracy requirement of 0.04 cm3 cm−3. These observations can be used to improve land surface model soil moisture states through data assimilation. In this paper, SMOS soil moisture retrievals are assimilated into the Noah land surface model via an Ensemble Kalman Filter within the NASA Land Information System. Bias correction is implemented using Cumulative Distribution Function (CDF) matching, with points aggregated by either land cover or soil type to reduce sampling error in generating the CDFs. An experiment was run for the warm season of 2011 to test SMOS data assimilation and to compare assimilation methods. Verification of soil moisture analyses in the 0-10 cm upper layer and root zone (0-1 m) was conducted using in situ measurements from several observing networks in the central and southeastern United States. This experiment showed that SMOS data assimilation significantly increased the anomaly correlation of Noah soil moisture with station measurements from 0.45 to 0.57 in the 0-10 cm layer. Time series at specific stations demonstrate the ability of SMOS DA to increase the dynamic range of soil moisture in a manner consistent with station measurements. Among the bias correction methods, the correction based on soil type performed best at bias reduction but also reduced correlations. The vegetation-based correction did not produce any significant differences compared to using a simple uniform correction curve. PMID:29367795

  5. Application of a hybrid model to reduce bias and improve precision in population estimates for elk (Cervus elaphus) inhabiting a cold desert ecosystem

    USGS Publications Warehouse

    Schoenecker, Kathryn A.; Lubow, Bruce C.

    2016-01-01

    Accurately estimating the size of wildlife populations is critical to wildlife management and conservation of species. Raw counts or “minimum counts” are still used as a basis for wildlife management decisions. Uncorrected raw counts are not only negatively biased due to failure to account for undetected animals, but also provide no estimate of precision on which to judge the utility of counts. We applied a hybrid population estimation technique that combined sightability modeling, radio collar-based mark-resight, and simultaneous double count (double-observer) modeling to estimate the population size of elk in a high elevation desert ecosystem. Combining several models maximizes the strengths of each individual model while minimizing their singular weaknesses. We collected data with aerial helicopter surveys of the elk population in the San Luis Valley and adjacent mountains in Colorado, USA, in 2005 and 2007. We present estimates from 7 alternative analyses: 3 based on different methods for obtaining a raw count and 4 based on different statistical models to correct for sighting probability bias. The most reliable of these approaches is a hybrid double-observer sightability model (model MH), which uses detection patterns of 2 independent observers in a helicopter plus telemetry-based detections of radio collared elk groups. Data were fit to customized mark-resight models with individual sighting covariates. Error estimates were obtained by a bootstrapping procedure. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to double-observer modeling. The resulting population estimate corrected for multiple sources of undercount bias that, if left uncorrected, would have underestimated the true population size by as much as 22.9%. 
Our comparison of these alternative methods demonstrates how various components of our method contribute to improving the final estimate and demonstrates why each is necessary.
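The double-observer component of the hybrid method follows mark-recapture logic: groups detected by both observers act as "recaptures". As a greatly simplified illustration (not the paper's full hybrid sightability model), Chapman's bias-corrected version of the Lincoln-Petersen estimator shows how two observers' counts correct the raw minimum count; the detection numbers below are invented:

```python
# Two independent observers in the helicopter detect elk groups; groups
# seen by both play the role of recaptures.
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected two-sample abundance estimate."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

n1, n2, m = 180, 165, 140          # seen by obs. 1, by obs. 2, by both
n_raw = n1 + n2 - m                # raw "minimum count" of distinct groups
n_hat = chapman_estimate(n1, n2, m)
```

The estimate exceeds the raw count because it accounts for groups missed by both observers, the undercount bias the abstract warns about; the full model further adjusts detection by group size and other sighting covariates.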

  6. LA-ICP-MS depth profile analysis of apatite: Protocol and implications for (U-Th)/He thermochronometry

    NASA Astrophysics Data System (ADS)

    Johnstone, Samuel; Hourigan, Jeremy; Gallagher, Christopher

    2013-05-01

    Heterogeneous concentrations of α-producing nuclides in apatite have been recognized through a variety of methods. The presence of zonation in apatite complicates both traditional α-ejection corrections and diffusive models, both of which operate under the assumption of homogeneous concentrations. In this work we develop a method for measuring radial concentration profiles of 238U and 232Th in apatite by laser ablation ICP-MS depth profiling. We then focus on one application of this method, removing bias introduced by applying inappropriate α-ejection corrections. Formal treatment of laser ablation ICP-MS depth profile calibration for apatite includes construction and calibration of matrix-matched standards and quantification of rates of elemental fractionation. From this we conclude that matrix-matched standards provide more robust monitors of fractionation rate and concentrations than doped silicate glass standards. We apply laser ablation ICP-MS depth profiling to apatites from three unknown populations and small, intact crystals of Durango fluorapatite. Accurate and reproducible Durango apatite dates suggest that prolonged exposure to laser drilling does not impact cooling ages. Intracrystalline concentrations vary by at least a factor of 2 in the majority of the samples analyzed, but concentration variation only exceeds 5x in 5 grains and 10x in 1 out of the 63 grains analyzed. Modeling of synthetic concentration profiles suggests that for concentration variations of 2x and 10x, individual homogeneous versus zonation-dependent α-ejection corrections could lead to age biases of >5% and >20%, respectively. However, models based on measured concentration profiles only generated biases exceeding 5% in 13 of the 63 cases modeled. Application of zonation-dependent α-ejection corrections did not significantly reduce the age dispersion present in any of the populations studied. 
This suggests that factors beyond homogeneous α-ejection corrections are the dominant source of overdispersion in apatite (U-Th)/He cooling ages.
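The arithmetic behind such age biases is the standard first-order α-ejection correction, in which a raw (U-Th)/He age is divided by the α-retention factor F_T, so assuming the wrong F_T propagates directly into the age. A minimal sketch (all numbers are invented for illustration, not values from the study):

```python
def ft_corrected_age(raw_age_ma, ft):
    """Standard homogeneous alpha-ejection correction: divide the raw
    (U-Th)/He age by the alpha-retention factor F_T (0 < F_T <= 1)."""
    return raw_age_ma / ft

# A rim-enriched grain ejects more alphas, so its effective F_T is
# lower than the homogeneous assumption predicts.
raw_age = 30.0            # Ma (hypothetical measurement)
ft_homogeneous = 0.80     # F_T computed assuming uniform U and Th
ft_zoned = 0.72           # effective F_T for the zoned grain

age_homogeneous = ft_corrected_age(raw_age, ft_homogeneous)
age_zoned = ft_corrected_age(raw_age, ft_zoned)
relative_bias = age_homogeneous / age_zoned - 1.0   # -10% here
```

With these illustrative values the homogeneous correction underestimates the age by 10%, the same order as the >5% biases modeled above.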

  7. A Maximum-Likelihood Method to Correct for Allelic Dropout in Microsatellite Data with No Replicate Genotypes

    PubMed Central

    Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.

    2012-01-01

    Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. 
Because the data sets imputed under our model can be investigated in additional subsequent analyses, our method will be useful for preparing data for applications in diverse contexts in population genetics and molecular ecology. PMID:22851645
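The downward heterozygosity bias the method corrects is easy to reproduce with a toy simulation. This sketch only illustrates the bias mechanism; it is not the authors' maximum-likelihood EM estimator:

```python
import random
random.seed(0)

def observed_heterozygosity(n, p, dropout):
    """Simulate one biallelic locus under Hardy-Weinberg equilibrium.
    Each allelic copy independently fails to amplify with the given
    dropout rate; a heterozygote with one surviving copy is misread
    as a homozygote, and a genotype with no surviving copy is missing."""
    het = total = 0
    for _ in range(n):
        a1, a2 = random.random() < p, random.random() < p
        s1, s2 = random.random() >= dropout, random.random() >= dropout
        if not (s1 or s2):
            continue                      # both copies dropped: missing
        seen = {a for a, s in ((a1, s1), (a2, s2)) if s}
        total += 1
        het += len(seen) == 2
    return het / total

h_true = observed_heterozygosity(50000, 0.5, 0.0)     # ~0.5 under HWE
h_biased = observed_heterozygosity(50000, 0.5, 0.2)   # ~1/3, biased down
```

With a 20% per-copy dropout rate a true heterozygote is only scored as such when both copies survive (0.8² = 0.64), so observed heterozygosity falls from 0.5 to about 1/3, which is exactly the kind of bias a mistaken homozygote call produces.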

  8. Group delay variations of GPS transmitting and receiving antennas

    NASA Astrophysics Data System (ADS)

    Wanninger, Lambert; Sumaya, Hael; Beer, Susanne

    2017-09-01

    GPS code pseudorange measurements exhibit group delay variations at the transmitting and the receiving antenna. We calibrated C1 and P2 delay variations with respect to dual-frequency carrier phase observations and obtained nadir-dependent corrections for 32 satellites of the GPS constellation in early 2015 as well as elevation-dependent corrections for 13 receiving antenna models. The combined delay variations reach up to 1.0 m (3.3 ns) in the ionosphere-free linear combination for specific pairs of satellite and receiving antennas. Applying these corrections to the code measurements improves code/carrier single-frequency precise point positioning, ambiguity fixing based on the Melbourne-Wübbena linear combination, and determination of ionospheric total electron content. It also affects fractional cycle biases and differential code biases.
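Corrections of this kind are typically tabulated at discrete nadir (or elevation) angles and interpolated at the observation epoch. A minimal sketch with made-up node values; the real corrections are satellite- and antenna-specific, and subtracting the interpolated value is one possible sign convention:

```python
import bisect

def interp_correction(angle, nodes, values):
    """Piecewise-linear interpolation of a tabulated group-delay
    correction; angles outside the table are clamped to the ends."""
    if angle <= nodes[0]:
        return values[0]
    if angle >= nodes[-1]:
        return values[-1]
    i = bisect.bisect_right(nodes, angle)
    t = (angle - nodes[i - 1]) / (nodes[i] - nodes[i - 1])
    return values[i - 1] + t * (values[i] - values[i - 1])

# hypothetical nadir-dependent C1 correction table (metres)
nadir_nodes = [0.0, 4.0, 8.0, 12.0, 14.0]
c1_corr = [0.00, 0.05, 0.12, 0.30, 0.42]

# apply to a raw code pseudorange observed at 10 degrees nadir
corrected_pr = 22_345_678.90 - interp_correction(10.0, nadir_nodes, c1_corr)
```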

  9. Dynamic response tests of inertial and optical wind-tunnel model attitude measurement devices

    NASA Technical Reports Server (NTRS)

    Buehrle, R. D.; Young, C. P., Jr.; Burner, A. W.; Tripp, J. S.; Tcheng, P.; Finley, T. D.; Popernack, T. G., Jr.

    1995-01-01

    Results are presented for an experimental study of the response of inertial and optical wind-tunnel model attitude measurement systems in a wind-off simulated dynamic environment. This study is part of an ongoing activity at the NASA Langley Research Center to develop high-accuracy, advanced model attitude measurement systems that can be used in a dynamic wind-tunnel environment. This activity was prompted by the inertial model attitude sensor response observed during high levels of model vibration, which results in a model attitude measurement bias error. Significant bias errors were found for the inertial device during wind-off dynamic testing of a model system. The amount of bias present during wind-tunnel tests will depend on the amplitudes of the model dynamic response and the modal characteristics of the model system. Correction models are presented that predict the vibration-induced bias errors to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment. The optical system results were uncorrupted by model vibration in the laboratory setup.

  10. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta-analysis and group level studies.

    PubMed

    Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan

    2016-07-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
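The transformation bias described here can be reproduced with a quick beta-binomial Monte Carlo experiment. This is only an illustration of the phenomenon, not the authors' bias correction: draw p_i from a Beta distribution whose intracluster correlation is ρ, then observe how the mean of the angular transform asin(sqrt(p̂)) drifts away from asin(sqrt(p)) as ρ grows:

```python
import math, random
random.seed(1)

def arcsine_bias(p, rho, n, reps):
    """Monte Carlo bias of the angular transform asin(sqrt(p_hat))
    under a beta-binomial model with intracluster correlation rho."""
    a_plus_b = 1.0 / rho - 1.0            # Beta(a, b) with ICC rho
    a, b = p * a_plus_b, (1.0 - p) * a_plus_b
    target = math.asin(math.sqrt(p))
    total = 0.0
    for _ in range(reps):
        p_i = random.betavariate(a, b)
        x = sum(random.random() < p_i for _ in range(n))
        total += math.asin(math.sqrt(x / n))
    return total / reps - target

bias_low = arcsine_bias(p=0.3, rho=0.05, n=20, reps=20000)
bias_high = arcsine_bias(p=0.3, rho=0.30, n=20, reps=20000)
```

For p = 0.3 both biases are negative, and the magnitude grows with ρ, consistent with the approximately linear-in-ρ behavior reported above.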

  11. A Variational Approach to Simultaneous Image Segmentation and Bias Correction.

    PubMed

    Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong

    2015-08-01

    This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.

  12. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    PubMed

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias across the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.

  13. What is the importance of climate model bias when projecting the impacts of climate change on land surface processes?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, M. L.; Rajagopalan, K.; Chung, S. H.

    2014-05-16

    Regional climate change impact (CCI) studies have widely involved downscaling and bias-correcting (BC) Global Climate Model (GCM)-projected climate for driving land surface models. However, BC may cause uncertainties in projecting hydrologic and biogeochemical responses to future climate due to the impaired spatiotemporal covariance of climate variables and a breakdown of physical conservation principles. Here we quantify the impact of BC on simulated climate-driven changes in water variables (evapotranspiration, ET; runoff; snow water equivalent, SWE; and water demand for irrigation), crop yield, biogenic volatile organic compound (BVOC) and nitric oxide (NO) emissions, and dissolved inorganic nitrogen (DIN) export over the Pacific Northwest (PNW) region. We also quantify the impacts on net primary production (NPP) over a small watershed in the region (HJ Andrews). Simulation results from the coupled ECHAM5/MPI-OM model with the A1B emission scenario were first dynamically downscaled to 12 km resolution with the WRF model. A quantile-mapping-based statistical downscaling model was then used to downscale them to 1/16th degree resolution daily climate data over historical and future periods. Two climate datasets were generated, one with bias correction (BC) and one without (NBC). Impact models were then applied to estimate hydrologic and biogeochemical responses to both BC and NBC meteorological datasets. These impact models include a macro-scale hydrologic model (VIC), a coupled cropping system model (VIC-CropSyst), an ecohydrologic model (RHESSys), a biogenic emissions model (MEGAN), and a nutrient export model (Global-NEWS). Results demonstrate that the BC and NBC climate data provide consistent estimates of the climate-driven changes in water fluxes (ET, runoff, and water demand), VOC (isoprene and monoterpene) and NO emissions, mean crop yield, and river DIN export over the PNW domain. 
However, significant differences arise in projected SWE, dryland crop yield, and ET at HJ Andrews between the BC and NBC data. Even though BC post-processing has no significant impact on most of the studied variables when taking the PNW as a whole, its effects have large spatial variations, and some local areas are substantially influenced. In addition, there are months during which BC and NBC post-processing produce significant differences in projected changes, such as summer runoff. Factor-controlled simulations indicate that BC post-processing of both precipitation and temperature contributes substantially to these differences at regional scales. We conclude that there are trade-offs between using BC climate data and using direct model output for offline CCI studies. These trade-offs should be considered when designing integrated modeling frameworks for specific applications; e.g., BC may be more important when considering impacts on reservoir operations in mountainous watersheds than when investigating impacts on biogenic emissions and air quality (where VOCs are a primary indicator).
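Quantile mapping, the statistical method named in this record (and reviewed in several of the climate entries above), replaces each modelled value with the observed value at the same empirical quantile of the historical distribution. A minimal empirical sketch; the synthetic "model is 30% too wet" bias is invented for illustration:

```python
import bisect, random
random.seed(1)

def quantile_map(value, model_hist, obs_hist):
    """Empirical quantile mapping: find the value's quantile in the
    modelled historical distribution and return the observed value
    at the same quantile."""
    m = sorted(model_hist)
    o = sorted(obs_hist)
    q = (bisect.bisect_right(m, value) - 0.5) / len(m)  # plotting position
    idx = min(max(int(q * len(o)), 0), len(o) - 1)
    return o[idx]

# synthetic example: the model is uniformly 30% too wet
obs = [random.expovariate(1 / 5.0) for _ in range(1000)]
mod = [x * 1.3 for x in obs]

corrected = [quantile_map(v, mod, obs) for v in mod]
bias_raw = sum(mod) / len(mod) - sum(obs) / len(obs)
bias_corrected = sum(corrected) / len(corrected) - sum(obs) / len(obs)
```

By construction this matches the observed distribution over the calibration period; as the record above stresses, it cannot repair an implausible precipitation sequence or non-stationary errors.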

  14. QIN DAWG Validation of Gradient Nonlinearity Bias Correction Workflow for Quantitative Diffusion-Weighted Imaging in Multicenter Trials.

    PubMed

    Malyarenko, Dariya I; Wilmes, Lisa J; Arlinghaus, Lori R; Jacobs, Michael A; Huang, Wei; Helmer, Karl G; Taouli, Bachir; Yankeelov, Thomas E; Newitt, David; Chenevert, Thomas L

    2016-12-01

    Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, -35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI.
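Schematically, the central-site step described above multiplies each ADC voxel by its precomputed corrector value. The numbers below are invented purely to show the mechanics; real correctors come from the system characterization:

```python
def apply_gnl_corrector(adc_map, corrector_map):
    """Voxel-wise multiplicative correction of an ADC map by a
    precomputed gradient-nonlinearity corrector."""
    return [[adc * c for adc, c in zip(row_a, row_c)]
            for row_a, row_c in zip(adc_map, corrector_map)]

def spread(m):
    """Max-minus-min nonuniformity of a 2-D map."""
    vals = [v for row in m for v in row]
    return max(vals) - min(vals)

# toy 2x2 ADC map (units of 1e-3 mm^2/s) biased low away from isocenter
adc_biased = [[1.02, 0.95],
              [0.90, 0.78]]
corrector  = [[1.00, 1.06],
              [1.12, 1.28]]

adc_fixed = apply_gnl_corrector(adc_biased, corrector)
```

The corrected map is far more uniform, mirroring the reported reduction in GNL-induced nonuniformity after correction.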

  15. QIN DAWG Validation of Gradient Nonlinearity Bias Correction Workflow for Quantitative Diffusion-Weighted Imaging in Multicenter Trials

    PubMed Central

    Malyarenko, Dariya I.; Wilmes, Lisa J.; Arlinghaus, Lori R.; Jacobs, Michael A.; Huang, Wei; Helmer, Karl G.; Taouli, Bachir; Yankeelov, Thomas E.; Newitt, David; Chenevert, Thomas L.

    2017-01-01

    Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, −35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI. PMID:28105469

  16. High-resolution CO2 and CH4 flux inverse modeling combining GOSAT, OCO-2 and ground-based observations

    NASA Astrophysics Data System (ADS)

    Maksyutov, S. S.; Oda, T.; Saito, M.; Ito, A.; Janardanan Achari, R.; Sasakawa, M.; Machida, T.; Kaiser, J. W.; Belikov, D.; Valsala, V.; O'Dell, C.; Yoshida, Y.; Matsunaga, T.

    2017-12-01

    We develop a high-resolution CO2 and CH4 flux inversion system that is based on a coupled Lagrangian-Eulerian tracer transport model and is designed to estimate surface fluxes from atmospheric CO2 and CH4 data observed by the GOSAT and OCO-2 satellites and by global in-situ networks, including observations in Siberia. We use the Lagrangian particle dispersion model (LPDM) FLEXPART to estimate the surface flux footprint of each observation at 0.1-degree spatial resolution for three days of transport. The LPDM is coupled to a global atmospheric tracer transport model (NIES-TM). The adjoint of the coupled transport model is used in an iterative optimization procedure based on either a quasi-Newton algorithm or singular value decomposition. Combining surface and satellite data in the inversion requires correcting for biases present in the satellite observations, which is done in a two-step procedure. As a first step, bi-weekly corrections to prior flux fields are estimated for the period 2009 to 2015 from in-situ CO2 and CH4 data from the global observation networks included in the Obspack-GVP (CO2), WDCGG (CH4) and JR-STATION datasets. High-resolution prior fluxes were prepared for anthropogenic emissions (ODIAC and EDGAR), biomass burning (GFAS), and the terrestrial biosphere. The terrestrial biosphere flux was constructed using a vegetation mosaic map and separate simulations of CO2 fluxes by the VISIT model for each vegetation type present in a grid cell. The prior flux uncertainty for land is scaled proportionally to the monthly mean GPP from the MODIS product for CO2 and to EDGAR emissions for CH4. Use of the high-resolution transport leads to improved representation of the anthropogenic plumes often observed at continental continuous observation sites. OCO-2 observations are aggregated to 1-second averages to match the 0.1-degree resolution of the transport model. 
Before including satellite observations in the inversion, the monthly varying latitude-dependent bias is estimated by comparing satellite observations with column abundance simulated with surface fluxes optimized by surface inversion. The bias-corrected GOSAT and OCO-2 data are then used in the inversion together with ground-based observations. Application of the bias correction to satellite data reduces the difference between the flux estimates based on ground-based and satellite observations.
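The latitude-dependent bias estimation can be sketched as averaging the satellite-minus-model differences in latitude bands and subtracting the band mean from the retrievals. This is a simplified, hypothetical stand-in for the monthly procedure described above, with invented toy values:

```python
from collections import defaultdict

def latband_bias(satellite, model, lats, band_width=10.0):
    """Mean satellite-minus-model offset per latitude band."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for sat, mod, lat in zip(satellite, model, lats):
        band = int(lat // band_width)
        sums[band] += sat - mod
        counts[band] += 1
    return {b: sums[b] / counts[b] for b in sums}

# toy column-CO2 values (ppm): the satellite reads ~1.2 ppm high in
# the 0-10 degree band and is unbiased in the 20-30 degree band
sat_xco2 = [401.2, 401.3, 400.1, 400.0]
model_xco2 = [400.0, 400.1, 400.1, 400.0]
latitudes = [5.0, 8.0, 25.0, 28.0]

bias_by_band = latband_bias(sat_xco2, model_xco2, latitudes)
debiased = [s - bias_by_band[int(l // 10.0)]
            for s, l in zip(sat_xco2, latitudes)]
```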

  17. Adaptive History Biases Result from Confidence-Weighted Accumulation of past Choices

    PubMed Central

    2018-01-01

    Perceptual decision-making is biased by previous events, including the history of preceding choices: observers tend to repeat (or alternate) their judgments of the sensory environment more often than expected by chance. Computational models postulate that these so-called choice history biases result from the accumulation of internal decision signals across trials. Here, we provide psychophysical evidence for such a mechanism and its adaptive utility. Male and female human observers performed different variants of a challenging visual motion discrimination task near psychophysical threshold. In a first experiment, we decoupled categorical perceptual choices and motor responses on a trial-by-trial basis. Choice history bias was explained by previous perceptual choices, not motor responses, highlighting the importance of internal decision signals in action-independent formats. In a second experiment, observers performed the task in stimulus environments containing different levels of autocorrelation and providing no external feedback about choice correctness. Despite performing under overall high levels of uncertainty, observers adjusted both the strength and the sign of their choice history biases to these environments. When stimulus sequences were dominated by either repetitions or alternations, the individual degree of this adjustment of history bias was about as good a predictor of individual performance as individual perceptual sensitivity. The history bias adjustment scaled with two proxies for observers' confidence about their previous choices (accuracy and reaction time). Together, our results are consistent with the idea that action-independent, confidence-modulated decision variables are accumulated across choices in a flexible manner that depends on decision-makers' model of their environment. 
SIGNIFICANCE STATEMENT Decisions based on sensory input are often influenced by the history of one's preceding choices, manifesting as a bias to systematically repeat (or alternate) choices. We here provide support for the idea that such choice history biases arise from the context-dependent accumulation of a quantity referred to as the decision variable: the variable's sign dictates the choice and its magnitude the confidence about choice correctness. We show that choices are accumulated in an action-independent format and a context-dependent manner, weighted by the confidence about their correctness. This confidence-weighted accumulation of choices enables decision-makers to flexibly adjust their behavior to different sensory environments. The bias adjustment can be as important for optimizing performance as one's sensitivity to the momentary sensory input. PMID:29371318

  18. Adaptive History Biases Result from Confidence-weighted Accumulation of Past Choices.

    PubMed

    Braun, Anke; Urai, Anne E; Donner, Tobias H

    2018-01-25

    Perceptual decision-making is biased by previous events, including the history of preceding choices: Observers tend to repeat (or alternate) their judgments of the sensory environment more often than expected by chance. Computational models postulate that these so-called choice history biases result from the accumulation of internal decision signals across trials. Here, we provide psychophysical evidence for such a mechanism and its adaptive utility. Male and female human observers performed different variants of a challenging visual motion discrimination task near psychophysical threshold. In a first experiment, we decoupled categorical perceptual choices and motor responses on a trial-by-trial basis. Choice history bias was explained by previous perceptual choices, not motor responses, highlighting the importance of internal decision signals in action-independent formats. In a second experiment, observers performed the task in stimulus environments containing different levels of auto-correlation and providing no external feedback about choice correctness. Despite performing under overall high levels of uncertainty, observers adjusted both the strength and the sign of their choice history biases to these environments. When stimulus sequences were dominated by either repetitions or alternations, the individual degree of this adjustment of history bias was about as good a predictor of individual performance as individual perceptual sensitivity. The history bias adjustment scaled with two proxies for observers' confidence about their previous choices (accuracy and reaction time). Taken together, our results are consistent with the idea that action-independent, confidence-modulated decision variables are accumulated across choices in a flexible manner that depends on decision-makers' model of their environment. 
Significance statement: Decisions based on sensory input are often influenced by the history of one's preceding choices, manifesting as a bias to systematically repeat (or alternate) choices. We here provide support for the idea that such choice history biases arise from the context-dependent accumulation of a quantity referred to as the decision variable: the variable's sign dictates the choice and its magnitude the confidence about choice correctness. We show that choices are accumulated in an action-independent format and a context-dependent manner, weighted by the confidence about their correctness. This confidence-weighted accumulation of choices enables decision-makers to flexibly adjust their behavior to different sensory environments. The bias adjustment can be as important for optimizing performance as one's sensitivity to the momentary sensory input. Copyright © 2018 Braun et al.
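The postulated mechanism can be written as a one-line update rule: a leaky accumulator of signed choices, each weighted by a confidence proxy. The parameter values and functional form here are illustrative only, not the authors' fitted model:

```python
def update_bias(bias, choice, confidence, leak=0.9, gain=0.3):
    """One step of a leaky, confidence-weighted accumulation of signed
    choices (+1 = "up", -1 = "down"): high-confidence choices move the
    internal bias more than low-confidence ones, and the leak makes
    the influence of older choices decay across trials."""
    return leak * bias + gain * confidence * choice

# two confident "up" choices, one hesitant "up", one confident "down"
b = 0.0
for choice, confidence in [(+1, 1.0), (+1, 1.0), (+1, 0.2), (-1, 1.0)]:
    b = update_bias(b, choice, confidence)
# b > 0: a residual tendency to repeat the "up" judgment remains
```

A positive accumulated bias would then shift the next decision toward repeating "up", producing the repetition bias described above; flipping the sign of the gain would yield an alternation bias.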

  19. Can Regional Climate Models be used in the assessment of vulnerability and risk caused by extreme events?

    NASA Astrophysics Data System (ADS)

    Nunes, Ana

    2015-04-01

    Extreme meteorological events played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for the analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach, centered on a regional modeling system consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct South American current climate within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually provide improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions and providing accurate hydrometeorological variables to higher-resolution geomorphological models. Better representation of deep convection from intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and the estimation of terrain stability over complex topography. 
The reconstruction of past extreme events also supports the development of a decision-making system for natural and social disasters, helping to reduce their impacts. Numerical experiments using this regional modeling system successfully reproduced severe weather events in Brazil. Comparisons with the NCEP Climate Forecast System Reanalysis outputs were made at regional climate model resolutions of about 40 and 25 km.

  20. MR coil sensitivity inhomogeneity correction for plaque characterization in carotid arteries

    NASA Astrophysics Data System (ADS)

    Salvado, Olivier; Hillenbrand, Claudia; Suri, Jasjit; Wilson, David L.

    2004-05-01

    We are involved in a comprehensive program to characterize atherosclerotic disease using multiple MR images having different contrast mechanisms (T1W, T2W, PDW, magnetization transfer, etc.) of human carotid and animal model arteries. We use specially designed intravascular and surface array coils that give high signal-to-noise ratio but suffer from sensitivity inhomogeneity. With carotid surface coils, the challenges include: (1) a steep bias field with an 80% change; (2) the presence of nearby muscular structures lacking the high-frequency information needed to distinguish bias from anatomical features; (3) many confounding zero-valued voxels due to fat suppression, blood flow cancellation, or air, which are not subject to coil sensitivity; and (4) substantial noise. Bias was corrected using a modification of the adaptive fuzzy c-means method reported by Pham et al. (IEEE TMI, 18:738-752), whereby a bias field modeled as a mechanical membrane is iteratively improved until the cluster means no longer change. Because our images were noisy, we added a noise-reduction filtering step between iterations and used about 5 classes. In a digital phantom having a bias field measured from our MR system, variations across an area comparable to a carotid artery were reduced from 50% to <5% with processing. Human carotid images were qualitatively improved, and large regions of skeletal muscle were rendered relatively flat. Other commonly applied techniques failed to segment the images or introduced strong edge artifacts. Current evaluations include comparisons to bias as measured by a body coil in human MR images.
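The alternating structure of such bias-field estimators (classify, then re-estimate a smooth field, then re-classify) can be shown in a greatly simplified 1-D sketch. This is only the alternating idea with a moving-average smoother, not the membrane model of Pham et al., and all signal values are synthetic:

```python
import random
random.seed(3)

def estimate_bias_1d(signal, n_classes=2, iters=20, win=15):
    """Simplified 1-D bias-field estimation: alternately (a) assign
    each sample to its nearest class mean and (b) re-estimate a smooth
    multiplicative bias as a moving average of observed/class-mean
    ratios. The bias is normalized to mean 1 (scale is ambiguous)."""
    n = len(signal)
    bias = [1.0] * n
    lo, hi = min(signal), max(signal)
    means = [lo + k * (hi - lo) / (n_classes - 1) for k in range(n_classes)]
    for _ in range(iters):
        corrected = [s / b for s, b in zip(signal, bias)]
        labels = [min(range(n_classes), key=lambda k: abs(c - means[k]))
                  for c in corrected]
        for k in range(n_classes):          # update class means
            vals = [c for c, l in zip(corrected, labels) if l == k]
            if vals:
                means[k] = sum(vals) / len(vals)
        ratios = [s / means[l] for s, l in zip(signal, labels)]
        bias = [sum(ratios[max(0, i - win):i + win + 1])
                / len(ratios[max(0, i - win):i + win + 1]) for i in range(n)]
        mean_b = sum(bias) / n              # fix the scale ambiguity
        bias = [b / mean_b for b in bias]
    return bias

# synthetic image: two "tissue" intensities (1 and 2) under a ramp bias
n = 200
true_bias = [1.0 + 0.4 * i / (n - 1) for i in range(n)]
tissue = [1.0 if random.random() < 0.5 else 2.0 for _ in range(n)]
image = [t * b for t, b in zip(tissue, true_bias)]

est = estimate_bias_1d(image)
scale = sum(true_bias) / n      # the field is recoverable only up to scale
err = sum(abs(e - t / scale) for e, t in zip(est, true_bias)) / n
```

The estimate recovers the ramp's shape closely in the interior, with the largest residuals at the edges where the moving-average window is truncated.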

  1. Attribution of Extreme Rainfall Events in the South of France Using EURO-CORDEX Simulations

    NASA Astrophysics Data System (ADS)

    Luu, L. N.; Vautard, R.; Yiou, P.

    2017-12-01

    The Mediterranean region regularly undergoes episodes of intense precipitation in the fall season that exceed 300 mm a day. This study focuses on the role of climate change in the dynamics of the events that occur in the South of France. We used an ensemble of 10 EURO-CORDEX model simulations at two horizontal resolutions (EUR-11: 0.11° and EUR-44: 0.44°) for the attribution of extreme fall rainfall in the Cévennes mountain range (South of France). The biases of the simulations were corrected with a simple scaling adjustment and a quantile correction (CDFt). This produces five datasets, including EUR-44 and EUR-11 with and without scaling adjustment and CDFt-EUR-11, on which we test the impact of resolution and bias correction on the extremes. Those datasets, after pooling all models together, are fitted with a stationary Generalized Extreme Value distribution for several periods to estimate a climate change signal in the tail of the distribution of extreme rainfall in the Cévennes region. Those changes are then interpreted with a scaling model that links extreme rainfall with mean and maximum daily temperature. The results show that the higher-resolution simulations with bias adjustment provide a robust and confident increase in the intensity and likelihood of occurrence of autumn extreme rainfall in the area in the current climate in comparison with the historical climate. The exceedance probability of a 1-in-1000-year event in the historical climate may increase by a factor of 1.8 under the current climate, with a confidence interval of 0.4 to 5.3, according to the CDFt bias-adjusted EUR-11. The change in magnitude appears to follow the Clausius-Clapeyron relation, which indicates a 7% increase in rainfall per 1°C increase in temperature.
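The headline factor-of-1.8 figure is a probability ratio between two fitted extreme-value distributions. The calculation can be sketched with a Gumbel law (the GEV with zero shape parameter, chosen here for its closed form); the location shift and scale below are invented for illustration, not the study's fitted parameters:

```python
import math

def gumbel_exceedance(x, mu, sigma):
    """P(X > x) for a Gumbel law (GEV with zero shape parameter)."""
    return 1.0 - math.exp(-math.exp(-(x - mu) / sigma))

def return_level(T, mu, sigma):
    """The level exceeded on average once every T years."""
    return mu - sigma * math.log(-math.log(1.0 - 1.0 / T))

# hypothetical fitted parameters (illustrative numbers only):
# the location parameter shifts upward between the two climates
mu_hist, mu_curr, sigma = 120.0, 135.0, 40.0

x1000 = return_level(1000, mu_hist, sigma)   # historical 1-in-1000-yr level
prob_ratio = gumbel_exceedance(x1000, mu_curr, sigma) / (1.0 / 1000)
```

With these toy numbers the historical 1-in-1000-year level becomes roughly 1.45 times more probable in the shifted climate; the study's estimate of 1.8 (CI 0.4 to 5.3) is the same kind of ratio computed from the pooled GEV fits.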

  2. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization when estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator so that the particle swarm is optimized globally and does not fall into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that of the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
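A minimal sketch of the idea, demonstrated on the sphere function rather than an MR bias-field objective; the premature-convergence indicator (relative spread of current fitness values) and the inertia-weight schedule below are our illustrative choices, not the authors' formulas.

```python
import numpy as np

def adaptive_pso(f, dim=4, n=30, iters=200, seed=0):
    """Global-best PSO whose inertia weight rises when fitness spread collapses."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        fit = np.array([f(p) for p in x])
        better = fit < pval
        pbest[better], pval[better] = x[better], fit[better]
        if pval.min() < f(g):
            g = pbest[pval.argmin()].copy()
        # premature-convergence indicator: relative spread of current fitness
        spread = fit.std() / (abs(fit.mean()) + 1e-12)
        w = 0.4 + 0.5 * np.exp(-5.0 * spread)  # low spread -> more inertia, more exploration
        r1, r2 = rng.random((2, n, dim))
        v = np.clip(w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x), -2, 2)
        x = x + v
    return g, float(f(g))

best, val = adaptive_pso(lambda z: float(np.sum(z * z)))
print(val)
```

In the paper's setting, f would instead score a candidate set of Legendre polynomial coefficients by the quality of the resulting bias-field correction.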

  3. Implications of the methodological choices for hydrologic portrayals of climate change over the contiguous United States: Statistically downscaled forcing data and hydrologic models

    USGS Publications Warehouse

    Mizukami, Naoki; Clark, Martyn P.; Gutmann, Ethan D.; Mendoza, Pablo A.; Newman, Andrew J.; Nijssen, Bart; Livneh, Ben; Hay, Lauren E.; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    Continental-domain assessments of climate change impacts on water resources typically rely on statistically downscaled climate model outputs to force hydrologic models at a finer spatial resolution. This study examines the effects of four statistical downscaling methods [bias-corrected constructed analog (BCCA), bias-corrected spatial disaggregation applied at daily (BCSDd) and monthly scales (BCSDm), and asynchronous regression (AR)] on retrospective hydrologic simulations using three hydrologic models with their default parameters (the Community Land Model, version 4.0; the Variable Infiltration Capacity model, version 4.1.2; and the Precipitation–Runoff Modeling System, version 3.0.4) over the contiguous United States (CONUS). Biases of hydrologic simulations forced by statistically downscaled climate data relative to the simulation with observation-based gridded data are presented. Each statistical downscaling method produces different meteorological portrayals including precipitation amount, wet-day frequency, and the energy input (i.e., shortwave radiation), and their interplay affects estimations of precipitation partitioning between evapotranspiration and runoff, extreme runoff, and hydrologic states (i.e., snow and soil moisture). The analyses show that BCCA underestimates annual precipitation by as much as −250 mm, leading to unreasonable hydrologic portrayals over the CONUS for all models. Although the other three statistical downscaling methods produce a comparable precipitation bias ranging from −10 to 8 mm across the CONUS, BCSDd severely overestimates the wet-day fraction by up to 0.25, leading to different precipitation partitioning compared to the simulations with other downscaled data. Overall, the choice of downscaling method contributes to less spread in runoff estimates (by a factor of 1.5–3) than the choice of hydrologic model with use of the default parameters if BCCA is excluded.

  4. Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Libera, D.

    2017-12-01

Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, which makes calibrating and validating mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over-estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study proposes a multivariate bias-correction technique and compares it to a common technique for improving the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast, based on split-sample validation. The proposed approach is a dimension-reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes on the SWAT model simulated values. The common approach is a regression-based technique that uses ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation while simultaneously reducing individual biases. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of streamflow and loadings. These procedures were applied to three watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month-ahead forecasts of P & T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescales is also discussed.

  5. The usefulness of "corrected" body mass index vs. self-reported body mass index: comparing the population distributions, sensitivity, specificity, and predictive utility of three correction equations using Canadian population-based data.

    PubMed

    Dutton, Daniel J; McLaren, Lindsay

    2014-05-06

National data on body mass index (BMI), computed from self-reported height and weight, are readily available for many populations, including the Canadian population. Because self-reported weight is systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets that include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for the construction and evaluation of correction equations. We focused on adults aged 18-65 and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared self-reported and corrected BMI in terms of association with health outcomes using logistic regression. All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23-28 kg/m2 BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a population, then a correction of any kind included in this study is recommended. If the researcher is interested in using BMI as a predictor variable for modelling disease, then both self-reported and corrected BMI result in biased estimates of association.
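The evaluation logic can be illustrated on synthetic data (simulated numbers, not the CCHS data or the published Canadian equations): fit a linear correction from self-reported to measured BMI in a calibration sample, then compare obesity (BMI ≥ 30) sensitivity before and after correction.

```python
import numpy as np

rng = np.random.default_rng(2)
measured = rng.normal(27.0, 5.0, 5000)  # "measured" BMI
# self-report under-reports, more so at higher BMI, plus noise
self_rep = measured - 0.8 - 0.03 * (measured - 27.0) + rng.normal(0.0, 1.0, 5000)

# calibration: regress measured on self-reported, apply the fitted equation
slope, intercept = np.polyfit(self_rep, measured, 1)
corrected = intercept + slope * self_rep

def sens_spec(pred, true, cut=30.0):
    tp = np.sum((pred >= cut) & (true >= cut))
    fn = np.sum((pred < cut) & (true >= cut))
    tn = np.sum((pred < cut) & (true < cut))
    fp = np.sum((pred >= cut) & (true < cut))
    return tp / (tp + fn), tn / (tn + fp)

sens_self, spec_self = sens_spec(self_rep, measured)
sens_corr, spec_corr = sens_spec(corrected, measured)
print(sens_self, sens_corr)
```

Because self-report is shifted below the true value near the obesity cut-off, its sensitivity for detecting obesity is low; the fitted correction removes the systematic shift and recovers part of it, mirroring the sensitivity/specificity result reported above.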

  6. Simultaneous statistical bias correction of multiple PM2.5 species from a regional photochemical grid model

    EPA Science Inventory

In recent years environmental epidemiologists have begun utilizing regional-scale air quality computer models to predict ambient air pollution concentrations in health studies, instead of or in addition to monitoring data from central sites. The advantages of using such models i...

  7. Motion, identity and the bias toward agency

    PubMed Central

    Fields, Chris

    2014-01-01

    The well-documented human bias toward agency as a cause and therefore an explanation of observed events is typically attributed to evolutionary selection for a “social brain”. Based on a review of developmental and adult behavioral and neurocognitive data, it is argued that the bias toward agency is a result of the default human solution, developed during infancy, to the computational requirements of object re-identification over gaps in observation of more than a few seconds. If this model is correct, overriding the bias toward agency to construct mechanistic explanations of observed events requires structure-mapping inferences, implemented by the pre-motor action planning system, that replace agents with mechanisms as causes of unobserved changes in contextual or featural properties of objects. Experiments that would test this model are discussed. PMID:25191245

  8. Bias correction of daily satellite precipitation data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.

    2018-05-01

Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process is aimed at reducing the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results were evaluated across different seasons and elevation levels. The experimental results reveal that the scheme robustly reduces the bias in variance (around 100% reduction) and leads to reductions of the first- and second-quantile biases. However, the bias in the third quantile is only reduced during dry months. Across elevation levels, the performance of the bias correction process differs significantly only in the skewness indicators.
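A hedged sketch of the combination the abstract describes, with our own GA settings rather than the authors': a nonlinear power transform y = a·x^b whose two parameters are evolved so that the corrected satellite series matches the observed mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 8.0, 1000)  # "gauge" rainfall, mm/day
sat = (0.6 * obs**0.9 + rng.normal(0.0, 0.5, 1000)).clip(min=0.01)  # biased satellite

def cost(p):
    a, b = p
    y = a * sat**b
    return (y.mean() - obs.mean()) ** 2 + (y.std() - obs.std()) ** 2

pop = rng.uniform([0.5, 0.5], [3.0, 2.0], (40, 2))    # candidate (a, b) pairs
for _ in range(80):
    fitness = np.array([cost(p) for p in pop])
    parents = pop[np.argsort(fitness)[:10]]           # elitist selection
    picks = rng.integers(0, 10, (30, 2))
    children = parents[picks, [0, 1]]                 # gene-wise crossover
    children += rng.normal(0.0, 0.05, children.shape) # Gaussian mutation
    pop = np.vstack([parents, children])

best = min(pop, key=cost)
corrected = best[0] * sat ** best[1]
print(best, cost(best))
```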

  9. Alternative Approaches to Assessing Nonresponse Bias in Longitudinal Survey Estimates: An Application to Substance-Use Outcomes Among Young Adults in the United States

    PubMed Central

    West, Brady Thomas; McCabe, Sean Esteban

    2017-01-01

We evaluated alternative approaches to assessing and correcting for nonresponse bias in a longitudinal survey. We considered the changes in substance-use outcomes over a 3-year period among young adults aged 18–24 years (n = 5,199) in the United States, analyzing data from the National Epidemiologic Survey on Alcohol and Related Conditions. This survey collected a variety of substance-use information from a nationally representative sample of US adults in 2 waves: 2001–2002 and 2004–2005. We first considered nonresponse rates in the second wave as a function of key substance-use outcomes in wave 1. We then evaluated 5 alternative approaches designed to correct for nonresponse bias under different attrition mechanisms, including weighting adjustments, multiple imputation, selection models, and pattern-mixture models. Nonignorable attrition in a longitudinal survey can lead to bias in estimates of change in certain health behaviors over time, and only selected procedures enable analysts to assess the sensitivity of their inferences to different assumptions about the extent of nonignorability. We compared estimates based on these 5 approaches, and we suggest a road map for assessing the risk of nonresponse bias in longitudinal studies. We conclude with directions for future research in this area given the results of our evaluations. PMID:28338839
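One of the approaches named above, weighting adjustment, can be sketched on synthetic data (our construction, not the NESARC analysis): weight wave-2 respondents by the inverse of a response propensity estimated from a wave-1 outcome.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 20_000
heavy_use = rng.binomial(1, 0.3, n)          # wave-1 outcome, true prevalence 0.3
p_respond = 0.8 - 0.3 * heavy_use            # heavy users drop out of wave 2 more often
responded = rng.binomial(1, p_respond).astype(bool)

# naive wave-2 estimate of wave-1 prevalence is biased low
naive = heavy_use[responded].mean()

# propensity model from the wave-1 covariate; weight respondents by 1/propensity
prop = LogisticRegression().fit(heavy_use.reshape(-1, 1), responded)
w = 1.0 / prop.predict_proba(heavy_use[responded].reshape(-1, 1))[:, 1]
weighted = np.average(heavy_use[responded], weights=w)

print(naive, weighted)
```

The weighted estimate recovers the wave-1 prevalence because attrition here depends only on the observed covariate; the selection and pattern-mixture models named in the abstract address the nonignorable case this sketch deliberately avoids.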

  10. Estimating interevent time distributions from finite observation periods in communication networks

    NASA Astrophysics Data System (ADS)

    Kivelä, Mikko; Porter, Mason A.

    2015-11-01

A diverse variety of processes—including recurrent disease episodes, neuron firing, and communication patterns among humans—can be described using interevent time (IET) distributions. Many such processes are ongoing, although event sequences are only available during a finite observation window. Because the observation time window is more likely to begin or end during long IETs than during short ones, the analysis of such data is susceptible to a bias induced by the finite observation period. In this paper, we illustrate how this length bias arises and how it can be corrected without assuming any particular shape for the IET distribution. To do this, we model event sequences using stationary renewal processes, and we formulate simple heuristics for determining the severity of the bias. To illustrate our results, we focus on the example of empirical communication networks, which are temporal networks that are constructed from communication events. The IET distributions of such systems guide efforts to build models of human behavior, and the variance of IETs is very important for estimating the spreading rate of information in networks of temporal interactions. We analyze several well-known data sets from the literature, and we find that the resulting bias can lead to systematic underestimates of the variance in the IET distributions and that correcting for the bias can lead to qualitatively different results for the tails of the IET distributions.
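The window-boundary effect is easy to demonstrate with a toy construction (ours, not the paper's estimator): the interevent interval that contains a randomly placed observation boundary is, on average, longer than a typical interval, which is why boundaries preferentially censor long IETs.

```python
import numpy as np

rng = np.random.default_rng(4)
iets = rng.pareto(2.5, 500_000) + 1.0  # heavy-tailed interevent times
times = np.cumsum(iets)                # event times of a stationary-ish renewal process

# drop random "window boundaries" onto the timeline and record the length
# of the interevent interval each one lands in
starts = rng.uniform(0.0, times[-1], 2000)
idx = np.searchsorted(times, starts)
straddling = iets[idx]

print(iets.mean(), straddling.mean())  # the straddling intervals are longer on average
```

This is the inspection paradox: boundary-straddling intervals are sampled with probability proportional to their length, so restricting analysis to intervals fully inside the window discards long IETs disproportionately.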

  11. A Critical Meta-Analysis of Lens Model Studies in Human Judgment and Decision-Making

    PubMed Central

    Kaufmann, Esther; Reips, Ulf-Dietrich; Wittmann, Werner W.

    2013-01-01

Achieving accurate judgment (‘judgmental achievement’) is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges by linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have biased estimations (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regard to moderator variables). In the current study we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or the level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regard to the success of bootstrapping with psychometrically-corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. 
Comparison of the results of the psychometric meta-analysis with the results of a traditional meta-analysis (which only corrected for sampling error) indicated that artifact correction leads to (a) an increase in the values of the lens model components, (b) reduced heterogeneity between studies, and (c) greater success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and demonstrating the success of bootstrapping. PMID:24391781

  12. A critical meta-analysis of lens model studies in human judgment and decision-making.

    PubMed

    Kaufmann, Esther; Reips, Ulf-Dietrich; Wittmann, Werner W

    2013-01-01

Achieving accurate judgment ('judgmental achievement') is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges by linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have biased estimations (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regard to moderator variables). In the current study we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or the level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regard to the success of bootstrapping with psychometrically-corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. 
Comparison of the results of the psychometric meta-analysis with the results of a traditional meta-analysis (which only corrected for sampling error) indicated that artifact correction leads to (a) an increase in the values of the lens model components, (b) reduced heterogeneity between studies, and (c) greater success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and demonstrating the success of bootstrapping.

  13. Scale of reference bias and the evolution of health.

    PubMed

    Groot, Wim

    2003-09-01

The analysis of subjective measures of well-being, such as self-reports by individuals about their health status, is frequently hampered by the problem of scale of reference bias. A particular form of scale of reference bias is age norming. In this study we corrected for scale of reference bias by allowing for individual-specific effects in an equation for subjective health. A random effects ordered response model was used to analyze scale of reference bias in self-reported health measures. The results indicate that if we do not control for unobservable individual-specific effects, the response to a subjective health state measure suffers from age norming. Age norming can be controlled for by a random effects estimation technique using longitudinal data. Further, estimates are presented of the rate of depreciation of health. Finally, simulations of life expectancy indicate that the estimated model provides a reasonably good fit to the true life expectancy.

  14. The SDSS-DR12 large-scale cross-correlation of damped Lyman alpha systems with the Lyman alpha forest

    NASA Astrophysics Data System (ADS)

    Pérez-Ràfols, Ignasi; Font-Ribera, Andreu; Miralda-Escudé, Jordi; Blomqvist, Michael; Bird, Simeon; Busca, Nicolás; du Mas des Bourboux, Hélion; Mas-Ribas, Lluís; Noterdaeme, Pasquier; Petitjean, Patrick; Rich, James; Schneider, Donald P.

    2018-01-01

We present a measurement of the damped Ly α absorber (DLA) mean bias from the cross-correlation of DLAs and the Ly α forest, updating earlier results of Font-Ribera et al. (2012) with the final Baryon Oscillation Spectroscopic Survey data release and an improved method to address continuum fitting corrections. Our cross-correlation is well fitted by linear theory with the standard ΛCDM model, with a DLA bias of bDLA = 1.99 ± 0.11; a more conservative analysis, which removes DLAs in the Ly β forest and uses only the cross-correlation at r > 10 h^-1 Mpc, yields bDLA = 2.00 ± 0.19. This assumes the cosmological model from Planck Collaboration (2016) and the Ly α forest bias factors of Bautista et al. (2017), and includes only statistical errors obtained from bootstrap analysis. The main systematic errors arise from possible impurities and selection effects in the DLA catalogue and from uncertainties in the determination of the Ly α forest bias factors and a correction for effects of high column density absorbers. We find no dependence of the DLA bias on column density or redshift. The measured bias value corresponds to a host halo mass ∼4 × 10^11 h^-1 M⊙ if all DLAs were hosted in haloes of a similar mass. In a realistic model where host haloes over a broad mass range have a DLA cross-section Σ(M_h) ∝ M_h^α down to Mh > Mmin = 10^8.5 h^-1 M⊙, we find that α > 1 is required to have bDLA > 1.7, implying a steeper relation or higher value of Mmin than is generally predicted in numerical simulations of galaxy formation.

  15. Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events

    NASA Astrophysics Data System (ADS)

    McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.

    2015-12-01

Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions according to the skill scores defined by Perkins et al. (2013) for evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
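The bias-correction step can be illustrated with plain empirical quantile mapping, the family of distribution-mapping methods to which KDDM belongs (a sketch on synthetic gamma-distributed data, not the NARCCAP pipeline): map each model value to its quantile in the model climatology, then read off the observed value at the same quantile.

```python
import numpy as np

def quantile_map(model, obs, values):
    # empirical quantile of each value within the model climatology,
    # then the observed value at the same quantile
    q = np.searchsorted(np.sort(model), values) / len(model)
    return np.quantile(obs, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 5.0, 3000)                 # "observed" climatology
model = rng.gamma(2.0, 5.0, 3000) * 0.7 + 1.0   # biased model climatology

corrected = quantile_map(model, obs, model)
print(model.mean(), corrected.mean(), obs.mean())
```

KDDM differs in estimating the two CDFs with kernel densities rather than empirical steps, which smooths the transfer function; the tail refinements named above (binned or logspline density estimates, generalized Pareto tails) all target the same weakness of this baseline at extreme quantiles.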

  16. A bias correction for covariance estimators to improve inference with generalized estimating equations that use an unstructured correlation matrix.

    PubMed

    Westgate, Philip M

    2013-07-20

    Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Effect of Malmquist bias on correlation studies with IRAS data base

    NASA Technical Reports Server (NTRS)

    Verter, Frances

    1993-01-01

    The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. The linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B) as well as ratios of these quantities. The linear correlations for Malmquist bias are corrected using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method invented here in which each galaxy observation is weighted by its sampling volume. The results of correlation and regressions among the sample are significantly changed in the anticipated sense that the corrected correlation confidences are lower and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias eliminates the nonlinear rise in luminosity that has caused some authors to hypothesize additional components of FIR emission.
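The inverse-sampling-volume weighting described above can be sketched on a synthetic flux-limited sample (our numbers, not the IRAS data): each selected object is weighted by the inverse of the volume within which it would still pass the flux limit, so intrinsically faint objects, which the flux limit under-represents, are up-weighted.

```python
import numpy as np

rng = np.random.default_rng(6)
L = 10 ** rng.normal(10.0, 0.5, 20_000)   # luminosities (arbitrary units)
d = rng.uniform(1.0, 100.0, 20_000)       # distances
flux = L / (4 * np.pi * d**2)
fmin = np.quantile(flux, 0.8)             # survey flux limit
sel = flux >= fmin                        # the flux-limited sample

# max distance at which each selected object would still be detected
d_max = np.sqrt(L[sel] / (4 * np.pi * fmin))
w = 1.0 / d_max**3                        # inverse sampling volume (up to 4*pi/3)

raw_mean = np.log10(L[sel]).mean()
corr_mean = np.average(np.log10(L[sel]), weights=w)
print(raw_mean, corr_mean)
```

Because the flux-limited subsample is biased toward luminous objects, the volume-weighted mean sits below the unweighted one, which is the direction of the correction reported in the abstract (lower correlation confidences and shallower regression slopes).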

  18. Does Fat Suppression via Chemically Selective Saturation (CHESS) Affect R2*-MRI for Transfusional Iron Overload Assessment? A Clinical Evaluation at 1.5 and 3 Tesla

    PubMed Central

    Krafft, Axel J.; Loeffler, Ralf B.; Song, Ruitian; Bian, Xiao; McCarville, M. Beth; Hankins, Jane S.; Hillenbrand, Claudia M.

    2015-01-01

    Purpose Fat suppression (FS) via chemically selective saturation (CHESS) eliminates fat-water oscillations in multi-echo gradient echo (mGRE) R2*-MRI. However, for increasing R2* values as seen with increasing liver iron content (LIC), the water signal spectrally overlaps with the CHESS band, which may alter R2*. Here, we investigate the effect of CHESS on R2* and describe a heuristic correction for the observed CHESS-induced R2* changes. Methods Eighty patients (49/31 female/male, mean age: 18.3±11.7 years) with iron overload were scanned with a non-FS and a CHESS-FS mGRE sequence at 1.5T and 3T. Mean liver R2* values were evaluated using 3 published fitting approaches. Measured and model-corrected R2* values were compared and statistically analyzed. Results At 1.5T, CHESS led to a systematic R2* reduction (P<0.001 for all fitting algorithms) especially toward higher R2*. Our model described the observed changes well and reduced the CHESS-induced R2* bias after correction (linear regression slopes: 1.032/0.927/0.981). No CHESS-induced R2* reductions were found at 3T. Conclusion The CHESS-induced R2* bias at 1.5T needs to be considered when applying R2*-LIC biopsy calibrations for clinical LIC assessment which were established without FS at 1.5T. The proposed model corrects the R2* bias and could therefore improve clinical iron overload assessment based on linear R2*-LIC calibrations. PMID:26308155

  19. Does fat suppression via chemically selective saturation affect R2*-MRI for transfusional iron overload assessment? A clinical evaluation at 1.5T and 3T.

    PubMed

    Krafft, Axel J; Loeffler, Ralf B; Song, Ruitian; Bian, Xiao; McCarville, M Beth; Hankins, Jane S; Hillenbrand, Claudia M

    2016-08-01

Fat suppression (FS) via chemically selective saturation (CHESS) eliminates fat-water oscillations in multiecho gradient echo (mGRE) R2*-MRI. However, for increasing R2* values as seen with increasing liver iron content (LIC), the water signal spectrally overlaps with the CHESS band, which may alter R2*. We investigated the effect of CHESS on R2* and developed a heuristic correction for the observed CHESS-induced R2* changes. Eighty patients [female, n = 49; male, n = 31; mean age (± standard deviation), 18.3 ± 11.7 y] with iron overload were scanned with a non-FS and a CHESS-FS mGRE sequence at 1.5T and 3T. Mean liver R2* values were evaluated using three published fitting approaches. Measured and model-corrected R2* values were compared and statistically analyzed. At 1.5T, CHESS led to a systematic R2* reduction (P < 0.001 for all fitting algorithms) especially toward higher R2*. Our model described the observed changes well and reduced the CHESS-induced R2* bias after correction (linear regression slopes: 1.032/0.927/0.981). No CHESS-induced R2* reductions were found at 3T. The CHESS-induced R2* bias at 1.5T needs to be considered when applying R2*-LIC biopsy calibrations for clinical LIC assessment, which were established without FS at 1.5T. The proposed model corrects the R2* bias and could therefore improve clinical iron overload assessment based on linear R2*-LIC calibrations. Magn Reson Med 76:591-601, 2016. © 2015 Wiley Periodicals, Inc.

  20. Ascertainment correction for Markov chain Monte Carlo segregation and linkage analysis of a quantitative trait.

    PubMed

    Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E

    2007-09-01

Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to the allele frequencies, bias was also found in the estimation of the highest genotypic mean when the ascertainment threshold was higher than or close to the true value of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.

  1. Bias Correction of Satellite Precipitation Products (SPPs) using a User-friendly Tool: A Step in Enhancing Technical Capacity

    NASA Astrophysics Data System (ADS)

    Rushi, B. R.; Ellenburg, W. L.; Adams, E. C.; Flores, A.; Limaye, A. S.; Valdés-Pineda, R.; Roy, T.; Valdés, J. B.; Mithieu, F.; Omondi, S.

    2017-12-01

    SERVIR, a joint NASA-USAID initiative, works to build capacity in Earth observation technologies in developing countries for improved environmental decision making in the areas of weather and climate, water and disasters, food security, and land use/land cover. SERVIR partners with leading regional organizations in Eastern and Southern Africa, the Hindu Kush-Himalaya, the Mekong region, and West Africa to achieve its objectives. SERVIR develops hydrological applications to address specific needs articulated by key stakeholders, and daily rainfall estimates are a vital input for these applications. Satellite-derived rainfall is subject to systematic biases, which need to be corrected before it can be used for any hydrologic application such as real-time or seasonal forecasting. SERVIR and the SWAAT team at the University of Arizona have co-developed an open-source, user-friendly tool implementing rainfall bias correction approaches for SPPs. The bias correction tools were developed based on Linear Scaling and Quantile Mapping techniques. A set of SPPs, such as PERSIANN-CCS, TMPA-RT, and CMORPH, are bias corrected using Climate Hazards Group InfraRed Precipitation with Station (CHIRPS) data, which incorporates ground-based precipitation observations. The tool also contains a component to improve the monthly mean of CHIRPS using precipitation products of the Global Surface Summary of the Day (GSOD) database developed by the National Climatic Data Center (NCDC). The tool takes input from the command line, which makes it applicable on any operating platform without prior programming skills. This presentation will focus on this bias-correction tool for SPPs, including application scenarios.
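    The two techniques named above can be sketched in a few lines. This is a generic illustration assuming NumPy, not the SERVIR/SWAAT tool's actual code:

    ```python
    import numpy as np

    def linear_scale(sat, ref):
        """Linear scaling: rescale satellite precipitation so its mean
        matches the reference (e.g. CHIRPS) mean over the training period."""
        sat = np.asarray(sat, float)
        return sat * (np.mean(ref) / np.mean(sat))

    def quantile_map(sat, ref, sat_new=None, n_q=101):
        """Empirical quantile mapping: map each satellite value onto the
        reference distribution by matching CDF quantiles."""
        q = np.linspace(0.0, 1.0, n_q)
        sat_q = np.quantile(sat, q)   # satellite quantiles (training period)
        ref_q = np.quantile(ref, q)   # reference quantiles
        x = np.asarray(sat if sat_new is None else sat_new, float)
        # locate each value in the satellite CDF, read off the reference value
        return np.interp(x, sat_q, ref_q)
    ```

    For example, if the satellite consistently doubled the reference amounts, both methods would map the satellite series back onto the reference values.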

  2. Analysis of near-surface biases in ERA-Interim over the Canadian Prairies

    NASA Astrophysics Data System (ADS)

    Betts, Alan K.; Beljaars, Anton C. M.

    2017-09-01

    We quantify the biases in the diurnal cycle of temperature in ERA-Interim for both warm and cold season using hourly climate station data for four stations in Saskatchewan from 1979 to 2006. The warm season biases increase as opaque cloud cover decreases, and change substantially from April to October. The bias in mean temperature increases almost monotonically from small negative values in April to small positive values in the fall. Under clear skies, the bias in maximum temperature is of the order of -1°C in June and July, and -2°C in spring and fall; while the bias in minimum temperature increases almost monotonically from +1°C in spring to +2.5°C in October. The bias in the diurnal temperature range falls under clear skies from -2.5°C in spring to -5°C in fall. The cold season biases with surface snow have a different structure. The biases in maximum, mean and minimum temperature with a stable boundary layer reach +1°C, +2.6°C and +3°C respectively in January under clear skies. The cold season bias in diurnal range increases from about -1.8°C in the fall to positive values in March. These diurnal biases in 2 m temperature and their seasonal trends are consistent with a high bias in both the diurnal and seasonal amplitude of the model ground heat flux, and a warm season daytime bias resulting from the model fixed leaf area index. Our results can be used as bias corrections in agricultural modeling that use these reanalysis data, and also as a framework for understanding model biases.

  3. A review of bias flow liners for acoustic damping in gas turbine combustors

    NASA Astrophysics Data System (ADS)

    Lahiri, C.; Bake, F.

    2017-07-01

    The optimized design of bias flow liners is a key element in the development of low-emission combustion systems for modern gas turbines and aero-engines. Research on bias flow liners has a fairly long history, concerning both the parameter dependencies and the methods used to model the acoustic behaviour of bias flow liners under a variety of bias and grazing flow conditions. In order to establish an overview of the state of the art, this paper provides a comprehensive review of the published research on bias flow liners and modelling approaches, with an extensive study of the most relevant parameters determining the acoustic behaviour of these liners. The paper starts with a historical description of available investigations aimed at characterizing the bias flow absorption principle. This chronological compendium is extended by the recent and ongoing developments in this field. Next, the fundamental acoustic property of a bias flow liner, the wall impedance, is introduced, and the different derivations and formulations of this impedance underlying the different published model descriptions are explained and compared. Finally, a parametric study reveals the most relevant parameters for the acoustic damping behaviour of bias flow liners and how they are reflected in the various model representations. Although the general trend of the investigated acoustic behaviour is captured fairly well by the different models for a certain range of parameters, in the transition region between the resonance-dominated and the purely bias flow related regime all models fail to predict the damping correctly. This appears to be connected to the proper implementation of the reactance as a function of bias flow Mach number.

  4. The Influence of Dimensionality on Estimation in the Partial Credit Model.

    ERIC Educational Resources Information Center

    De Ayala, R. J.

    1995-01-01

    The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error bias, Pearson product-moment correlations, standardized root mean squared differences, standardized differences between means, and descriptive statistics…

  5. Survey Response-Related Biases in Contingent Valuation: Concepts, Remedies, and Empirical Application to Valuing Aquatic Plant Management

    Treesearch

    Mark L. Messonnier; John C. Bergstrom; Chrisopher M. Cornwell; R. Jeff Teasley; H. Ken Cordell

    2000-01-01

    Simple nonresponse and selection biases that may occur in survey research such as contingent valuation applications are discussed and tested. Correction mechanisms for these types of biases are demonstrated. Results indicate the importance of testing and correcting for unit and item nonresponse bias in contingent valuation survey data. When sample nonresponse and...

  6. Statistical Properties of Global Precipitation in the NCEP GFS Model and TMPA Observations for Data Assimilation

    NASA Technical Reports Server (NTRS)

    Lien, Guo-Yuan; Kalnay, Eugenia; Miyoshi, Takemasa; Huffman, George J.

    2016-01-01

    Assimilation of satellite precipitation data into numerical models presents several difficulties, with two of the most important being the non-Gaussian error distributions associated with precipitation, and large model and observation errors. As a result, improving the model forecast beyond a few hours by assimilating precipitation has been found to be difficult. To identify the challenges and propose practical solutions to assimilation of precipitation, statistics are calculated for global precipitation in a low-resolution NCEP Global Forecast System (GFS) model and the TRMM Multisatellite Precipitation Analysis (TMPA). The samples are constructed using the same model with the same forecast period, observation variables, and resolution as in the follow-on GFS-TMPA precipitation assimilation experiments presented in the companion paper. The statistical results indicate that the T62 and T126 GFS models generally have positive bias in precipitation compared to the TMPA observations, and that the simulation of the marine stratocumulus precipitation is not realistic in the T62 GFS model. It is necessary to apply to precipitation either the commonly used logarithm transformation or the newly proposed Gaussian transformation to obtain a better relationship between the model and observational precipitation. When the Gaussian transformations are separately applied to the model and observational precipitation, they serve as a bias correction that corrects the amplitude-dependent biases. In addition, using a spatially and/or temporally averaged precipitation variable, such as the 6-h accumulated precipitation, should be advantageous for precipitation assimilation.
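    The two transformations mentioned can be sketched as follows. This is a generic rank-based Gaussian anamorphosis and a simple log transform (the offset `eps` is illustrative), not the authors' exact implementation, which in particular handles ties at zero precipitation specially:

    ```python
    import math
    from statistics import NormalDist

    def gaussian_transform(x):
        """Rank-based Gaussian transform (Gaussian anamorphosis): replace each
        value by the standard-normal quantile of its empirical-CDF position.
        Real precipitation work needs special handling of ties at zero."""
        n = len(x)
        order = sorted(range(n), key=lambda i: x[i])
        ranks = [0] * n
        for r, i in enumerate(order):
            ranks[i] = r + 1                      # ranks 1..n
        nd = NormalDist()
        return [nd.inv_cdf((r - 0.5) / n) for r in ranks]

    def log_transform(p, eps=0.1):
        """Common logarithm transform for precipitation; eps avoids log(0)."""
        return [math.log(v + eps) for v in p]
    ```

    The transform preserves ordering while forcing the marginal distribution toward a standard normal, which is what makes Gaussian-assuming assimilation machinery behave better.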

  7. Correction of Selection Bias in Survey Data: Is the Statistical Cure Worse Than the Bias?

    PubMed

    Hanley, James A

    2017-04-01

    In previous articles in the American Journal of Epidemiology (Am J Epidemiol. 2013;177(5):431-442) and American Journal of Public Health (Am J Public Health. 2013;103(10):1895-1901), Masters et al. reported age-specific hazard ratios for the contrasts in mortality rates between obesity categories. They corrected the observed hazard ratios for selection bias caused by what they postulated was the nonrepresentativeness of the participants in the National Health Interview Study that increased with age, obesity, and ill health. However, it is possible that their regression approach to remove the alleged bias has not produced, and in general cannot produce, sensible hazard ratio estimates. First, we must consider how many nonparticipants there might have been in each category of obesity and of age at entry and how much higher the mortality rates would have to be in nonparticipants than in participants in these same categories. What plausible set of numerical values would convert the ("biased") decreasing-with-age hazard ratios seen in the data into the ("unbiased") increasing-with-age ratios that they computed? Can these values be encapsulated in (and can sensible values be recovered from) one additional internal variable in a regression model? Second, one must examine the age pattern of the hazard ratios that have been adjusted for selection. Without the correction, the hazard ratios are attenuated with increasing age. With it, the hazard ratios at older ages are considerably higher, but those at younger ages are well below one. Third, one must test whether the regression approach suggested by Masters et al. would correct the nonrepresentativeness that increased with age and ill health that I introduced into real and hypothetical data sets. I found that the approach did not recover the hazard ratio patterns present in the unselected data sets: the corrections overshot the target at older ages and undershot it at lower ages.

  8. Correcting length-frequency distributions for imperfect detection

    USGS Publications Warehouse

    Breton, André R.; Hawkins, John A.; Winkelman, Dana L.

    2013-01-01

    Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To account for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed from the population on all other passes. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentages of unadjusted counts below the lower 95% confidence interval of our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. 
Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data are available.
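    At its core, partitioning an abundance estimate into length classes amounts to a Horvitz-Thompson-style adjustment: divide each length class count by its estimated capture probability. A minimal sketch, with a hypothetical logistic capture-probability curve standing in for the fitted Huggins model:

    ```python
    import math

    def adjust_length_frequency(counts, p_capture):
        """Horvitz-Thompson-style adjustment: divide the raw count in each
        length class by that class's estimated capture probability."""
        return [c / p for c, p in zip(counts, p_capture)]

    def logistic_p(length_mm, b0=-4.0, b1=0.02):
        """Hypothetical logistic capture probability increasing with length;
        in the study this role is played by the fitted Huggins model."""
        return 1.0 / (1.0 + math.exp(-(b0 + b1 * length_mm)))
    ```

    Because small fish are captured with low probability, their adjusted counts grow the most, which is exactly the pattern of bias the study reports for the 100-110 mm class.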

  9. On land-use modeling: A treatise of satellite imagery data and misclassification error

    NASA Astrophysics Data System (ADS)

    Sandler, Austin M.

    Recent availability of satellite-based land-use data sets, including data sets with contiguous spatial coverage over large areas, relatively long temporal coverage, and fine-scale land cover classifications, is providing new opportunities for land-use research. However, care must be taken when working with these datasets due to misclassification error, which causes inconsistent parameter estimates in the discrete choice models typically used to model land-use. I therefore adapt the empirical correction methods developed for other contexts (e.g., epidemiology) so that they can be applied to land-use modeling. I then use a Monte Carlo simulation, and an empirical application using actual satellite imagery data from the Northern Great Plains, to compare the results of a traditional model ignoring misclassification to those from models accounting for misclassification. Results from both the simulation and application indicate that ignoring misclassification will lead to biased results. Even seemingly insignificant levels of misclassification error (e.g., 1%) result in biased parameter estimates, which alter marginal effects enough to affect policy inference. At the levels of misclassification typical in current satellite imagery datasets (e.g., as high as 35%), ignoring misclassification can lead to systematically erroneous land-use probabilities and substantially biased marginal effects. The correction methods I propose, however, generate consistent parameter estimates and therefore consistent estimates of marginal effects and predicted land-use probabilities.
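    The standard correction logic for misclassification (as adapted from fields such as epidemiology) links observed and true class shares through a misclassification matrix. A minimal sketch assuming the matrix is known and invertible, using NumPy; the numbers in the usage below are illustrative, not from the paper:

    ```python
    import numpy as np

    def true_shares(p_obs, M):
        """Recover true land-use shares from observed (misclassified) shares.
        M[j, k] = P(observed class j | true class k), so p_obs = M @ p_true
        and, when M is known and invertible, p_true = M^{-1} @ p_obs."""
        return np.linalg.solve(np.asarray(M, float), np.asarray(p_obs, float))
    ```

    For instance, with a 1-5% misclassification rate between two classes, `true_shares` undoes the mixing that the observed shares have suffered; the paper's likelihood-based correction applies the same matrix inside the discrete choice estimator rather than after the fact.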

  10. Bias correction of precipitation data and its effects on aridity and drought assessment in China over 1961-2015.

    PubMed

    Yao, Ning; Li, Yi; Li, Na; Yang, Daqing; Ayantobo, Olusola Olaitan

    2018-10-15

    The accuracy of gauge-measured precipitation (Pm) affects drought assessment, since drought severity changes due to precipitation bias correction. This research investigates how drought severity changes as the result of bias-corrected precipitation (Pc) using Erinc's index Im and the standardized precipitation evapotranspiration index (SPEI). Daily and monthly Pc values at 552 sites in China were determined using daily Pm and wind speed and air temperature data over 1961-2015. Pc-based Im values were generally larger than Pm-based Im for most sub-regions in China. The increased Pc and Pc-based Im values indicated wetter climate conditions than previously reported for China. After precipitation bias correction, climate types changed, e.g., 20 sites from severe-arid to arid, and 11 sites from arid to semi-arid. However, the changes in SPEI due to precipitation bias correction were less obvious, because the standardized index SPEI removes the effects of mean precipitation values. In conclusion, precipitation bias in different sub-regions of China changed the spatial and temporal characteristics of drought assessment. Copyright © 2018 Elsevier B.V. All rights reserved.
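    Gauge bias corrections of this kind typically divide the measured amount by a catch ratio that decreases with wind speed and depends on precipitation phase, inferred from air temperature. A minimal sketch; the catch-ratio coefficients and the 2 °C rain/snow threshold are illustrative assumptions, not the study's fitted values:

    ```python
    def correct_precip(p_measured_mm, wind_ms, t_air_c):
        """Undercatch correction sketch: Pc = Pm / CR, where the catch ratio
        CR (in (0, 1]) decreases with wind speed and is lower for snow than
        for rain. Coefficients and phase threshold are illustrative only."""
        if t_air_c > 2.0:                      # rain
            cr = max(0.6, 1.0 - 0.02 * wind_ms)
        else:                                  # snow: stronger undercatch
            cr = max(0.3, 1.0 - 0.08 * wind_ms)
        return p_measured_mm / cr
    ```

    Since CR ≤ 1, corrected precipitation is always at least the measured amount, which is why the corrected Pc-based climate indices in the abstract point to wetter conditions.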

  11. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    PubMed

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.

  12. Improved model for correcting the ionospheric impact on bending angle in radio occultation measurements

    NASA Astrophysics Data System (ADS)

    Angling, Matthew J.; Elvidge, Sean; Healy, Sean B.

    2018-04-01

    The standard approach to remove the effects of the ionosphere from neutral atmosphere GPS radio occultation measurements is to estimate a corrected bending angle from a combination of the L1 and L2 bending angles. This approach is known to result in systematic errors, and an extension to the standard ionospheric correction has been proposed that depends on the squared L1 / L2 bending angle difference and a scaling term (κ). The variation of κ with height, time, season, location and solar activity (i.e. the F10.7 flux) has been investigated by applying a 1-D bending angle operator to electron density profiles provided by a monthly median ionospheric climatology model. As expected, the residual bending angle is well correlated (negatively) with the vertical total electron content (TEC). κ is more strongly dependent on the solar zenith angle, indicating that the TEC-dependent component of the residual error is effectively modelled by the squared L1 / L2 bending angle difference term in the correction. The residual error from the ionospheric correction is likely to be a major contributor to the overall error budget of neutral atmosphere retrievals between 40 and 80 km. Over this height range κ is approximately linear with height. A simple κ model has also been developed. It is independent of ionospheric measurements, but incorporates geophysical dependencies (i.e. solar zenith angle, solar flux, altitude). The global mean error (i.e. bias) and the standard deviation of the residual errors are reduced from -1.3×10-8 rad and 2.2×10-8 rad for the uncorrected case to -2.2×10-10 rad and 2.0×10-9 rad, respectively, for the corrections using the κ model. Although a fixed scalar κ also reduces bias for the global average, the selected value of κ (14 rad-1) is only appropriate for a small band of locations around the solar terminator. 
In the daytime, the scalar κ is consistently too high and this results in an overcorrection of the bending angles and a positive bending angle bias. Similarly, in the nighttime, the scalar κ is too low. However, in this case, the bending angles are already small and the impact of the choice of κ is less pronounced.
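    The correction discussed can be written out explicitly. This is the standard form from the radio occultation literature, shown here as a sketch consistent with the abstract; symbols are assumed (α1, α2 the L1/L2 bending angles at impact parameter a; f1, f2 the carrier frequencies):

    ```latex
    % standard dual-frequency linear combination of bending angles
    \alpha_{\mathrm{LC}}(a) = \frac{f_1^2\,\alpha_1(a) - f_2^2\,\alpha_2(a)}{f_1^2 - f_2^2}

    % extended correction with the scaling term kappa
    \alpha_{c}(a) = \alpha_{\mathrm{LC}}(a) + \kappa(a)\,\bigl[\alpha_1(a) - \alpha_2(a)\bigr]^2
    ```

    The κ model in the abstract supplies κ(a) from geophysical predictors (solar zenith angle, solar flux, altitude) instead of from ionospheric measurements.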

  13. Potassium-based algorithm allows correction for the hematocrit bias in quantitative analysis of caffeine and its major metabolite in dried blood spots.

    PubMed

    De Kesel, Pieter M M; Capiau, Sara; Stove, Veronique V; Lambert, Willy E; Stove, Christophe P

    2014-10-01

    Although dried blood spot (DBS) sampling is increasingly receiving interest as a potential alternative to traditional blood sampling, the impact of hematocrit (Hct) on DBS results is limiting its final breakthrough in routine bioanalysis. To predict the Hct of a given DBS, potassium (K(+)) proved to be a reliable marker. The aim of this study was to evaluate whether application of an algorithm, based upon predicted Hct or K(+) concentrations as such, allowed correction for the Hct bias. Using validated LC-MS/MS methods, caffeine, chosen as a model compound, was determined in whole blood and corresponding DBS samples with a broad Hct range (0.18-0.47). A reference subset (n = 50) was used to generate an algorithm based on K(+) concentrations in DBS. Application of the developed algorithm on an independent test set (n = 50) alleviated the assay bias, especially at lower Hct values. Before correction, differences between DBS and whole blood concentrations ranged from -29.1 to 21.1%. The mean difference, as obtained by Bland-Altman comparison, was -6.6% (95% confidence interval (CI), -9.7 to -3.4%). After application of the algorithm, differences between corrected and whole blood concentrations lay between -19.9 and 13.9% with a mean difference of -2.1% (95% CI, -4.5 to 0.3%). The same algorithm was applied to a separate compound, paraxanthine, which was determined in 103 samples (Hct range, 0.17-0.47), yielding similar results. In conclusion, a K(+)-based algorithm allows correction for the Hct bias in the quantitative analysis of caffeine and its metabolite paraxanthine.

  14. Observing atmospheric formaldehyde (HCHO) from space: validation and intercomparison of six retrievals from four satellites (OMI, GOME2A, GOME2B, OMPS) with SEAC4RS aircraft observations over the Southeast US

    PubMed Central

    Zhu, Lei; Jacob, Daniel J.; Kim, Patrick S.; Fisher, Jenny A.; Yu, Karen; Travis, Katherine R.; Mickley, Loretta J.; Yantosca, Robert M.; Sulprizio, Melissa P.; De Smedt, Isabelle; Abad, Gonzalo Gonzalez; Chance, Kelly; Li, Can; Ferrare, Richard; Fried, Alan; Hair, Johnathan W.; Hanisco, Thomas F.; Richter, Dirk; Scarino, Amy Jo; Walega, James; Weibring, Petter; Wolfe, Glenn M.

    2018-01-01

    Formaldehyde (HCHO) column data from satellites are widely used as a proxy for emissions of volatile organic compounds (VOCs) but validation of the data has been extremely limited. Here we use highly accurate HCHO aircraft observations from the NASA SEAC4RS campaign over the Southeast US in August–September 2013 to validate and intercompare six retrievals of HCHO columns from four different satellite instruments (OMI, GOME2A, GOME2B and OMPS) and three different research groups. The GEOS-Chem chemical transport model is used as a common intercomparison platform. All retrievals feature a HCHO maximum over Arkansas and Louisiana, consistent with the aircraft observations and reflecting high emissions of biogenic isoprene. The retrievals are also interconsistent in their spatial variability over the Southeast US (r=0.4–0.8 on a 0.5°×0.5° grid) and in their day-to-day variability (r=0.5–0.8). However, all retrievals are biased low in the mean by 20–51%, which would lead to corresponding bias in estimates of isoprene emissions from the satellite data. The smallest bias is for OMI-BIRA, which has high corrected slant columns relative to the other retrievals and low scattering weights in its air mass factor (AMF) calculation. OMI-BIRA has systematic error in its assumed vertical HCHO shape profiles for the AMF calculation and correcting this would eliminate its bias relative to the SEAC4RS data. Our results support the use of satellite HCHO data as a quantitative proxy for isoprene emission after correction of the low mean bias. There is no evident pattern in the bias, suggesting that a uniform correction factor may be applied to the data until better understanding is achieved. PMID:29619044

  15. An Iterative, Geometric, Tilt Correction Method for Radiation and Albedo Observed by Automatic Weather Stations on Snow-Covered Surfaces: Application to Greenland

    NASA Astrophysics Data System (ADS)

    Wang, W.; Zender, C. S.; van As, D.; Smeets, P.; van den Broeke, M.

    2015-12-01

    Surface melt and mass loss of the Greenland Ice Sheet may play crucial roles in global climate change due to their positive feedbacks and large fresh water storage. With few other regular meteorological observations available in this extreme environment, measurements from Automatic Weather Stations (AWS) are the primary data source for surface energy budget studies, and for validating satellite observations and model simulations. However, station tilt, due to surface melt and compaction, results in considerable biases in the radiation and thus albedo measurements by AWS. In this study, we identify the tilt-induced biases in the climatology of surface radiative flux and albedo, and then correct them based on geometrical principles. Over all the AWS from the Greenland Climate Network (GC-Net), the Kangerlussuaq transect (K-transect) and the Programme for Monitoring of the Greenland Ice Sheet (PROMICE), only ~15% of clear days have the correct solar noon time, with the largest bias being 3 hours. Absolute hourly biases in the magnitude of surface insolation can reach up to 200 W/m2, with daily averages exceeding 100 W/m2. The biases are larger in the accumulation zone due to the systematic tilt at each station, although variabilities of tilt angles are larger in the ablation zone. Averaged over the whole Greenland Ice Sheet in the melting season, the absolute bias in insolation is ~23 W/m2, enough to melt 0.51 m snow water equivalent. We estimate the tilt angles and their directions by comparing the simulated insolation at a horizontal surface with the observed insolation by these tilted AWS under clear-sky conditions. Our correction reduces the RMSE against satellite measurements and reanalysis by ~30 W/m2 relative to the uncorrected data, with correlation coefficients over 0.95 for both references. The corrected diurnal changes of albedo are smoother, with consistent semi-smiling patterns (see Fig. 1). 
The seasonal cycles and annual variabilities of albedo are in better agreement with previous studies (see Figs. 2 and 3). The consistent tilt-corrected shortwave radiation dataset derived here will provide better observations and validations for surface energy budget studies on the Greenland Ice Sheet, including albedo variation, surface melt simulations and cloud radiative forcing estimates.
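    For the clear-sky conditions used in the estimation, the geometric core of such a tilt correction is the ratio of the cosine of the solar zenith angle to the cosine of the beam's incidence angle on the tilted sensor. A minimal direct-beam sketch; the paper's iterative method also estimates the tilt itself and handles diffuse radiation, which is omitted here:

    ```python
    import math

    def tilt_correct(s_obs, sza_deg, saz_deg, tilt_deg, tilt_az_deg):
        """Correct clear-sky direct-beam insolation measured by a tilted
        sensor back to a horizontal surface:
        S_h = S_obs * cos(sza) / cos(incidence),
        with the incidence angle from solar and sensor geometry."""
        sza, saz = math.radians(sza_deg), math.radians(saz_deg)
        beta, paz = math.radians(tilt_deg), math.radians(tilt_az_deg)
        cos_inc = (math.cos(sza) * math.cos(beta)
                   + math.sin(sza) * math.sin(beta) * math.cos(saz - paz))
        return s_obs * math.cos(sza) / cos_inc
    ```

    A sensor tilted toward the sun over-reads the horizontal insolation, so the correction reduces its value; tilted away, the correction increases it.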

  16. Testing the Hydrological Coherence of High-Resolution Gridded Precipitation and Temperature Data Sets

    NASA Astrophysics Data System (ADS)

    Laiti, L.; Mallucci, S.; Piccolroaz, S.; Bellin, A.; Zardi, D.; Fiori, A.; Nikulin, G.; Majone, B.

    2018-03-01

    Assessing the accuracy of gridded climate data sets is highly relevant to climate change impact studies, since evaluation, bias correction, and statistical downscaling of climate models commonly use these products as reference. Among all impact studies, those addressing hydrological fluxes are the most affected by the errors and biases plaguing these data. This paper introduces a framework, coined the Hydrological Coherence Test (HyCoT), for assessing the hydrological coherence of gridded data sets with hydrological observations. HyCoT provides a means of excluding meteorological forcing data sets not complying with observations, as a function of the particular goal at hand. The proposed methodology allows falsifying the hypothesis that a given data set is coherent with hydrological observations on the basis of the performance of hydrological modeling, measured by a metric selected by the modeler. HyCoT is demonstrated in the Adige catchment (southeastern Alps, Italy) for streamflow analysis, using a distributed hydrological model. The comparison covers the period 1989-2008 and includes five gridded daily meteorological data sets: E-OBS, MSWEP, MESAN, APGD, and ADIGE. The analysis highlights that APGD and ADIGE, the data sets with the highest effective resolution, display similar spatiotemporal precipitation patterns and produce the largest hydrological efficiency indices. Lower performances are observed for E-OBS, MESAN, and MSWEP, especially in small catchments. HyCoT reveals deficiencies in the representation of spatiotemporal patterns of gridded climate data sets, which cannot be corrected by simply rescaling the meteorological forcing fields, as is often done in bias correction of climate model outputs. We recommend this framework for assessing the hydrological coherence of gridded data sets to be used in large-scale hydroclimatic studies.

  17. Estimation of an Occupational Choice Model when Occupations Are Misclassified

    ERIC Educational Resources Information Center

    Sullivan, Paul

    2009-01-01

    This paper develops an empirical occupational choice model that corrects for misclassification in occupational choices and measurement error in occupation-specific work experience. The model is used to estimate the extent of measurement error in occupation data and quantify the bias that results from ignoring measurement error in occupation codes…

  18. An accurate filter loading correction is essential for assessing personal exposure to black carbon using an Aethalometer.

    PubMed

    Good, Nicholas; Mölter, Anna; Peel, Jennifer L; Volckens, John

    2017-07-01

    The AE51 micro-Aethalometer (microAeth) is a popular and useful tool for assessing personal exposure to particulate black carbon (BC). However, few users of the AE51 are aware that its measurements are biased low (by up to 70%) due to the accumulation of BC on the filter substrate over time; previous studies of personal black carbon exposure are likely to have suffered from this bias. Although methods to correct for bias in micro-Aethalometer measurements of particulate black carbon have been proposed, these methods have not been verified in the context of personal exposure assessment. Here, five Aethalometer loading correction equations based on published methods were evaluated. Laboratory-generated aerosols of varying black carbon content (ammonium sulfate, Aquadag and NIST diesel particulate matter) were used to assess the performance of these methods. Filters from a personal exposure assessment study were also analyzed to determine how the correction methods performed for real-world samples. Standard correction equations produced correction factors with root mean square errors of 0.10 to 0.13 and mean bias within ±0.10. An optimized correction equation is also presented, along with sampling recommendations for minimizing bias when assessing personal exposure to BC using the AE51 micro-Aethalometer.
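    Loading corrections of this family typically scale the raw reading upward with filter attenuation (ATN). A minimal sketch of the common linear (Virkkula-type) form; the value of k is illustrative and would in practice be fitted per aerosol type, as the abstract's optimized equation does:

    ```python
    def correct_loading(bc_raw, atn, k=0.004):
        """Linear (Virkkula-type) filter-loading correction:
        BC_corrected = (1 + k * ATN) * BC_raw, where ATN is the filter
        attenuation reported by the instrument. k = 0.004 is illustrative."""
        return (1.0 + k * atn) * bc_raw
    ```

    As ATN grows over a sampling session, the uncorrected reading is biased increasingly low, which is the up-to-70% underestimate the abstract warns about.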

  19. Ensemble and Bias-Correction Techniques for Air-Quality Model Forecasts of Surface O3 and PM2.5 during the TEXAQS-II Experiment of 2006

    EPA Science Inventory

    Several air quality forecasting ensembles were created from seven models, running in real-time during the 2006 Texas Air Quality (TEXAQS-II) experiment. These multi-model ensembles incorporated a diverse set of meteorological models, chemical mechanisms, and emission inventories...

  20. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and the FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. 
Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536

  1. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    PubMed

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. The breast densities measured with the three methods were then compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and the FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction.
The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.
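
    The fuzzy c-means step used by both pipelines above can be sketched in a few lines. The sketch below runs FCM on synthetic 1-D intensities standing in for fatty and fibroglandular voxels; the two-cluster setup, the fuzzifier m = 2, and the intensity values are illustrative assumptions, not parameters from the study:

```python
import random

def fcm_1d(values, k=2, m=2.0, iters=50):
    """Fuzzy c-means on 1-D intensities: alternate Bezdek's two update
    rules for memberships u and cluster centers."""
    centers = random.sample(values, k)
    for _ in range(iters):
        # membership of point x in cluster i: inverse-distance weighting
        u = []
        for x in values:
            d = [abs(x - c) + 1e-12 for c in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(k)) for i in range(k)])
        # center update: fuzzy-membership-weighted mean of the data
        centers = [sum(u[n][i] ** m * values[n] for n in range(len(values))) /
                   sum(u[n][i] ** m for n in range(len(values)))
                   for i in range(k)]
    return sorted(centers)

random.seed(0)
# synthetic "fatty" vs "fibroglandular" voxel intensities
data = ([random.gauss(100, 5) for _ in range(200)] +
        [random.gauss(200, 5) for _ in range(200)])
centers = fcm_1d(data)  # two centers, near 100 and 200
```

    A residual bias field would show up here as a smooth multiplicative drift in `data`, which is exactly what makes FCM on raw images unreliable and motivates correcting the field first.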

  2. Correcting Memory Improves Accuracy of Predicted Task Duration

    ERIC Educational Resources Information Center

    Roy, Michael M.; Mitten, Scott T.; Christenfeld, Nicholas J. S.

    2008-01-01

    People are often inaccurate in predicting task duration. The memory bias explanation holds that this error is due to people having incorrect memories of how long previous tasks have taken, and these biased memories cause biased predictions. Therefore, the authors examined the effect on increasing predictive accuracy of correcting memory through…

  3. Bayesian Correction for Misclassification in Multilevel Count Data Models.

    PubMed

    Nelson, Tyler; Song, Joon Jin; Chin, Yoo-Mi; Stamey, James D

    2018-01-01

    Covariate misclassification is well known to yield biased estimates in single level regression models. The impact on hierarchical count models has been less studied. A fully Bayesian approach to modeling both the misclassified covariate and the hierarchical response is proposed. Models with a single diagnostic test and with multiple diagnostic tests are considered. Simulation studies show the ability of the proposed model to appropriately account for the misclassification by reducing bias and improving performance of interval estimators. A real data example further demonstrated the consequences of ignoring the misclassification. Ignoring misclassification yielded a model that indicated there was a significant, positive impact on the number of children of females who observed spousal abuse between their parents. When the misclassification was accounted for, the relationship switched to negative, but not significant. Ignoring misclassification in standard linear and generalized linear models is well known to lead to biased results. We provide an approach to extend misclassification modeling to the important area of hierarchical generalized linear models.
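
    The attenuation the authors describe is easy to reproduce in a toy, single-level version of the problem. The sketch below uses illustrative parameters (a true log rate ratio of 0.5 and a 20% misclassification rate); with one binary covariate, the Poisson MLE reduces to a ratio of group means:

```python
import math, random

def rpois(lam):
    """Poisson sampler (Knuth's multiplication method)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

random.seed(1)
n, beta = 20000, 0.5                      # true log rate ratio
x = [random.random() < 0.5 for _ in range(n)]
y = [rpois(math.exp(beta) if xi else 1.0) for xi in x]
# 20% of covariate values recorded incorrectly
x_obs = [(not xi) if random.random() < 0.2 else xi for xi in x]

def log_rate_ratio(xs, ys):
    """Poisson MLE with a single binary covariate: ratio of group means."""
    m1 = sum(yi for xi, yi in zip(xs, ys) if xi) / sum(xs)
    m0 = sum(yi for xi, yi in zip(xs, ys) if not xi) / (len(xs) - sum(xs))
    return math.log(m1 / m0)

b_true = log_rate_ratio(x, y)       # close to 0.5
b_naive = log_rate_ratio(x_obs, y)  # attenuated toward 0
```

    A fully Bayesian treatment as in the paper additionally models the misclassification process itself; the toy example only demonstrates why ignoring it biases the estimate.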

  4. Evaluating the Sensitivity of Agricultural Model Performance to Different Climate Inputs: Supplemental Material

    NASA Technical Reports Server (NTRS)

    Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.

    2015-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis that is bias corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.

  5. Evaluating the sensitivity of agricultural model performance to different climate inputs

    PubMed Central

    Glotter, Michael J.; Moyer, Elisabeth J.; Ruane, Alex C.; Elliott, Joshua W.

    2017-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled to observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely-used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources – reanalysis, reanalysis bias-corrected with observed climate, and a control dataset – and compared to observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by un-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. However, some issues persist for all choices of climate inputs: crop yields appear oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves. PMID:29097985

  6. Bias correction of surface downwelling longwave and shortwave radiation for the EWEMBI dataset

    NASA Astrophysics Data System (ADS)

    Lange, Stefan

    2018-05-01

    Many meteorological forcing datasets include bias-corrected surface downwelling longwave and shortwave radiation (rlds and rsds). Methods used for such bias corrections range from multi-year monthly mean value scaling to quantile mapping at the daily timescale. An additional downscaling is necessary if the data to be corrected have a higher spatial resolution than the observational data used to determine the biases. This was the case when EartH2Observe (E2OBS; Calton et al., 2016) rlds and rsds were bias-corrected using more coarsely resolved Surface Radiation Budget (SRB; Stackhouse Jr. et al., 2011) data for the production of the meteorological forcing dataset EWEMBI (Lange, 2016). This article systematically compares various parametric quantile mapping methods designed specifically for this purpose, including those used for the production of EWEMBI rlds and rsds. The methods vary in the timescale at which they operate, in their way of accounting for physical upper radiation limits, and in their approach to bridging the spatial resolution gap between E2OBS and SRB. It is shown how temporal and spatial variability deflation related to bilinear interpolation and other deterministic downscaling approaches can be overcome by downscaling the target statistics of quantile mapping from the SRB to the E2OBS grid such that the sub-SRB-grid-scale spatial variability present in the original E2OBS data is retained. Cross validations at the daily and monthly timescales reveal that it is worthwhile to take empirical estimates of physical upper limits into account when adjusting either radiation component and that, overall, bias correction at the daily timescale is more effective than bias correction at the monthly timescale if sampling errors are taken into account.
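
    As a concrete illustration of the basic building block, the sketch below implements plain empirical (non-parametric) quantile mapping on synthetic data; the parametric variants compared in the article, and their handling of physical upper limits and downscaling, are refinements of this idea. The Gaussian toy distributions are assumptions for illustration:

```python
import bisect, random

def quantile_map(x, model_ref, obs_ref):
    """Empirical quantile mapping: find x's quantile in the model
    reference climatology, return the same quantile of the observations."""
    ms, ob = sorted(model_ref), sorted(obs_ref)
    # empirical CDF position of x within the model reference
    q = bisect.bisect_right(ms, x) / len(ms)
    # inverse empirical CDF of the observations at quantile q
    idx = min(int(q * len(ob)), len(ob) - 1)
    return ob[idx]

random.seed(2)
obs   = [random.gauss(10, 2) for _ in range(5000)]   # "observed"
model = [random.gauss(12, 4) for _ in range(5000)]   # biased "model"
corrected = [quantile_map(v, model, obs) for v in model]
```

    After the mapping, the corrected series reproduces the observed mean and spread while preserving the model's rank ordering.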

  7. Exploring the future change space for fire weather in southeast Australia

    NASA Astrophysics Data System (ADS)

    Clarke, Hamish; Evans, Jason P.

    2018-05-01

    High-resolution projections of climate change impacts on fire weather conditions in southeast Australia out to 2080 are presented. Fire weather is represented by the McArthur Forest Fire Danger Index (FFDI), calculated from an objectively designed regional climate model ensemble. Changes in annual cumulative FFDI vary widely, from -337 (-21%) to +657 (+24%) in coastal areas and -237 (-12%) to +1143 (+26%) in inland areas. A similar spread is projected in extreme FFDI values. In coastal regions, the number of prescribed burning days is projected to change from -11 to +10 in autumn and -10 to +3 in spring. Across the ensemble, the most significant increases in fire weather and decreases in prescribed burn windows are projected to take place in spring. Partial bias correction of FFDI leads to similar projections but with a greater spread, particularly in extreme values. The partially bias-corrected FFDI performs similarly to the uncorrected FFDI against observed annual cumulative FFDI (ensemble root mean square error spans 540 to 1583 for uncorrected output and 695 to 1398 for corrected) but is generally worse for FFDI values above 50. This emphasizes the need to consider inter-variable relationships when bias-correcting for complex phenomena such as fire weather. There is considerable uncertainty in the future trajectory of fire weather in southeast Australia, including the potential for fewer prescribed burning days and substantially greater fire danger in spring. Selecting climate models on the basis of multiple criteria can lead to more informative projections and allow an explicit exploration of uncertainty.
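
    For readers unfamiliar with FFDI, the commonly used regression approximation to the McArthur Mk5 meter (Noble et al., 1980) can be computed directly. The sketch below assumes that formulation; the example weather values are invented:

```python
import math

def ffdi(temp_c, rel_hum_pct, wind_kmh, drought_factor):
    """McArthur Mk5 Forest Fire Danger Index, regression form of
    Noble et al. (1980): exponential in temperature, relative
    humidity, wind speed, and drought factor."""
    return 2.0 * math.exp(-0.450 + 0.987 * math.log(drought_factor)
                          - 0.0345 * rel_hum_pct + 0.0338 * temp_c
                          + 0.0234 * wind_kmh)

severe = ffdi(40, 10, 40, 10)  # hot, dry, windy, drought-affected day
mild = ffdi(20, 60, 10, 5)     # cool, humid, calm day
```

    Because the index is exponential in its inputs, a bias in any one driver is amplified in FFDI, which is one reason the abstract stresses inter-variable relationships when bias-correcting.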

  8. Biases in simulation of the rice phenology models when applied in warmer climates

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, T.; Yang, X.; Simelton, E.

    2015-12-01

    Current model inter-comparison studies highlight the differences in projections between crop models when they are applied to warmer climates, but they do not show how the accuracy of the models would change in these projections, because adequate observations under widely diverse growing season temperature (GST) are often unavailable. Here, we investigate the potential changes in the accuracy of rice phenology models when these models are applied to a significantly warmer climate. We collected phenology data from 775 trials with 19 cultivars in 5 Asian countries (China, India, Philippines, Bangladesh and Thailand). Each cultivar encompasses phenology observations under diverse GST regimes. For a given rice cultivar in different trials, the GST difference reaches 2.2 to 8.2°C, which allows us to calibrate the models under lower GST and validate them under higher GST (i.e., warmer climates). Four common phenology models, representing the major algorithms for simulating rice phenology, and three model calibration experiments were conducted. The results suggest that the bilinear and beta models produced gradually increasing phenology bias (Figure) and double the yield bias per percent increase in phenology bias, whereas the growing-degree-day (GDD) and exponential models maintained a comparatively constant bias when applied in warmer climates (Figure). Moreover, the phenology bias of the bilinear and beta models did not decrease with increasing GST even when all data were used to calibrate the models. This suggests that variations in phenology bias are primarily attributable to intrinsic properties of the respective phenology model rather than to the calibration dataset. We therefore conclude that the GDD and exponential models are more likely to predict rice phenology, and thus production, correctly under warmer climates, supporting effective agricultural adaptation to and mitigation of climate change.
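
    The growing-degree-day model that proved most robust here is also the simplest of the four: thermal time is accumulated daily above a base temperature until a cultivar-specific requirement is met. A minimal sketch, where the base temperature and GDD requirement are illustrative values rather than calibrated rice parameters:

```python
def daily_gdd(tmax, tmin, tbase=8.0):
    """Growing degree days for one day: mean temperature above a base."""
    return max(0.0, (tmax + tmin) / 2.0 - tbase)

def days_to_stage(daily_temps, gdd_requirement, tbase=8.0):
    """A phenological stage is reached once accumulated thermal time
    passes the cultivar's requirement."""
    total = 0.0
    for day, (tmax, tmin) in enumerate(daily_temps, start=1):
        total += daily_gdd(tmax, tmin, tbase)
        if total >= gdd_requirement:
            return day
    return None  # requirement not met within the record

# a uniformly warmer season meets the same requirement sooner
cool_season = [(24, 14)] * 120  # mean 19 C -> 11 GDD/day
warm_season = [(28, 18)] * 120  # mean 23 C -> 15 GDD/day
```

    Because the GDD response is linear in temperature, extrapolating it to warmer seasons changes the predicted duration smoothly, which is one plausible reading of its comparatively constant bias reported above.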

  9. Wave-optics uncertainty propagation and regression-based bias model in GNSS radio occultation bending angle retrievals

    NASA Astrophysics Data System (ADS)

    Gorbunov, Michael E.; Kirchengast, Gottfried

    2018-01-01

    A new reference occultation processing system (rOPS) will include a Global Navigation Satellite System (GNSS) radio occultation (RO) retrieval chain with integrated uncertainty propagation. In this paper, we focus on wave-optics bending angle (BA) retrieval in the lower troposphere and introduce (1) an empirically estimated boundary layer bias (BLB) model, then employed to reduce the systematic uncertainty of excess phases and bending angles in about the lowest 2 km of the troposphere, and (2) the estimation of (residual) systematic uncertainties and their propagation together with random uncertainties from excess phase to bending angle profiles. Our BLB model describes the estimated bias of the excess phase transferred from the estimated bias of the bending angle, for which the model is built, informed by analyzing refractivity fluctuation statistics shown to induce such biases. The model is derived from regression analysis using a large ensemble of Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) RO observations and concurrent European Centre for Medium-Range Weather Forecasts (ECMWF) analysis fields. It is formulated in terms of predictors and adaptive functions (powers and cross products of predictors), where we use six main predictors derived from observations: impact altitude, latitude, bending angle and its standard deviation, canonical transform (CT) amplitude, and its fluctuation index. Based on an ensemble of test days, independent of the days of data used for the regression analysis to establish the BLB model, we find the model very effective for bias reduction and capable of reducing bending angle and corresponding refractivity biases by about a factor of 5. The estimated residual systematic uncertainty, after the BLB profile subtraction, is lower bounded by the uncertainty from the (indirect) use of ECMWF analysis fields but is significantly lower than the systematic uncertainty without BLB correction.
The systematic and random uncertainties are propagated from excess phase to bending angle profiles, using a perturbation approach and the wave-optical method recently introduced by Gorbunov and Kirchengast (2015), starting with estimated excess phase uncertainties. The results are encouraging and this uncertainty propagation approach combined with BLB correction enables a robust reduction and quantification of the uncertainties of excess phases and bending angles in the lower troposphere.

  10. Four dimensional data assimilation (FDDA) impacts on WRF performance in simulating inversion layer structure and distributions of CMAQ-simulated winter ozone concentrations in Uintah Basin

    NASA Astrophysics Data System (ADS)

    Tran, Trang; Tran, Huy; Mansfield, Marc; Lyman, Seth; Crosman, Erik

    2018-03-01

    Four-dimensional data assimilation (FDDA) was applied in WRF-CMAQ model sensitivity tests to study the impact of observational and analysis nudging on model performance in simulating inversion layers and O3 concentration distributions within the Uintah Basin, Utah, U.S.A. in winter 2013. Observational nudging substantially improved WRF model performance in simulating surface wind fields, correcting a 10 °C warm surface temperature bias, correcting overestimation of the planetary boundary layer height (PBLH) and correcting underestimation of inversion strengths produced by regular WRF model physics without nudging. However, the combined effects of poor performance of WRF meteorological model physical parameterization schemes in simulating low clouds, and warm and moist biases in the temperature and moisture initialization and subsequent simulation fields, likely amplified the overestimation of warm clouds during inversion days when observational nudging was applied, impacting the resulting O3 photochemical formation in the chemistry model. To reduce the impact of a moist bias in the simulations on warm cloud formation, nudging with the analysis water mixing ratio above the planetary boundary layer (PBL) was applied. However, due to poor analysis vertical temperature profiles, applying analysis nudging also increased the errors in the modeled inversion layer vertical structure compared to observational nudging. Combining both observational and analysis nudging methods resulted in unrealistically extreme stratified stability that trapped pollutants at the lowest elevations at the center of the Uintah Basin and yielded the worst WRF performance in simulating inversion layer structure among the four sensitivity tests. 
The results of this study illustrate the importance of carefully considering the representativeness and quality of the observational and model analysis data sets when applying nudging techniques within stable PBLs, and the need to evaluate model results on a basin-wide scale.

  11. Bias Correction Methods Explain Much of the Variation Seen in Breast Cancer Risks of BRCA1/2 Mutation Carriers.

    PubMed

    Vos, Janet R; Hsu, Li; Brohet, Richard M; Mourits, Marian J E; de Vries, Jakob; Malone, Kathleen E; Oosterwijk, Jan C; de Bock, Geertruida H

    2015-08-10

    Recommendations for treating patients who carry a BRCA1/2 mutation are mainly based on cumulative lifetime risks (CLTRs) of breast cancer determined from retrospective cohorts. These risks vary widely (27% to 88%), and it is important to understand why. We analyzed the effects of methods of risk estimation and bias correction and of population factors on CLTRs in this retrospective clinical cohort of BRCA1/2 carriers. The following methods to estimate the breast cancer risk of BRCA1/2 carriers were identified from the literature: Kaplan-Meier, frailty, and modified segregation analyses with bias correction consisting of including or excluding index patients combined with including or excluding first-degree relatives (FDRs) or different conditional likelihoods. These were applied to clinical data of BRCA1/2 families derived from our family cancer clinic, for which a simulation was also performed to evaluate the methods. CLTRs and 95% CIs were estimated and compared with the reference CLTRs. CLTRs ranged from 35% to 83% for BRCA1 and 41% to 86% for BRCA2 carriers at age 70 years (width of 95% CIs: 10% to 35% and 13% to 46%, respectively). Relative bias varied from -38% to +16%. Bias correction with inclusion of index patients and untested FDRs gave the smallest bias: +2% (SD, 2%) in BRCA1 and +0.9% (SD, 3.6%) in BRCA2. Much of the variation in breast cancer CLTRs in retrospective clinical BRCA1/2 cohorts is due to the bias-correction method, whereas a smaller part is due to population differences. Kaplan-Meier analyses with bias correction that includes index patients and a proportion of untested FDRs provide suitable CLTRs for carriers counseled in the clinic. © 2015 by American Society of Clinical Oncology.
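
    The Kaplan-Meier estimator at the core of the recommended approach is straightforward to compute. The sketch below is the textbook product-limit estimator on a tiny invented follow-up dataset; the bias-correction step of including index patients and untested FDRs is a cohort-construction choice and is not shown:

```python
def kaplan_meier(times, events):
    """Product-limit estimator. times: follow-up times; events: 1 = event
    observed, 0 = censored. Returns the curve as (event time, S(t)) pairs."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    curve, s, i = [], 1.0, 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        c = sum(1 for tt, e in data if tt == t)             # all leaving at t
        if d > 0:
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= c
        i += c
    return curve

# four carriers: events at t = 1, 2, 3 and one censored at t = 2
curve = kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1])
```

    The cumulative lifetime risk quoted in the abstract is 1 - S(t) evaluated at the age of interest.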

  12. Is First-Order Vector Autoregressive Model Optimal for fMRI Data?

    PubMed

    Ting, Chee-Ming; Seghouane, Abd-Krim; Khalid, Muhammad Usman; Salleh, Sh-Hussain

    2015-09-01

    We consider the problem of selecting the optimal orders of vector autoregressive (VAR) models for fMRI data. Many previous studies used model order of one and ignored that it may vary considerably across data sets depending on different data dimensions, subjects, tasks, and experimental designs. In addition, the classical information criteria (IC) used (e.g., the Akaike IC (AIC)) are biased and inappropriate for the high-dimensional fMRI data typically with a small sample size. We examine the mixed results on the optimal VAR orders for fMRI, especially the validity of the order-one hypothesis, by a comprehensive evaluation using different model selection criteria over three typical data types--a resting state, an event-related design, and a block design data set--with varying time series dimensions obtained from distinct functional brain networks. We use a more balanced criterion, Kullback's IC (KIC) based on Kullback's symmetric divergence combining two directed divergences. We also consider the bias-corrected versions (AICc and KICc) to improve VAR model selection in small samples. Simulation results show better small-sample selection performance of the proposed criteria over the classical ones. Both bias-corrected ICs provide more accurate and consistent model order choices than their biased counterparts, which suffer from overfitting, with KICc performing the best. Results on real data show that orders greater than one were selected by all criteria across all data sets for the small to moderate dimensions, particularly from small, specific networks such as the resting-state default mode network and the task-related motor networks, whereas low orders close to one but not necessarily one were chosen for the large dimensions of full-brain networks.
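
    The small-sample correction the authors favor is a closed-form penalty on top of AIC. The sketch below fits autoregressive models by least squares and compares AIC with AICc; it is univariate rather than vector autoregression, on an invented AR(1) series, to keep the example short:

```python
import math, random

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ar_ic(y, p):
    """Fit AR(p) by least squares; return (AIC, AICc)."""
    rows = [[y[t - j] for j in range(1, p + 1)] for t in range(p, len(y))]
    targ = [y[t] for t in range(p, len(y))]
    n = len(targ)
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * t for r, t in zip(rows, targ)) for i in range(p)]
    b = solve(XtX, Xty)                  # normal equations X'X b = X'y
    rss = sum((t - sum(bi * ri for bi, ri in zip(b, r))) ** 2
              for r, t in zip(rows, targ))
    k = p + 1                            # AR coefficients + noise variance
    aic = n * math.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
    return aic, aicc

random.seed(3)
y = [0.0, 0.0]
for _ in range(60):                      # short AR(1) series, phi = 0.6
    y.append(0.6 * y[-1] + random.gauss(0, 1))
```

    The AICc penalty `2k(k+1)/(n-k-1)` vanishes as n grows but blows up as k approaches n, which is what discourages the overfitting seen with plain AIC on short series.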

  13. A brain MRI bias field correction method created in the Gaussian multi-scale space

    NASA Astrophysics Data System (ADS)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to subsequent image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias-field-corrected MR image is retrieved after a gamma correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
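
    The family of methods this belongs to can be illustrated with a single-scale toy: heavy Gaussian smoothing isolates the slowly varying component as a bias-field estimate, and dividing it out flattens the inhomogeneity. This 1-D sketch is a simplification for illustration, not the paper's multi-scale detail-recombination algorithm:

```python
import math

def gaussian_smooth(signal, sigma):
    """Convolve with a truncated, renormalized Gaussian kernel; wide
    sigma keeps only the slowly varying component (the bias estimate)."""
    radius = int(3 * sigma)
    kern = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    out = []
    for i in range(len(signal)):
        acc = w = 0.0
        for j, g in enumerate(kern):
            idx = i + j - radius
            if 0 <= idx < len(signal):
                acc += g * signal[idx]
                w += g
        out.append(acc / w)
    return out

# two-tissue 1-D "image" corrupted by a smooth multiplicative bias
truth = ([1.0] * 20 + [2.0] * 20) * 4              # alternating tissues
bias = [0.6 + 0.8 * i / len(truth) for i in range(len(truth))]
raw = [t * b for t, b in zip(truth, bias)]
est = gaussian_smooth(raw, sigma=25)               # kernel >> tissue blocks
corrected = [r / e for r, e in zip(raw, est)]
```

    The paper's method refines this idea by recombining detail layers across several scales before the gamma correction.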

  14. The Role of Response Bias in Perceptual Learning

    PubMed Central

    2015-01-01

    Sensory judgments improve with practice. Such perceptual learning is often thought to reflect an increase in perceptual sensitivity. However, it may also represent a decrease in response bias, with unpracticed observers acting in part on a priori hunches rather than sensory evidence. To examine whether this is the case, 55 observers practiced making a basic auditory judgment (yes/no amplitude-modulation detection or forced-choice frequency/amplitude discrimination) over multiple days. With all tasks, bias was present initially but decreased with practice. Notably, this was the case even on supposedly “bias-free” 2-alternative forced-choice tasks. In those tasks, observers did not favor the same response throughout (stationary bias), but did favor whichever response had been correct on previous trials (nonstationary bias). Means of correcting for bias are described. When applied, these showed that at least 13% of perceptual learning on a forced-choice task was due to reduction in bias. In other situations, changes in bias were shown to obscure the true extent of learning, with changes in estimated sensitivity increasing once bias was corrected for. The possible causes of bias and the implications for our understanding of perceptual learning are discussed. PMID:25867609
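
    The standard way to separate sensitivity from response bias in such yes/no data is signal detection theory: d' measures sensitivity and the criterion c measures bias. A minimal sketch with invented hit and false-alarm rates:

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Signal detection theory: sensitivity d' and criterion c from
    z-transformed hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

unbiased = dprime_criterion(0.84, 0.16)      # d' near 2, c near 0
conservative = dprime_criterion(0.69, 0.07)  # same d', biased toward "no"
```

    The second observer has essentially the same sensitivity but a conservative criterion (c > 0, favoring "no"); this is the kind of bias that, per the abstract, shrinks with practice and can masquerade as a sensitivity change if uncorrected.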

  15. Over-fitting Time Series Models of Air Pollution Health Effects: Smoothing Tends to Bias Non-Null Associations Towards the Null.

    EPA Science Inventory

    Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...

  16. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta‐analysis and group level studies

    PubMed Central

    Bakbergenuly, Ilyas; Morgenthaler, Stephan

    2016-01-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose a bias correction for the arcsine transformation. Our simulations demonstrate that this bias correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. PMID:27192062
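
    The core mechanism is Jensen's inequality: a nonlinear transform of a heterogeneous probability does not commute with averaging. The toy below uses five fixed group probabilities (an invented stand-in for nonzero intracluster correlation) to show the logit transform incurring a much larger bias than the arcsine transform:

```python
import math

logit = lambda p: math.log(p / (1 - p))
arcsine = lambda p: math.asin(math.sqrt(p))

# five groups with heterogeneous event probabilities
probs = [0.1, 0.2, 0.3, 0.4, 0.5]
p_bar = sum(probs) / len(probs)

# bias of "average the transforms" relative to "transform the average"
bias_logit = sum(logit(p) for p in probs) / len(probs) - logit(p_bar)
bias_arcsine = sum(arcsine(p) for p in probs) / len(probs) - arcsine(p_bar)
```

    Here `bias_logit` is about -0.12 on the log-odds scale while `bias_arcsine` is an order of magnitude smaller, consistent with the arcsine transformation being the more tractable one to bias-correct.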

  17. Quasi-steady acoustic response of wall perforations subject to a grazing-bias flow combination

    NASA Astrophysics Data System (ADS)

    Tonon, D.; Moers, E. M. T.; Hirschberg, A.

    2013-04-01

    Well-known examples of acoustic dampers are aero-engine liners, IC-engine exhaust mufflers, and liners in combustion chambers. These devices comprise wall perforations, which are responsible for their sound-absorbing features. Understanding the effect of flow on the acoustic properties of a perforation is essential for the design of acoustic dampers. In the present work, the effect of a grazing-bias flow combination on the impedance of slit-shaped wall perforations is experimentally investigated by means of a multi-microphone impedance tube. Measurements are carried out for perforation geometries relevant to technical applications. The focus of the experiments is on the low Strouhal number (quasi-steady) behavior. Analytical models of the steady flow and of the low-frequency aeroacoustic behavior of a two-dimensional wall perforation are proposed for the case of a bias flow directed from the grazing flow towards the opposite side of the perforated wall. These theoretical results compare favorably with the experiments when a semi-empirical correction is used to obtain the correct limit for pure bias flow.

  18. Supernovae as probes of cosmic parameters: estimating the bias from under-dense lines of sight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busti, V.C.; Clarkson, C.; Holanda, R.F.L., E-mail: vinicius.busti@uct.ac.za, E-mail: holanda@uepb.edu.br, E-mail: chris.clarkson@uct.ac.za

    2013-11-01

    Correctly interpreting observations of sources such as type Ia supernovae (SNe Ia) requires knowledge of the power spectrum of matter on AU scales, which is very hard to model accurately. Because under-dense regions account for much of the volume of the universe, light from a typical source probes a mean density significantly below the cosmic mean. The relative sparsity of sources implies that there could be a significant bias when inferring distances of SNe Ia, and consequently a bias in cosmological parameter estimation. While the weak lensing approximation should in principle give the correct prediction for this, linear perturbation theory predicts an effectively infinite variance in the convergence for ultra-narrow beams. We attempt to quantify the effect typically under-dense lines of sight might have on parameter estimation by considering three alternative methods for estimating distances, in addition to the usual weak lensing approximation. We find in each case that this not only increases the errors in the inferred density parameters, but also introduces a bias in the posterior value.

  19. Detection probability in aerial surveys of feral horses

    USGS Publications Warehouse

    Ransom, Jason I.

    2011-01-01

    Observation bias pervades data collected during aerial surveys of large animals, and although some sources can be mitigated with informed planning, others must be addressed using valid sampling techniques that carefully model detection probability. Nonetheless, aerial surveys are frequently employed to count large mammals without applying such methods to account for heterogeneity in visibility of animal groups on the landscape. This often leaves managers and interest groups at odds over decisions that are not adequately informed. I analyzed detection of feral horse (Equus caballus) groups by dual independent observers from 24 fixed-wing and 16 helicopter flights using mixed-effect logistic regression models to investigate potential sources of observation bias. I accounted for observer skill, population location, and aircraft type in the model structure and analyzed the effects of group size, sun effect (position related to observer), vegetation type, topography, cloud cover, percent snow cover, and observer fatigue on detection of horse groups. The most important model-averaged effects for both fixed-wing and helicopter surveys included group size (fixed-wing: odds ratio = 0.891, 95% CI = 0.850–0.935; helicopter: odds ratio = 0.640, 95% CI = 0.587–0.698) and sun effect (fixed-wing: odds ratio = 0.632, 95% CI = 0.350–1.141; helicopter: odds ratio = 0.194, 95% CI = 0.080–0.470). Observer fatigue was also an important effect in the best model for helicopter surveys, with detection probability declining after 3 hr of survey time (odds ratio = 0.278, 95% CI = 0.144–0.537). Biases arising from sun effect and observer fatigue can be mitigated by pre-flight survey design. 
Other sources of bias, such as those arising from group size, topography, and vegetation can only be addressed by employing valid sampling techniques such as double sampling, mark–resight (batch-marked animals), mark–recapture (uniquely marked and identifiable animals), sightability bias correction models, and line transect distance sampling; however, some of these techniques may still only partially correct for negative observation biases.
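
    A sightability bias correction model of the kind listed above turns fitted detection probabilities into a corrected abundance estimate. The sketch below is a minimal Horvitz-Thompson style illustration in Python; the logistic coefficients, covariate names and group data are invented for illustration and are not the estimates reported in this study.

```python
import math

def detection_probability(intercept, coefs, covariates):
    """Inverse-logit detection probability from a fitted logistic model.
    `coefs` and `covariates` are dicts keyed by effect name; the values
    used below are purely illustrative, not this study's estimates."""
    eta = intercept + sum(coefs[k] * covariates[k] for k in coefs)
    return 1.0 / (1.0 + math.exp(-eta))

def sightability_corrected_total(groups, intercept, coefs):
    """Horvitz-Thompson style estimate: each observed group of size n_i
    with detection probability p_i contributes n_i / p_i animals."""
    total = 0.0
    for group in groups:
        p = detection_probability(intercept, coefs, group["covariates"])
        total += group["size"] / p
    return total

# Hypothetical coefficients on the log-odds scale (negative signs mirror
# the odds ratios below 1 reported in the abstract):
coefs = {"group_size": -0.1, "sun_effect": -0.8}
groups = [
    {"size": 4, "covariates": {"group_size": 4, "sun_effect": 1}},
    {"size": 12, "covariates": {"group_size": 12, "sun_effect": 0}},
]
est = sightability_corrected_total(groups, intercept=2.0, coefs=coefs)
# The corrected total always exceeds the 16 animals actually seen.
```

    Because every detection probability is below 1, the corrected total is strictly larger than the raw count, which is exactly the direction of correction these models provide.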

  20. System Dynamic Analysis of a Wind Tunnel Model with Applications to Improve Aerodynamic Data Quality

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph David

    1997-01-01

    The research investigates the effect of wind tunnel model system dynamics on measured aerodynamic data. During wind tunnel tests designed to obtain lift and drag data, the required aerodynamic measurements are the steady-state balance forces and moments, pressures, and model attitude. However, the wind tunnel model system can be subjected to unsteady aerodynamic and inertial loads which result in oscillatory translations and angular rotations. The steady-state force balance and inertial model attitude measurements are obtained by filtering and averaging data taken during conditions of high model vibrations. The main goals of this research are to characterize the effects of model system dynamics on the measured steady-state aerodynamic data and develop a correction technique to compensate for dynamically induced errors. Equations of motion are formulated for the dynamic response of the model system subjected to arbitrary aerodynamic and inertial inputs. The resulting modal model is examined to study the effects of the model system dynamic response on the aerodynamic data. In particular, the equations of motion are used to describe the effect of dynamics on the inertial model attitude, or angle of attack, measurement system that is used routinely at the NASA Langley Research Center and other wind tunnel facilities throughout the world. This activity was prompted by the inertial model attitude sensor response observed during high levels of model vibration while testing in the National Transonic Facility at the NASA Langley Research Center. The inertial attitude sensor cannot distinguish between the gravitational acceleration and centrifugal accelerations associated with wind tunnel model system vibration, which results in a model attitude measurement bias error. Bias errors over an order of magnitude greater than the required device accuracy were found in the inertial model attitude measurements during dynamic testing of two model systems. 
Based on a theoretical modal approach, a method using measured vibration amplitudes and measured or calculated modal characteristics of the model system is developed to correct for dynamic bias errors in the model attitude measurements. The correction method is verified through dynamic response tests on two model systems and actual wind tunnel test data.

  1. State-level estimates of childhood obesity prevalence in the United States corrected for report bias.

    PubMed

    Long, M W; Ward, Z J; Resch, S C; Cradock, A L; Wang, Y C; Giles, C M; Gortmaker, S L

    2016-10-01

    State-specific obesity prevalence data are critical to public health efforts to address the childhood obesity epidemic. However, few states administer objectively measured body mass index (BMI) surveillance programs. This study reports state-specific childhood obesity prevalence by age and sex correcting for parent-reported child height and weight bias. As part of the Childhood Obesity Intervention Cost Effectiveness Study (CHOICES), we developed childhood obesity prevalence estimates for states for the period 2005-2010 using data from the 2010 US Census and American Community Survey (ACS), 2003-2004 and 2007-2008 National Survey of Children's Health (NSCH) (n=133 213), and 2005-2010 National Health and Nutrition Examination Surveys (NHANES) (n=9377; ages 2-17). Measured height and weight data from NHANES were used to correct parent-report bias in NSCH using a non-parametric statistical matching algorithm. Model estimates were validated against surveillance data from five states (AR, FL, MA, PA and TN) that conduct censuses of children across a range of grades. Parent-reported height and weight resulted in the largest overestimation of childhood obesity in males ages 2-5 years (NSCH: 42.36% vs NHANES: 11.44%). The CHOICES model estimates for this group (12.81%) and for all age and sex categories were not statistically different from NHANES. Our modeled obesity prevalence aligned closely with measured data from five validation states, with a 0.64 percentage point mean difference (range: 0.23-1.39) and a high correlation coefficient (r=0.96, P=0.009). Estimated state-specific childhood obesity prevalence ranged from 11.0 to 20.4%. Uncorrected estimates of childhood obesity prevalence from NSCH vary widely from measured national data, from a 278% overestimate among males aged 2-5 years to a 44% underestimate among females aged 14-17 years. 
This study demonstrates the validity of the CHOICES matching methods to correct the bias of parent-reported BMI data and highlights the need for public release of more recent data from the 2011 to 2012 NSCH.

  2. Calibration of weak-lensing shear in the Kilo-Degree Survey

    NASA Astrophysics Data System (ADS)

    Fenech Conti, I.; Herbonnet, R.; Hoekstra, H.; Merten, J.; Miller, L.; Viola, M.

    2017-05-01

    We describe and test the pipeline used to measure the weak-lensing shear signal from the Kilo-Degree Survey (KiDS). It includes a novel method of 'self-calibration' that partially corrects for the effect of noise bias. We also discuss the 'weight bias' that may arise in optimally weighted measurements, and present a scheme to mitigate that bias. To study the residual biases arising from both galaxy selection and shear measurement, and to derive an empirical correction to reduce the shear biases to ≲1 per cent, we create a suite of simulated images whose properties are close to those of the KiDS survey observations. We find that the use of 'self-calibration' reduces the additive and multiplicative shear biases significantly, although further correction via a calibration scheme is required, which also corrects for a dependence of the bias on galaxy properties. We find that the calibration relation itself is biased by the use of noisy, measured galaxy properties, which may limit the final accuracy that can be achieved. We assess the accuracy of the calibration in the tomographic bins used for the KiDS cosmic shear analysis, testing in particular the effect of possible variations in the uncertain distributions of galaxy size, magnitude and ellipticity, and conclude that the calibration procedure is accurate at the level of multiplicative bias ≲1 per cent required for the KiDS cosmic shear analysis.
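
    The multiplicative and additive shear biases discussed above are conventionally written as g_obs = (1 + m) * g_true + c, with m and c estimated from image simulations. A minimal sketch, assuming synthetic simulation pairs rather than the KiDS pipeline outputs:

```python
import random

# Linear shear-bias model: g_obs = (1 + m) * g_true + c.
# Estimate the multiplicative bias m and additive bias c by least squares
# on (true, measured) shear pairs; all numbers here are synthetic.
random.seed(0)
m_true, c_true = -0.02, 0.001
g_true = [-0.05 + 0.001 * i for i in range(101)]          # input shears
g_obs = [(1 + m_true) * g + c_true + random.gauss(0.0, 1e-4) for g in g_true]

n = len(g_true)
mean_x = sum(g_true) / n
mean_y = sum(g_obs) / n
sxx = sum((x - mean_x) ** 2 for x in g_true)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(g_true, g_obs))
slope = sxy / sxx                    # estimates 1 + m
m_hat = slope - 1.0
c_hat = mean_y - slope * mean_x

# Calibrate a measured shear back to an unbiased estimate:
g_corrected = (0.03 - c_hat) / (1.0 + m_hat)
```

    In practice the fit is done per bin of galaxy properties, which is what allows the calibration to absorb the property-dependent biases the abstract mentions.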

  3. Recent advances in the salinity retrieval algorithms for Aquarius and Soil Moisture Active Passive (SMAP)

    NASA Astrophysics Data System (ADS)

    Meissner, Thomas; Wentz, Frank; Lee, Tong

    2017-04-01

    Our presentation discusses the latest improvements in the salinity retrievals for both Aquarius and Soil Moisture Active-Passive (SMAP) since the last releases. Aquarius V4.0 was released in June 2015; the final V5.0 release is planned for late 2017. SMAP V2.0 was released in September 2016. We will present validation results for the Aquarius V5.0 pre-release and SMAP V2.0 salinity, comparing both with near-surface salinity measurements from Argo floats. We show that salty biases at higher northern latitudes in Aquarius V4.0 can be explained by inaccuracy in the model used to correct for absorption by atmospheric oxygen. These biases will be mitigated in V5.0 by fine-tuning the parameters of the oxygen absorption model. The full 360-degree look capability of SMAP makes it possible to take observations from the forward and backward looking directions at the same instance of time. This two-look capability aids the salinity retrievals. One of the largest spurious contaminations in the salinity retrievals is caused by galactic reflection from the ocean surface. Because in most instances the reflected galaxy appears in only the forward or the backward look, it is possible to determine its contribution by taking the difference of the measured SMAP brightness temperatures between the two looks. Our results suggest that the surface roughness used in the galactic correction needs to be increased and that the estimated strengths of some of the galactic sources need to be slightly adjusted. The improved galaxy correction has been implemented in the SMAP V2.0 retrieval and will be included in Aquarius V5.0 as well. It helps mitigate residual zonal and temporal biases that were present in both products. Another major cause of the observed zonal biases in SMAP is the emissive SMAP mesh antenna. Correcting for it requires accurate knowledge of the emissivity of the antenna and its physical temperature. 
We discuss the improvements in the correction for the emissive SMAP antenna in SMAP V2.0 over V1.0.

  4. NAQFC Reports

    Science.gov Websites

    Recent NCEP NAM-CMAQ AQF reports and EPA CMAQ bibliography, 2016-2017, including: Huang, J., et al., 2017; Stajner, I., et al., 2016, EGU: NAQFC Overview; Huang, J., et al., 2016, AMS: Bias Correction; Huang, J., et al., 2015, CMAS: Testing of two bias correction approaches for reducing biases of

  5. Empirical Validation of a Procedure to Correct Position and Stimulus Biases in Matching-to-Sample

    ERIC Educational Resources Information Center

    Kangas, Brian D.; Branch, Marc N.

    2008-01-01

    The development of position and stimulus biases often occurs during initial training on matching-to-sample tasks. Furthermore, without intervention, these biases can be maintained via intermittent reinforcement provided by matching-to-sample contingencies. The present study evaluated the effectiveness of a correction procedure designed to…

  6. bcROCsurface: an R package for correcting verification bias in estimation of the ROC surface and its volume for continuous diagnostic tests.

    PubMed

    To Duc, Khanh

    2017-11-18

    Receiver operating characteristic (ROC) surface analysis is usually employed to assess the accuracy of a medical diagnostic test when there are three ordered disease statuses (e.g. non-diseased, intermediate, diseased). In practice, verification bias can occur due to missingness of the true disease status and can lead to distorted conclusions about diagnostic accuracy. In such situations, bias-corrected inference tools are required. This paper introduces an R package, named bcROCsurface, which provides utility functions for verification bias-corrected ROC surface analysis. A Shiny web application for correcting verification bias in estimation of the ROC surface has also been developed. bcROCsurface may become an important tool for the statistical evaluation of three-class diagnostic markers in the presence of verification bias. The R package, readme and example data are available on CRAN. The web interface enables users less familiar with R to evaluate the accuracy of diagnostic tests, and can be found at http://khanhtoduc.shinyapps.io/bcROCsurface_shiny/ .

  7. Brief communication: Drought likelihood for East Africa

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Huntingford, Chris

    2018-02-01

    The East African drought in the autumn of 2016 caused malnutrition, illness and death. Close to 16 million people across Somalia, Ethiopia and Kenya needed food, water and medical assistance. Many factors influence drought stress and response. However, inevitably the following question is asked: are elevated greenhouse gas concentrations altering the frequency of extreme rainfall deficits? We investigate this with general circulation models (GCMs). After GCM bias correction to match the climatological mean of the CHIRPS data-based rainfall product, climate models project small decreases in the probability of drought with the same (or worse) severity as the 2016 ASO (August to October) East African event by the end of the 21st century, compared with present-day probabilities. However, when the climatological variability of the GCMs is further adjusted to match the CHIRPS data, by additionally bias-correcting for variance, the probability of drought occurrence increases slightly over the same period.
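
    The distinction drawn above, between correcting only the climatological mean and additionally correcting the variance, can be sketched generically. This is an illustration with synthetic numbers, not the study's method or data:

```python
import statistics

def correct_mean(model, obs_clim_mean):
    """Mean-only bias correction: shift the series so its climatological
    mean matches the observed mean; variability is left untouched."""
    shift = obs_clim_mean - statistics.mean(model)
    return [x + shift for x in model]

def correct_mean_and_variance(model, obs_clim_mean, obs_clim_std):
    """Mean-and-variance correction: rescale anomalies to the observed
    standard deviation, then restore the observed mean."""
    m_mean = statistics.mean(model)
    m_std = statistics.stdev(model)
    return [obs_clim_mean + (x - m_mean) * (obs_clim_std / m_std) for x in model]

# Synthetic GCM seasonal rainfall (mm) with too little variability:
gcm = [80.0, 95.0, 100.0, 105.0, 120.0, 90.0, 110.0, 100.0]
obs_mean, obs_std = 120.0, 30.0

shifted = correct_mean(gcm, obs_mean)
scaled = correct_mean_and_variance(gcm, obs_mean, obs_std)
```

    Both corrected series share the observed mean, but only the variance-corrected one reproduces the observed spread, which is why the two approaches can disagree about the likelihood of extremes such as droughts.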

  8. Rational learning and information sampling: on the "naivety" assumption in sampling explanations of judgment biases.

    PubMed

    Le Mens, Gaël; Denrell, Jerker

    2011-04-01

    Recent research has argued that several well-known judgment biases may be due to biases in the available information sample rather than to biased information processing. Most of these sample-based explanations assume that decision makers are "naive": They are not aware of the biases in the available information sample and do not correct for them. Here, we show that this "naivety" assumption is not necessary. Systematically biased judgments can emerge even when decision makers process available information perfectly and are also aware of how the information sample has been generated. Specifically, we develop a rational analysis of Denrell's (2005) experience sampling model, and we prove that when information search is interested rather than disinterested, even rational information sampling and processing can give rise to systematic patterns of errors in judgments. Our results illustrate that a tendency to favor alternatives for which outcome information is more accessible can be consistent with rational behavior. The model offers a rational explanation for behaviors that had previously been attributed to cognitive and motivational biases, such as the in-group bias or the tendency to prefer popular alternatives. 2011 APA, all rights reserved

  9. New Radiosonde Temperature Bias Adjustments for Potential NWP Applications Based on GPS RO Data

    NASA Astrophysics Data System (ADS)

    Sun, B.; Reale, A.; Ballish, B.; Seidel, D. J.

    2014-12-01

    Conventional radiosonde observations (RAOBs), along with satellite and other in situ data, are assimilated in numerical weather prediction (NWP) models to generate a forecast. Radiosonde temperature observations, however, have solar and thermal radiation induced biases (typically a warm daytime bias from sunlight heating the sensor and a cold bias at night as the sensor emits longwave radiation). Radiation corrections made at stations, based on algorithms provided by radiosonde manufacturers or national meteorological agencies, may not be adequate, so biases remain. To adjust these biases, NWP centers may make additional adjustments to radiosonde data. However, the radiation correction (RADCOR) scheme used in the NOAA NCEP data assimilation and forecasting system is outdated and does not cover several widely used contemporary radiosonde types. This study focuses on work whose objective is to improve these corrections and test their impacts on NWP forecasts and analyses. GPS radio occultation (RO) dry temperature (Tdry) is considered highly accurate in the upper troposphere and lower stratosphere, where atmospheric water vapor is negligible. This study uses GPS RO Tdry from the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) as the reference to quantify the radiation-induced RAOB temperature errors, by analyzing ~3 years of collocated RAOB and COSMIC GPS RO data compiled by the NOAA Products Validation System (NPROVS). The new radiation adjustments are developed for different solar angle categories and for all common sonde types flown in the WMO global operational upper-air network. Results for global and several commonly used sondes are presented in the context of NCEP Global Forecast System observation-minus-background statistics, indicating projected impacts in reducing forecast error. Dedicated NWP impact studies to quantify the impact of the new RADCOR scheme on the NCEP analyses and forecasts are under consideration.
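
    A bias-adjustment table of the kind described, stratified by sonde type and solar angle, can be sketched generically. The collocation values below are synthetic, the solar-angle binning is reduced to day/night for brevity, and this is not the RADCOR implementation itself:

```python
from collections import defaultdict

# Synthetic collocations: (sonde_type, solar_elevation_deg, RAOB minus RO, K)
collocations = [
    ("RS92", 45.0, 0.6), ("RS92", 50.0, 0.8), ("RS92", -10.0, -0.3),
    ("RS92", -20.0, -0.5), ("iMet", 40.0, 1.1), ("iMet", 60.0, 0.9),
]

def solar_bin(elev):
    # Real schemes use several solar-angle categories; two bins suffice here.
    return "day" if elev > 0 else "night"

# Mean RAOB-minus-RO difference per (sonde type, solar bin) becomes the
# additive adjustment for that category.
sums = defaultdict(lambda: [0.0, 0])
for sonde, elev, diff in collocations:
    key = (sonde, solar_bin(elev))
    sums[key][0] += diff
    sums[key][1] += 1
adjustment = {key: s / n for key, (s, n) in sums.items()}

def adjust_temperature(t_raob, sonde, elev):
    """Subtract the binned mean bias from a radiosonde temperature."""
    return t_raob - adjustment.get((sonde, solar_bin(elev)), 0.0)
```

    With these numbers the daytime RS92 bias is warm (+0.7 K) and the nighttime bias cold (-0.4 K), matching the sign pattern the abstract describes.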

  10. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    PubMed

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One such development is the multiple-biomarker trial, which investigates multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually, which imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design in which biomarker-negative patients are kept in the study and treated with the standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further, but small, improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
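
    For readers unfamiliar with the Firth correction mentioned above: it maximizes the likelihood penalized by the Jeffreys prior, which reduces small-sample bias and keeps estimates finite even under complete separation. A minimal NumPy sketch of Firth-penalized logistic regression (the study applies the correction to Cox models; this illustrative version uses the simpler logistic case):

```python
import numpy as np

def firth_logistic(X, y, n_iter=100, tol=1e-8):
    """Firth-penalized logistic regression (Jeffreys-prior penalty),
    fitted by Newton scoring with the hat-diagonal score correction."""
    p = X.shape[1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
        W = mu * (1.0 - mu)                      # IRLS weights
        XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
        # Leverages h_i of the weighted hat matrix W^1/2 X (X'WX)^-1 X' W^1/2
        h = np.einsum("ij,jk,ik->i", X, XtWX_inv, X) * W
        # Firth-adjusted score: U*(b) = X' (y - mu + h (1/2 - mu))
        step = XtWX_inv @ (X.T @ (y - mu + h * (0.5 - mu)))
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Completely separated toy data: ordinary maximum likelihood diverges here,
# but the Firth penalty keeps the estimates finite.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
beta = firth_logistic(X, y)
```

    The separated example is the classic failure mode the penalty is designed for: the slope estimate stays finite and modest instead of diverging.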

  11. Top-down quantification of NOx emissions from traffic in an urban area using a high-resolution regional atmospheric chemistry model

    NASA Astrophysics Data System (ADS)

    Kuik, Friderike; Kerschbaumer, Andreas; Lauer, Axel; Lupascu, Aurelia; von Schneidemesser, Erika; Butler, Tim M.

    2018-06-01

    With NO2 limit values being frequently exceeded in European cities, complying with the European air quality regulations still poses a problem for many cities. Traffic is typically a major source of NOx emissions in urban areas. High-resolution chemistry transport modelling can help to assess the impact of high urban NOx emissions on air quality inside and outside of urban areas. However, many modelling studies report an underestimation of modelled NOx and NO2 compared with observations. Part of this model bias has been attributed to an underestimation of NOx emissions, particularly in urban areas. This is consistent with recent measurement studies quantifying underestimations of urban NOx emissions by current emission inventories, identifying the largest discrepancies when the contribution of traffic NOx emissions is high. This study applies a high-resolution chemistry transport model in combination with ambient measurements in order to assess the potential underestimation of traffic NOx emissions in a frequently used emission inventory. The emission inventory is based on officially reported values and the Berlin-Brandenburg area in Germany is used as a case study. The WRF-Chem model is used at a 3 km × 3 km horizontal resolution, simulating the whole year of 2014. The emission data are downscaled from an original resolution of ca. 7 km × 7 km to a resolution of 1 km × 1 km. An in-depth model evaluation including spectral decomposition of observed and modelled time series and error apportionment suggests that an underestimation in traffic emissions is likely one of the main causes of the bias in modelled NO2 concentrations in the urban background, where NO2 concentrations are underestimated by ca. 8 µg m-3 (-30 %) on average over the whole year. Furthermore, a diurnal cycle of the bias in modelled NO2 suggests that a more realistic treatment of the diurnal cycle of traffic emissions might be needed. 
Model problems in simulating the correct mixing in the urban planetary boundary layer probably also contribute to the model bias, particularly in summer. Taking this and other possible sources of model bias into account, a correction factor for traffic NOx emissions of ca. 3 is estimated for weekday daytime traffic emissions in the core urban area, which corresponds to an overall underestimation of traffic NOx emissions in the core urban area of ca. 50 %. Sensitivity simulations for the months of January and July using the calculated correction factor show that the weekday model bias in the urban background can be reduced, on average, from -8.8 µg m-3 (-26 %) to -5.4 µg m-3 (-16 %) in January and from -10.3 µg m-3 (-46 %) to -7.6 µg m-3 (-34 %) in July. In addition, the negative bias of weekday NO2 concentrations downwind of the city in the rural and suburban background can be reduced from -3.4 µg m-3 (-12 %) to -1.2 µg m-3 (-4 %) in January and from -3.0 µg m-3 (-22 %) to -1.9 µg m-3 (-14 %) in July. The results and their consistency with findings from other studies suggest that more research is needed to understand the spatial and temporal variability of real-world NOx emissions from traffic more accurately, and to apply this understanding to the inventories used in high-resolution chemical transport models.

  12. Accounting for interannual variability: A comparison of options for water resources climate change impact assessments

    NASA Astrophysics Data System (ADS)

    Johnson, Fiona; Sharma, Ashish

    2011-04-01

    Empirical scaling approaches for constructing rainfall scenarios from general circulation model (GCM) simulations are commonly used in water resources climate change impact assessments. However, these approaches have a number of limitations, not the least of which is that they cannot account for changes in variability or persistence at annual and longer time scales. Bias correction of GCM rainfall projections offers an attractive alternative to scaling methods as it has similar advantages to scaling in that it is computationally simple, can consider multiple GCM outputs, and can be easily applied to different regions or climatic regimes. In addition, it also allows for interannual variability to evolve according to the GCM simulations, which provides additional scenarios for risk assessments. This paper compares two scaling and four bias correction approaches for estimating changes in future rainfall over Australia and for a case study for water supply from the Warragamba catchment, located near Sydney, Australia. A validation of the various rainfall estimation procedures is conducted on the basis of the latter half of the observational rainfall record. It was found that the method leading to the lowest prediction errors varies depending on the rainfall statistic of interest. The flexibility of bias correction approaches in matching rainfall parameters at different frequencies is demonstrated. The results also indicate that for Australia, the scaling approaches lead to smaller estimates of uncertainty associated with changes to interannual variability for the period 2070-2099 compared to the bias correction approaches. These changes are also highlighted using the case study for the Warragamba Dam catchment.
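
    Of the bias correction approaches compared in studies like this one, empirical quantile mapping is among the most common: each model value is replaced by the observed value at the same empirical quantile of the reference period. A toy sketch with invented rainfall values:

```python
import bisect

def empirical_quantile_map(model_ref, obs_ref, value):
    """Map a model value to the observed value at the same empirical
    quantile. `model_ref` and `obs_ref` are reference-period samples
    (e.g. daily rainfall); nearest-rank quantiles keep the sketch short."""
    model_sorted = sorted(model_ref)
    obs_sorted = sorted(obs_ref)
    # Empirical non-exceedance probability of `value` in the model climate:
    q = bisect.bisect_right(model_sorted, value) / len(model_sorted)
    # Observed value at that quantile:
    idx = min(int(q * len(obs_sorted)), len(obs_sorted) - 1)
    return obs_sorted[idx]

# Toy example: the model drizzles too often and too lightly.
model_ref = [0, 0, 1, 1, 2, 2, 3, 4, 5, 6]
obs_ref = [0, 0, 0, 0, 1, 3, 5, 8, 12, 20]
corrected = [empirical_quantile_map(model_ref, obs_ref, v) for v in [1, 4, 6]]
```

    Unlike simple scaling, this mapping corrects the whole distribution, which is why quantile-based methods can match rainfall statistics at several frequencies at once; the trade-off, as the abstract notes, is extra uncertainty when the mapping is extrapolated to a changed climate.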

  13. High-resolution Monthly Satellite Precipitation Product over the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Hashemi, H.; Fayne, J.; Knight, R. J.; Lakshmi, V.

    2017-12-01

    We present a data set that enhances the accuracy and spatial resolution of the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) monthly product 3B43. We developed a correction function to improve the accuracy of TRMM 3B43 (spatial resolution 25 km) by estimating and removing the bias in the satellite data using a ground-based precipitation data set. We observed a strong relationship between the bias and land surface elevation; TRMM 3B43 tends to underestimate the ground-based product at elevations above 1500 m above mean sea level over the conterminous United States. A relationship was developed between satellite bias and elevation. We then resampled TRMM 3B43 to the Digital Elevation Model (DEM) data set at a spatial resolution of 30 arc seconds (~1 km on the ground). The resulting high-resolution satellite-based data set was corrected using the correction function based on the bias-elevation relationship. Assuming that each rain gauge represents an area of 1 km2, we verified our product against 9,200 rain gauges across the conterminous United States. The new product was compared with the gauges that have 50, 60, 70, 80, 90, and 100% temporal coverage within the TRMM period of 1998 to 2015. Comparisons between the high-resolution corrected satellite-based data and the gauges showed excellent agreement. The new product captures more detail in the changes in precipitation over mountainous regions than the original TRMM 3B43.
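
    The bias-elevation correction described above amounts to regressing the satellite bias on elevation and adding the fitted underestimation back at high elevations. A hedged sketch with synthetic values (the 1500 m threshold follows the abstract; the calibration data and fitted coefficients are invented):

```python
def fit_bias_vs_elevation(elevations, biases):
    """Ordinary least squares fit of satellite bias (gauge minus satellite)
    against elevation; returns (intercept, slope)."""
    n = len(elevations)
    mx = sum(elevations) / n
    my = sum(biases) / n
    sxx = sum((x - mx) ** 2 for x in elevations)
    sxy = sum((x - mx) * (y - my) for x, y in zip(elevations, biases))
    slope = sxy / sxx
    return my - slope * mx, slope

def correct_precip(p_sat, elev, intercept, slope, threshold=1500.0):
    """Add the fitted underestimation back above the elevation threshold."""
    if elev <= threshold:
        return p_sat
    return p_sat + intercept + slope * elev

# Synthetic calibration points: the bias grows with elevation.
elevations = [500.0, 1000.0, 2000.0, 3000.0]
biases = [0.0, 0.0, 10.0, 30.0]     # mm per month, gauge minus satellite
intercept, slope = fit_bias_vs_elevation(elevations, biases)
```

    Below the threshold the satellite value passes through unchanged; above it, the monthly precipitation is inflated by the elevation-dependent bias estimate.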

  14. A sampling bias in identifying children in foster care using Medicaid data.

    PubMed

    Rubin, David M; Pati, Susmita; Luan, Xianqun; Alessandrini, Evaline A

    2005-01-01

    Prior research identified foster care children using Medicaid eligibility codes specific to foster care, but it is unknown whether these codes capture all foster care children. To describe the sampling bias in relying on Medicaid eligibility codes to identify foster care children. Using foster care administrative files linked to Medicaid data, we describe the proportion of children whose Medicaid eligibility was correctly encoded as foster child during a 1-year follow-up period following a new episode of foster care. Sampling bias is described by comparing claims in mental health, emergency department (ED), and other ambulatory settings among correctly and incorrectly classified foster care children. Twenty-eight percent of the 5683 sampled children were incorrectly classified in Medicaid eligibility files. In a multivariate logistic regression model, correct classification was associated with duration of foster care (>9 vs <2 months, odds ratio [OR] 7.67, 95% confidence interval [CI] 7.17-7.97), number of placements (>3 vs 1 placement, OR 4.20, 95% CI 3.14-5.64), and placement in a group home among adjudicated dependent children (OR 1.87, 95% CI 1.33-2.63). Compared with incorrectly classified children, correctly classified foster care children were 3 times more likely to use any services, 2 times more likely to visit the ED, 3 times more likely to make ambulatory visits, and 4 times more likely to use mental health care services (P < .001 for all comparisons). Identifying children in foster care using Medicaid eligibility files is prone to sampling bias that over-represents children in foster care who use more services.

  15. Dye bias correction in dual-labeled cDNA microarray gene expression measurements.

    PubMed Central

    Rosenzweig, Barry A; Pine, P Scott; Domon, Olen E; Morris, Suzanne M; Chen, James J; Sistare, Frank D

    2004-01-01

    A significant limitation to the analytical accuracy and precision of dual-labeled spotted cDNA microarrays is the signal error due to dye bias. Transcript-dependent dye bias may be due to gene-specific differences of incorporation of two distinctly different chemical dyes and the resultant differential hybridization efficiencies of these two chemically different targets for the same probe. Several approaches were used to assess and minimize the effects of dye bias on fluorescent hybridization signals and maximize the experimental design efficiency of a cell culture experiment. Dye bias was measured at the individual transcript level within each batch of simultaneously processed arrays by replicate dual-labeled split-control sample hybridizations and accounted for a significant component of fluorescent signal differences. This transcript-dependent dye bias alone could introduce unacceptably high numbers of both false-positive and false-negative signals. We found that within a given set of concurrently processed hybridizations, the bias is remarkably consistent and therefore measurable and correctable. The additional microarrays and reagents required for paired technical replicate dye-swap corrections commonly performed to control for dye bias could be costly to end users. Incorporating split-control microarrays within a set of concurrently processed hybridizations to specifically measure dye bias can eliminate the need for technical dye swap replicates and reduce microarray and reagent costs while maintaining experimental accuracy and technical precision. These data support a practical and more efficient experimental design to measure and mathematically correct for dye bias. PMID:15033598
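
    The split-control design described above reduces to a simple recipe: hybridizing the same sample in both channels makes any nonzero log-ratio a direct per-transcript measurement of dye bias, which is then subtracted from concurrently processed experimental arrays. A toy illustration with invented intensities:

```python
import math

def log2_ratio(cy5, cy3):
    """Per-transcript log2 intensity ratio between the two dye channels."""
    return math.log2(cy5 / cy3)

# Split-control hybridization: the SAME sample labeled with both dyes, so
# any nonzero log-ratio is pure dye bias for that transcript.
control_cy5 = {"geneA": 900.0, "geneB": 400.0}
control_cy3 = {"geneA": 1100.0, "geneB": 390.0}
dye_bias = {g: log2_ratio(control_cy5[g], control_cy3[g]) for g in control_cy5}

# Experimental array (treatment in Cy5, reference in Cy3) from the same
# processing batch: subtract the per-transcript dye bias.
expt_cy5 = {"geneA": 2000.0, "geneB": 820.0}
expt_cy3 = {"geneA": 1000.0, "geneB": 400.0}
corrected = {g: log2_ratio(expt_cy5[g], expt_cy3[g]) - dye_bias[g]
             for g in expt_cy5}
```

    The batch restriction matters: the abstract's finding is that the bias is consistent only within a set of concurrently processed hybridizations, so the control and experimental arrays must come from the same batch.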

  16. Permutation importance: a corrected feature importance measure.

    PubMed

    Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas

    2010-05-15

    In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and RandomForest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R. Contact: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de. Supplementary data are available at Bioinformatics online.
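
    The PIMP heuristic can be sketched compactly: permute the outcome vector repeatedly, recompute the importance score each time, and read off a p-value from the resulting null distribution. The sketch below substitutes absolute Pearson correlation for the random forest importance used in the paper, purely to keep the example self-contained; any importance score plugs into the same loop.

```python
import random

def importance(xs, y):
    """Stand-in importance score: absolute Pearson correlation with the
    outcome (the paper uses RF importances; any score plugs in here)."""
    n = len(y)
    mx, my = sum(xs) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, y))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return abs(cov / (sx * sy))

def pimp_p_value(xs, y, n_perm=200, seed=1):
    """PIMP: permute the outcome to build a null distribution of the
    importance score; return the add-one permutation p-value."""
    rng = random.Random(seed)
    observed = importance(xs, y)
    y_perm = list(y)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        if importance(xs, y_perm) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

random.seed(0)
y = [random.gauss(0.0, 1.0) for _ in range(60)]
informative = [yi + random.gauss(0.0, 0.5) for yi in y]   # correlated with y
noise = [random.gauss(0.0, 1.0) for _ in range(60)]       # uninformative
p_informative = pimp_p_value(informative, y)
p_noise = pimp_p_value(noise, y)
```

    The informative predictor ends up with a small p-value while the noise predictor does not, which is exactly claims (i) and (ii) of the abstract in miniature.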

  17. Measuring and Benchmarking Technical Efficiency of Public Hospitals in Tianjin, China

    PubMed Central

    Li, Hao; Dong, Siping

    2015-01-01

    China has long been stuck in applying traditional data envelopment analysis (DEA) models to measure the technical efficiency of public hospitals without bias correction of efficiency scores. In this article, we have introduced the Bootstrap-DEA approach from the international literature to analyze the technical efficiency of public hospitals in Tianjin (China) and have tried to improve the application of this method for benchmarking and inter-organizational learning. It is found that the bias-corrected efficiency scores of Bootstrap-DEA differ significantly from those of the traditional Banker, Charnes, and Cooper (BCC) model, which means that Chinese researchers need to update their DEA models for more scientific calculation of hospital efficiency scores. Our research has helped shorten the gap between China and the international community in the relative efficiency measurement and improvement of hospitals. It is suggested that Bootstrap-DEA be widely applied in future research to measure the relative efficiency and productivity of Chinese hospitals, so as to better support efficiency improvement and related decision making. PMID:26396090
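
    The bias correction underlying Bootstrap-DEA follows the generic bootstrap recipe: estimate an estimator's bias as the mean of its bootstrap replicates minus the original estimate, then subtract that bias. The sketch below applies the recipe to a deliberately biased variance estimator rather than to DEA efficiency scores, which require a linear-programming solver:

```python
import random
import statistics

def mle_variance(xs):
    """Plug-in (MLE) variance, biased downward by a factor (n-1)/n."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def bootstrap_bias_corrected(xs, estimator, n_boot=2000, seed=7):
    """Generic bootstrap bias correction (the same idea Simar-Wilson style
    procedures apply to DEA efficiency scores): estimate the bias as the
    mean bootstrap estimate minus the original, then subtract it."""
    rng = random.Random(seed)
    theta = estimator(xs)
    boot = [estimator(rng.choices(xs, k=len(xs))) for _ in range(n_boot)]
    bias = statistics.mean(boot) - theta
    return theta - bias          # equivalently 2 * theta - mean(boot)

random.seed(3)
sample = [random.gauss(10.0, 2.0) for _ in range(20)]
raw = mle_variance(sample)
corrected = bootstrap_bias_corrected(sample, mle_variance)
```

    For this estimator the bootstrap bias is negative, so the corrected value is pushed upward toward the unbiased sample variance; for DEA efficiency scores the same mechanics push the scores away from the over-optimistic frontier.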

  18. Bias corrections of GOSAT SWIR XCO2 and XCH4 with TCCON data and their evaluation using aircraft measurement data

    NASA Astrophysics Data System (ADS)

    Inoue, Makoto; Morino, Isamu; Uchino, Osamu; Nakatsuru, Takahiro; Yoshida, Yukio; Yokota, Tatsuya; Wunch, Debra; Wennberg, Paul O.; Roehl, Coleen M.; Griffith, David W. T.; Velazco, Voltaire A.; Deutscher, Nicholas M.; Warneke, Thorsten; Notholt, Justus; Robinson, John; Sherlock, Vanessa; Hase, Frank; Blumenstock, Thomas; Rettinger, Markus; Sussmann, Ralf; Kyrö, Esko; Kivi, Rigel; Shiomi, Kei; Kawakami, Shuji; De Mazière, Martine; Arnold, Sabrina G.; Feist, Dietrich G.; Barrow, Erica A.; Barney, James; Dubey, Manvendra; Schneider, Matthias; Iraci, Laura T.; Podolske, James R.; Hillyard, Patrick W.; Machida, Toshinobu; Sawa, Yousuke; Tsuboi, Kazuhiro; Matsueda, Hidekazu; Sweeney, Colm; Tans, Pieter P.; Andrews, Arlyn E.; Biraud, Sebastien C.; Fukuyama, Yukio; Pittman, Jasna V.; Kort, Eric A.; Tanaka, Tomoaki

    2016-08-01

    We describe a method for removing systematic biases of column-averaged dry air mole fractions of CO2 (XCO2) and CH4 (XCH4) derived from short-wavelength infrared (SWIR) spectra of the Greenhouse gases Observing SATellite (GOSAT). We conduct correlation analyses between the GOSAT biases and simultaneously retrieved auxiliary parameters. We use these correlations to bias correct the GOSAT data, removing these spurious correlations. Data from the Total Carbon Column Observing Network (TCCON) were used as reference values for this regression analysis. To evaluate the effectiveness of this correction method, the uncorrected/corrected GOSAT data were compared to independent XCO2 and XCH4 data derived from aircraft measurements taken for the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project, the National Oceanic and Atmospheric Administration (NOAA), the US Department of Energy (DOE), the National Institute for Environmental Studies (NIES), the Japan Meteorological Agency (JMA), the HIAPER Pole-to-Pole observations (HIPPO) program, and the GOSAT validation aircraft observation campaign over Japan. These comparisons demonstrate that the empirically derived bias correction improves the agreement between GOSAT XCO2/XCH4 and the aircraft data. Finally, we present spatial distributions and temporal variations of the derived GOSAT biases.
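
    The regression step of such an empirical correction can be illustrated with synthetic numbers. The real analysis regresses GOSAT-minus-TCCON differences on several retrieved auxiliary parameters; the single parameter and every coefficient below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
aux = rng.normal(size=500)                     # a retrieved auxiliary parameter
truth = 400.0 + rng.normal(0.0, 1.0, 500)      # TCCON-like reference XCO2 (ppm)
# retrieval bias correlates with the auxiliary parameter (invented coefficients)
retrieval = truth + 1.5 * aux - 0.8 + rng.normal(0.0, 0.3, 500)

# regress the retrieval-minus-reference bias on the auxiliary parameter,
# then subtract the fitted bias from every retrieval
slope, intercept = np.polyfit(aux, retrieval - truth, 1)
corrected = retrieval - (intercept + slope * aux)
```

    After the correction the mean bias vanishes on the training data and the scatter against the reference shrinks to the residual retrieval noise, mirroring the improved GOSAT-aircraft agreement reported above.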

  19. In an occupational health surveillance study, auxiliary data from administrative health and occupational databases effectively corrected for nonresponse.

    PubMed

    Santin, Gaëlle; Geoffroy, Béatrice; Bénézet, Laetitia; Delézire, Pauline; Chatelot, Juliette; Sitta, Rémi; Bouyer, Jean; Gueguen, Alice

    2014-06-01

    To show how reweighting can correct for unit nonresponse bias in an occupational health surveillance survey by using data from administrative databases in addition to classic sociodemographic data. In 2010, about 10,000 workers covered by a French health insurance fund were randomly selected and were sent a postal questionnaire. Simultaneously, auxiliary data from routine health insurance and occupational databases were collected for all these workers. To model the probability of response to the questionnaire, logistic regressions were performed with these auxiliary data to compute weights for correcting unit nonresponse. Corrected prevalences of questionnaire variables were estimated under several assumptions regarding the missing data process. The impact of reweighting was evaluated by a sensitivity analysis. Respondents had more reimbursement claims for medical services than nonrespondents but fewer reimbursements for medical prescriptions or hospitalizations. Salaried workers, workers in service companies, and those who had held their job for longer than 6 months were more likely to respond. Corrected prevalences after reweighting were slightly different from crude prevalences for some variables but meaningfully different for others. Linking health insurance and occupational data effectively corrects for nonresponse bias using reweighting techniques. Sociodemographic variables alone may not be sufficient to correct for nonresponse. Copyright © 2014 Elsevier Inc. All rights reserved.
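
    The reweighting idea can be sketched under the stated assumptions: response is modelled by a logistic regression on auxiliary variables, and each respondent is weighted by the inverse of its predicted response probability. The data-generating numbers below are invented, not the survey's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20000
aux = rng.normal(size=(n, 2))                    # auxiliaries from admin databases
outcome = (rng.random(n) < 0.2 + 0.4 * (aux[:, 0] > 0)).astype(int)
p_resp = 1 / (1 + np.exp(-(0.5 + aux[:, 0])))    # response depends on aux -> bias
responded = rng.random(n) < p_resp

# model the response probability from the auxiliary data, then weight each
# respondent by the inverse of its predicted response probability
p_hat = LogisticRegression().fit(aux, responded).predict_proba(aux[responded])[:, 1]
crude = outcome[responded].mean()
corrected = np.average(outcome[responded], weights=1.0 / p_hat)
```

    The crude respondent prevalence over-represents the high-response stratum; the inverse-probability weights pull the estimate back toward the full-population value, which is the mechanism the abstract describes.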

  20. A Comparison of Methods for a Priori Bias Correction in Soil Moisture Data Assimilation

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.; Reichle, Rolf H.; Harrison, Kenneth W.; Peters-Lidard, Christa D.; Yatheendradas, Soni; Santanello, Joseph A.

    2011-01-01

    Data assimilation is being increasingly used to merge remotely sensed land surface variables such as soil moisture, snow and skin temperature with estimates from land models. Its success, however, depends on unbiased model predictions and unbiased observations. Here, a suite of continental-scale, synthetic soil moisture assimilation experiments is used to compare two approaches that address typical biases in soil moisture prior to data assimilation: (i) parameter estimation to calibrate the land model to the climatology of the soil moisture observations, and (ii) scaling of the observations to the model's soil moisture climatology. To enable this research, an optimization infrastructure was added to the NASA Land Information System (LIS) that includes gradient-based optimization methods and global, heuristic search algorithms. The land model calibration eliminates the bias but does not necessarily result in more realistic model parameters. Nevertheless, the experiments confirm that model calibration yields assimilation estimates of surface and root zone soil moisture that are as skillful as those obtained through scaling of the observations to the model's climatology. Analysis of innovation diagnostics underlines the importance of addressing bias in soil moisture assimilation and confirms that both approaches adequately address the issue.
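
    In its simplest form, approach (ii), scaling the observations to the model's climatology, reduces to matching the first two moments. A minimal sketch (operational soil moisture assimilation typically matches the full CDFs rather than just two moments):

```python
import numpy as np

def scale_obs_to_model(obs, model):
    """Rescale observations so their mean and standard deviation match the
    model climatology; a two-moment stand-in for full CDF matching."""
    return (obs - obs.mean()) / obs.std() * model.std() + model.mean()
```

    The scaled observations then carry the model's climatological mean and variability, so the assimilation update corrects anomalies rather than the systematic offset.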

  1. Regional geoid computation by least squares modified Hotine's formula with additive corrections

    NASA Astrophysics Data System (ADS)

    Märdla, Silja; Ellmann, Artu; Ågren, Jonas; Sjöberg, Lars E.

    2018-03-01

    Geoid and quasigeoid modelling from gravity anomalies by the method of least squares modification of Stokes's formula with additive corrections is adapted for use with gravity disturbances and Hotine's formula. The biased, unbiased and optimum versions of least squares modification are considered. Equations are presented for the four additive corrections that account for the combined (direct plus indirect) effect of downward continuation (DWC), topographic, atmospheric and ellipsoidal corrections in geoid or quasigeoid modelling. The geoid or quasigeoid modelling scheme by the least squares modified Hotine formula is numerically verified, analysed and compared to the Stokes counterpart in a heterogeneous study area. The resulting geoid models and the additive corrections computed for use with either Stokes's or Hotine's formula differ most in high topography areas. Over the study area (reaching almost 2 km in altitude), the approximate geoid models (before the additive corrections) differ by 7 mm on average with a 3 mm standard deviation (SD) and a maximum of 1.3 cm. The additive corrections, out of which only the DWC correction has a numerically significant difference, improve the agreement between respective geoid or quasigeoid models to an average difference of 5 mm with a 1 mm SD and a maximum of 8 mm.

  2. Testing Mediation in Structural Equation Modeling: The Effectiveness of the Test of Joint Significance

    ERIC Educational Resources Information Center

    Leth-Steensen, Craig; Gallitto, Elena

    2016-01-01

    A large number of approaches have been proposed for estimating and testing the significance of indirect effects in mediation models. In this study, four sets of Monte Carlo simulations involving full latent variable structural equation models were run in order to contrast the effectiveness of the currently popular bias-corrected bootstrapping…

  3. DESIGN ARTIFACTS IN EULERIAN REGIONAL AIR QUALITY MODELS: EVALUATION OF THE EFFECTS OF LAYER THICKNESS AND VERTICAL PROFILE CORRECTION ON SURFACE OZONE CONCENTRATIONS

    EPA Science Inventory

    Evaluation studies of the Regional Acid Deposition Model (RADM) results have revealed that the model exhibits a high bias in surface SO2 and O3 concentrations, especially during nighttime hours. Comparison of the RADM results with surface measurements of hourly ozone concentr...

  4. Fat fraction bias correction using T1 estimates and flip angle mapping.

    PubMed

    Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A

    2014-01-01

    To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.

  5. Bias correction for magnetic resonance images via joint entropy regularization.

    PubMed

    Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang

    2014-01-01

    Due to the imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms have been proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruption and also tested on real MR data. The test results show that the proposed two methods can effectively remove the bias field and perform comparably to state-of-the-art methods.

  6. Comparison Of Downscaled CMIP5 Precipitation Datasets For Projecting Changes In Extreme Precipitation In The San Francisco Bay Area.

    NASA Technical Reports Server (NTRS)

    Milesi, Cristina; Costa-Cabral, Mariza; Rath, John; Mills, William; Roy, Sujoy; Thrasher, Bridget; Wang, Weile; Chiang, Felicia; Loewenstein, Max; Podolske, James

    2014-01-01

    Water resource managers planning for the adaptation to future events of extreme precipitation now have access to high resolution downscaled daily projections derived from statistical bias correction and constructed analogs. We also show that along the Pacific Coast the Northern Oscillation Index (NOI) is a reliable predictor of storm likelihood, and therefore a predictor of seasonal precipitation totals and likelihood of extremely intense precipitation. Such time series can be used to project intensity duration curves into the future or used as input to stormwater models. However, few climate projection studies have explored the impact of the type of downscaling method used on the range and uncertainty of predictions for local flood protection studies. Here we present a study of the future climate flood risk at NASA Ames Research Center, located in the South Bay Area, by comparing the range of predictions in extreme precipitation events calculated from three sets of time series downscaled from CMIP5 data: 1) the Bias Correction Constructed Analogs method dataset downscaled to a 1/8 degree grid (12 km); 2) the Bias Correction Spatial Disaggregation method downscaled to a 1 km grid; 3) a statistical model of extreme daily precipitation events and projected NOI from CMIP5 models. In addition, predicted years of extreme precipitation are used to estimate the risk of overtopping of the retention pond located on the site through simulations of the EPA SWMM hydrologic model. Preliminary results indicate that the intensity of extreme precipitation events is expected to increase and flood the NASA Ames retention pond. The results from these estimations will assist flood protection managers in planning for infrastructure adaptations.

  7. Correction of aeroheating-induced intensity nonuniformity in infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Li; Yan, Luxin; Zhao, Hui; Dai, Xiaobing; Zhang, Tianxu

    2016-05-01

    Aeroheating-induced intensity nonuniformity severely degrades the performance of an infrared (IR) imaging system in high-speed flight. In this paper, we propose a new approach to the correction of intensity nonuniformity in IR images. The basic assumption is that the low-frequency intensity bias is additive and smoothly varying, so that it can be modeled as a bivariate polynomial and estimated by using an isotropic total variation (TV) model. A half quadratic penalty method is applied to the isotropic form of TV discretization, and an alternating minimization algorithm is adopted for solving the optimization model. The experimental results on simulated and real aerothermal images show that the proposed correction method can effectively improve IR image quality.

  8. A calibrated, high-resolution goes satellite solar insolation product for a climatology of Florida evapotranspiration

    USGS Publications Warehouse

    Paech, S.J.; Mecikalski, J.R.; Sumner, D.M.; Pathak, C.S.; Wu, Q.; Islam, S.; Sangoyomi, T.

    2009-01-01

    Estimates of incoming solar radiation (insolation) from Geostationary Operational Environmental Satellite observations have been produced for the state of Florida over a 10-year period (1995-2004). These insolation estimates were developed into well-calibrated half-hourly and daily integrated solar insolation fields over the state at 2 km resolution, in addition to a 2-week running minimum surface albedo product. Model results of the daily integrated insolation were compared with ground-based pyranometers, and as a result, the entire dataset was calibrated. This calibration was accomplished through a three-step process: (1) comparison with ground-based pyranometer measurements on clear (noncloudy) reference days, (2) correcting for a bias related to cloudiness, and (3) deriving a monthly bias correction factor. Precalibration results indicated good model performance, with a station-averaged model error of 2.2 MJ m-2 day-1 (13%). Calibration reduced errors to 1.7 MJ m-2 day-1 (10%), and also removed temporal-related, seasonal-related, and satellite sensor-related biases. The calibrated insolation dataset will subsequently be used by state of Florida Water Management Districts to produce statewide, 2-km resolution maps of estimated daily reference and potential evapotranspiration for water management-related activities. © 2009 American Water Resources Association.

  9. The usefulness of “corrected” body mass index vs. self-reported body mass index: comparing the population distributions, sensitivity, specificity, and predictive utility of three correction equations using Canadian population-based data

    PubMed Central

    2014-01-01

    Background National data on body mass index (BMI), computed from self-reported height and weight, is readily available for many populations including the Canadian population. Because self-reported weight is found to be systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets which include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. Methods The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults age 18–65, and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared the self-reported and corrected BMI in terms of association with health outcomes using logistic regression. Results All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23–28 kg/m2 BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. 
Conclusions If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a population, then a correction of any kind included in this study is recommended. If the researcher is interested in using BMI as a predictor variable for modelling disease, then both self-reported and corrected BMI result in biased estimates of association. PMID:24885210
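
    A correction equation of the kind evaluated here can be sketched as an OLS regression of measured on self-reported BMI in a calibration sample. The coefficients and the under-reporting pattern below are invented for illustration; they are not the CCHS-derived equations.

```python
import numpy as np

rng = np.random.default_rng(2)
measured = rng.normal(27.0, 5.0, 2000)
# under-reporting that grows with BMI; all coefficients are invented
self_rep = measured - (0.3 + 0.05 * (measured - 27.0)) + rng.normal(0.0, 0.5, 2000)

# correction equation: OLS regression of measured on self-reported BMI
b, a = np.polyfit(self_rep, measured, 1)        # measured ~ a + b * self_rep
corrected = a + b * self_rep
```

    Applied to the calibration sample itself, the fitted equation removes the mean bias of self-report by construction; the open question the study addresses is how well such equations transfer to new samples and to disease-association models.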

  10. Improving winter leaf area index estimation in evergreen coniferous forests and its significance in carbon and water fluxes modeling

    NASA Astrophysics Data System (ADS)

    Wang, R.; Chen, J. M.; Luo, X.

    2016-12-01

    Modeling of carbon and water fluxes at the continental and global scales requires remotely sensed LAI as input. For evergreen needleleaf forests (ENF), severely underestimated winter LAI has been an issue for most available remote sensing products, which can introduce negative bias into modeled Gross Primary Productivity (GPP) and evapotranspiration (ET). Unlike deciduous trees, which shed all their leaves in winter, conifers retain part of their needles, and the proportion retained depends on needle longevity. In this work, the Boreal Ecosystem Productivity Simulator (BEPS) was used to model GPP and ET at eight FLUXNET Canada ENF sites. Two sets of LAI were used as model inputs: the 250 m, 10-day University of Toronto (U of T) LAI product Version 2, and a corrected LAI based on the U of T product and the needle longevity of the corresponding tree species at individual sites. Validating modeled daily GPP (gC/m2) against site measurements, the mean RMSE over the eight sites decreases from 1.85 to 1.15, and the bias changes from -0.99 to -0.19. For daily ET (mm), mean RMSE decreases from 0.63 to 0.33, and the bias changes from -0.31 to -0.16. Most of the improvements occur at the beginning and end of the growing season, when the LAI correction is large and temperature is still suitable for photosynthesis and transpiration. For the dormant season, the improvement in ET simulation mostly comes from the increased interception of precipitation brought by the elevated LAI during that time. The results indicate that model performance can be improved by applying the corrected LAI. Improving winter remotely sensed LAI can have a large impact on the land surface carbon and energy budget.

  11. Bias Corrections for Standardized Effect Size Estimates Used with Single-Subject Experimental Designs

    ERIC Educational Resources Information Center

    Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2014-01-01

    A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…

  12. Modeling The Hydrology And Water Allocation Under Climate Change In Rural River Basins: A Case Study From Nam Ngum River Basin, Laos

    NASA Astrophysics Data System (ADS)

    Jayasekera, D. L.; Kaluarachchi, J.; Kim, U.

    2011-12-01

    Rural river basins with sufficient water availability to maintain economic livelihoods can be affected by seasonal fluctuations of precipitation and sometimes by droughts. In addition, climate change impacts can alter future water availability. General Circulation Models (GCMs) provide credible quantitative estimates of future climate conditions, but such estimates are often characterized by bias and coarse resolution, making it necessary to downscale the outputs for use in regional hydrologic models. This study develops a methodology to downscale and project future monthly precipitation in moderate-scale basins where data are limited. A stochastic framework for single-site and multi-site generation of weekly rainfall is developed that preserves the historical temporal and spatial correlation structures. The spatial correlations in the simulated occurrences and amounts are induced using spatially correlated yet serially independent random numbers. This method is applied to generate weekly precipitation data for a 100-year period in the Nam Ngum River Basin (NNRB), which has a land area of 16,780 km2 in Lao P.D.R. The method is developed and applied using precipitation data from 1961 to 2000 for 10 selected weather stations that represent the basin's rainfall characteristics. A bias-correction method based on fitted theoretical probability distribution transformations is applied to improve the monthly mean frequency, intensity, and amount of raw GCM precipitation predicted at a given weather station using CGCM3.1 and ECHAM5 for the SRES A2 emission scenario. The bias-correction procedure adjusts GCM precipitation to approximate the long-term frequency and intensity distribution observed at a given weather station. The index of agreement and mean absolute error are determined to assess the overall ability and performance of the bias correction method.
The generated precipitation series aggregated at monthly time step was perturbed by the change factors estimated using the corrected GCM and baseline scenarios for future time periods of 2011-2050 and 2051-2090. A network based hydrologic and water resources model, WEAP, was used to simulate the current water allocation and management practices to identify the impacts of climate change in the 20th century. The results of this work are used to identify the multiple challenges faced by stakeholders and planners in water allocation for competing demands in the presence of climate change impacts.
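
    Distribution-mapping corrections like the one described above can be sketched in their empirical form. Note the study fits theoretical distributions rather than using empirical CDFs as below, so this is a simplified stand-in for the idea:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_vals):
    """Empirical quantile mapping: replace each model value by the observed
    value at the same quantile of the historical model distribution."""
    q = np.searchsorted(np.sort(model_hist), model_vals) / len(model_hist)
    return np.quantile(obs_hist, np.clip(q, 0.0, 1.0))
```

    Mapping the historical model values through their own CDF onto the observed CDF recovers the observed distribution, which is exactly the property the bias correction is meant to enforce at each station.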

  13. Correction Technique for Raman Water Vapor Lidar Signal-Dependent Bias and Suitability for Water Vapor Trend Monitoring in the Upper Troposphere

    NASA Technical Reports Server (NTRS)

    Whiteman, D. N.; Cadirola, M.; Venable, D.; Calhoun, M.; Miloshevich, L; Vermeesch, K.; Twigg, L.; Dirisu, A.; Hurst, D.; Hall, E.; et al.

    2012-01-01

    The MOHAVE-2009 campaign brought together diverse instrumentation for measuring atmospheric water vapor. We report on the participation of the ALVICE (Atmospheric Laboratory for Validation, Interagency Collaboration and Education) mobile laboratory in the MOHAVE-2009 campaign. In appendices we also report on the performance of the corrected Vaisala RS92 radiosonde measurements during the campaign, on a new radiosonde-based calibration algorithm that reduces the influence of atmospheric variability on the derived calibration constant, and on other results of the ALVICE deployment. The MOHAVE-2009 campaign permitted the participating Raman lidar systems to discover and address measurement biases in the upper troposphere and lower stratosphere. The ALVICE lidar system was found to possess a wet bias, which was attributed to fluorescence of insect material deposited on the telescope early in the mission. Other sources of wet biases are discussed and data from other Raman lidar systems are investigated, revealing that wet biases in upper tropospheric (UT) and lower stratospheric (LS) water vapor measurements appear to be quite common in Raman lidar systems. Lower stratospheric climatology of water vapor is investigated both as a means to check for the existence of these wet biases in Raman lidar data and as a source of correction for the bias. A correction technique is derived and applied to the ALVICE lidar water vapor profiles. Good agreement is found between corrected ALVICE lidar measurements and those of the RS92, frost point hygrometer and total column water. The correction is offered as a general method both to quality control Raman water vapor lidar data and to correct data that have signal-dependent bias. The influence of the correction is shown to be small in regions of the upper troposphere where recent work indicates detection of trends in atmospheric water vapor may be most robust. 
The correction shown here holds promise for permitting useful upper tropospheric water vapor profiles to be consistently measured by Raman lidar within NDACC (Network for the Detection of Atmospheric Composition Change) and elsewhere, despite the prevalence of instrumental and atmospheric effects that can contaminate the very low signal-to-noise measurements in the UT.

  14. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    NASA Astrophysics Data System (ADS)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak over threshold method applied, although we stress that researchers should strictly adhere to rules from extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including an increase of +30(±21), +38(±34) and +51(±85)% for 2, 20 and 100 year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
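
    The relative-change (delta) correction mentioned in this abstract can be sketched in its simplest, mean-scaling form; applying it to maxima in practice involves the extreme value fitting discussed above, which this toy function omits:

```python
import numpy as np

def delta_change(obs_hist, model_ctrl, model_fut):
    """Relative-change (delta) correction: perturb the observed record by the
    model's future-to-control ratio instead of using raw model output."""
    return obs_hist * (np.mean(model_fut) / np.mean(model_ctrl))
```

    Because forecast and control come from the same model, systematic model bias largely cancels in the ratio, which is why the relative change marginalizes systematic variance for mean flows.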

  15. Influence of Sub-grid-Scale Isentropic Transports on McRAS Evaluations using ARM-CART SCM Datasets

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Walker, G. K.; Tao, W. K.

    2004-01-01

    In GCM-physics evaluations with the currently available ARM-CART SCM datasets, McRAS produced a very similar character of near-surface errors of simulated temperature and humidity, containing typically warm and moist biases near the surface and cold and dry biases aloft. We argued it must have a common cause, presumably rooted in the model physics. Lack of vertical adjustment of horizontal transport was thought to be a plausible source. Clearly, debarring such freedom would force the incoming air to diffuse into the grid cell, which would naturally bias the surface air to become warm and moist while the upper air becomes cold and dry, a characteristic feature of McRAS biases. Since the errors were significantly larger in the two winter cases that contain potentially more intense episodes of cold and warm advective transports, this further reaffirmed our argument and provided additional motivation to introduce the corrections. When the horizontal advective transports were suitably modified to allow rising and/or sinking following isentropic pathways of subgrid-scale motions, the outcome was to cool and dry (or warm and moisten) the lower (or upper) levels. Even crude approximations invoking such a correction reduced the temperature and humidity biases considerably. The tests were performed on all the available ARM-CART SCM cases with consistent outcome. With the isentropic corrections implemented through two different numerical approximations, virtually similar benefits were derived, further confirming the robustness of our inferences. These results suggest the need for isentropic advective transport adjustment in a GCM due to subgrid-scale motions.

  16. Overlooked possibility of a collapsed Atlantic Meridional Overturning Circulation in warming climate.

    PubMed

    Liu, Wei; Xie, Shang-Ping; Liu, Zhengyu; Zhu, Jiang

    2017-01-01

    Changes in the Atlantic Meridional Overturning Circulation (AMOC) are moderate in most climate model projections under increasing greenhouse gas forcing. This intermodel consensus may be an artifact of common model biases that favor a stable AMOC. Observationally based freshwater budget analyses suggest that the AMOC is in an unstable regime susceptible for large changes in response to perturbations. By correcting the model biases, we show that the AMOC collapses 300 years after the atmospheric CO2 concentration is abruptly doubled from the 1990 level. Compared to an uncorrected model, the AMOC collapse brings about large, markedly different climate responses: a prominent cooling over the northern North Atlantic and neighboring areas, sea ice increases over the Greenland-Iceland-Norwegian seas and to the south of Greenland, and a significant southward rain-belt migration over the tropical Atlantic. Our results highlight the need to develop dynamical metrics to constrain models and the importance of reducing model biases in long-term climate projection.

  18. Correcting Measurement Error in Latent Regression Covariates via the MC-SIMEX Method

    ERIC Educational Resources Information Center

    Rutkowski, Leslie; Zhou, Yan

    2015-01-01

    Given the importance of large-scale assessments to educational policy conversations, it is critical that subpopulation achievement is estimated reliably and with sufficient precision. Despite this importance, biased subpopulation estimates have been found to occur when variables in the conditioning model side of a latent regression model contain…
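
    The simulate-and-extrapolate idea behind MC-SIMEX can be illustrated with the simpler continuous-covariate case (classical SIMEX): extra measurement error is added at increasing multiples λ of the known error variance, the naive estimate is recomputed at each level, and a quadratic in λ is extrapolated back to λ = -1 (no error). The sketch below targets a simple regression slope with a known error standard deviation `sigma_u`; MC-SIMEX itself handles misclassified categorical covariates, so this is only the underlying idea, not the article's estimator.

```python
import numpy as np

def simex_slope(y, w, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200, seed=0):
    """SIMEX for a simple linear-regression slope when the covariate w is
    observed with known measurement-error variance sigma_u**2."""
    rng = np.random.default_rng(seed)

    def slope(x):
        return np.cov(x, y, bias=True)[0, 1] / np.var(x)

    lams = [0.0] + list(lambdas)
    means = []
    for lam in lams:
        if lam == 0.0:
            means.append(slope(w))  # the naive estimate
        else:
            # add extra noise with variance lam * sigma_u**2, re-estimate
            sims = [slope(w + rng.normal(0, sigma_u * np.sqrt(lam), len(w)))
                    for _ in range(n_sim)]
            means.append(np.mean(sims))
    # quadratic extrapolation of the bias trend back to lambda = -1
    a, b, c = np.polyfit(lams, means, 2)
    return a - b + c  # value of the fitted quadratic at lambda = -1
```

The extrapolated estimate recovers most of the attenuation that measurement error induces in the naive slope.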

  19. Predictive Modeling of Marine Mammal Density from Existing Survey Data and Model Validation Using Upcoming Surveys

    DTIC Science & Technology

    2009-05-01

    estimate to a geometric mean in the process (Finney 1941, Smith 1993). The ratio estimator was used to correct for this back-transformation bias...2007) Killer whales preying on a blue whale calf on the Costa Rica Dome: genetics, morphometrics , vocalizations and composition of the group. Journal

  20. Cosmic shear bias and calibration in dark energy studies

    NASA Astrophysics Data System (ADS)

    Taylor, A. N.; Kitching, T. D.

    2018-07-01

    With the advent of large-scale weak lensing surveys there is a need to understand how realistic, scale-dependent systematics bias cosmic shear and dark energy measurements, and how they can be removed. Here, we show how spatially varying image distortions are convolved with the shear field, mixing convergence E and B modes and biasing the observed shear power spectrum. In practice, many of these biases can be removed by calibration to data or simulations. The uncertainty in this calibration is marginalized over, and we calculate how it propagates into parameter estimation and degrades the dark energy Figure-of-Merit. We find that noise-like biases affect dark energy measurements the most, while spikes in the bias power have the least impact. We argue that, in order to remove systematic biases in cosmic shear surveys and maintain statistical power, effort should be put into improving the accuracy of the bias calibration rather than minimizing the size of the bias; in general, this appears to be a weaker condition for bias removal. We also investigate how to minimize the size of the calibration set for a fixed reduction in the Figure-of-Merit. Our results can be used to correctly model the effect of biases and calibration on a cosmic shear survey, to assess their impact on the measurement of modified gravity and dark energy models, and to optimize survey and calibration requirements.

  1. Accounting protesting and warm glow bidding in Contingent Valuation surveys considering the management of environmental goods--an empirical case study assessing the value of protecting a Natura 2000 wetland area in Greece.

    PubMed

    Grammatikopoulou, Ioanna; Olsen, Søren Bøye

    2013-11-30

    Based on a Contingent Valuation survey aiming to reveal the willingness to pay (WTP) for conservation of a wetland area in Greece, we show how protest and warm glow motives can be taken into account when modeling WTP. In a sample of more than 300 respondents, we find that 54% of the positive bids are rooted to some extent in warm glow reasoning while 29% of the zero bids can be classified as expressions of protest rather than preferences. In previous studies, warm glow bidders are only rarely identified while protesters are typically identified and excluded from further analysis. We test for selection bias associated with simple removal of both protesters and warm glow bidders in our data. Our findings show that removal of warm glow bidders does not significantly distort WTP whereas we find strong evidence of selection bias associated with removal of protesters. We show how to correct for such selection bias by using a sample selection model. In our empirical sample, using the typical approach of removing protesters from the analysis, the value of protecting the wetland is significantly underestimated by as much as 46% unless correcting for selection bias.
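
    The sample selection correction described above is, in spirit, a Heckman-type two-step estimator. Below is a minimal sketch of the second step only, assuming the first-stage probit index for the keep-vs-protest decision has already been estimated elsewhere; the variable names and the simple OLS setup are illustrative, not the authors' exact specification.

```python
import numpy as np
from scipy.stats import norm

def selection_corrected_ols(y, X, probit_index, selected):
    """Step 2 of a Heckman-style selection correction: regress the WTP bids
    of the retained respondents on X plus the inverse Mills ratio computed
    from the first-stage probit index g'Z (assumed estimated elsewhere)."""
    zb = probit_index[selected]
    mills = norm.pdf(zb) / norm.cdf(zb)  # inverse Mills ratio
    Xc = np.column_stack([np.ones(selected.sum()), X[selected], mills])
    beta, *_ = np.linalg.lstsq(Xc, y[selected], rcond=None)
    return beta  # [intercept, slopes..., coefficient on the Mills ratio]
```

A nonzero coefficient on the Mills ratio is the signature of the selection bias that simple removal of protesters would leave uncorrected.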

  2. Gamma-ray Burst Prompt Correlations: Selection and Instrumental Effects

    NASA Astrophysics Data System (ADS)

    Dainotti, M. G.; Amati, L.

    2018-05-01

    The prompt emission mechanism of gamma-ray bursts (GRBs), even after several decades, remains a mystery. However, it is believed that correlations between observable GRB properties, given their huge luminosity/radiated energy and a redshift distribution extending up to at least z ≈ 9, are promising cosmological tools. They may also help to discriminate among the most plausible theoretical models. Nowadays, the objective is to make GRBs standard candles, similar to supernovae (SNe) Ia, through well-established and robust correlations. However, differently from SNe Ia, GRBs span several orders of magnitude in their energetics, hence they cannot yet be considered standard candles. Additionally, being observed at very large distances, their physical properties are affected by selection biases, the so-called Malmquist bias or Eddington effect. We describe the state of the art on how GRB prompt correlations are corrected for these selection biases so that they can be employed as redshift estimators and cosmological tools. We stress that only after an appropriate evaluation of and correction for these effects can GRB correlations be used to discriminate among the theoretical models of prompt emission, to estimate cosmological parameters, and to serve as distance indicators via redshift estimation.

  3. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.

  4. An Innovative Strategy for Accurate Thermal Compensation of Gyro Bias in Inertial Units by Exploiting a Novel Augmented Kalman Filter

    PubMed Central

    Fontanella, Rita; Accardo, Domenico; Moriello, Rosario Schiano Lo; Angrisani, Leopoldo; Simone, Domenico De

    2018-01-01

    This paper presents an innovative model for integrating thermal compensation of the gyro bias error into an augmented-state Kalman filter. The developed model is applied in the Zero Velocity Update filter for inertial units manufactured using Micro Electro-Mechanical System (MEMS) gyros, where it is used to remove residual bias at startup. It is a more effective alternative to the traditional approach, which cascades thermal bias correction by calibration with traditional Kalman filtering for bias tracking. This function is very useful when the adopted gyros are manufactured using MEMS technology: these systems have significant limitations in terms of sensitivity to environmental conditions and are characterized by a strong correlation of the systematic error with temperature variations. The traditional process is divided into two separate algorithms, i.e., calibration and filtering, and this aspect reduces system accuracy, reliability, and maintainability. This paper proposes an innovative Zero Velocity Update filter that requires only raw, uncalibrated gyro data as input. It unifies the two steps of the traditional approach in a single algorithm, saving time and economic resources and simplifying the management of the thermal correction process. In the paper, the traditional and innovative Zero Velocity Update filters are described in detail, as well as the experimental data set used to test both methods. The performance of the two filters is compared both in nominal conditions and in the typical case of a residual initial alignment bias. In this last condition, the innovative solution shows significant improvements with respect to the traditional approach. This is typically the case for an aircraft or a car parked under solar input.
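
    The augmented-state idea can be illustrated with a toy filter in which the gyro bias is modelled as b = b0 + k·T and the pair (b0, k) is appended to the state, so calibration and bias tracking happen in one recursion. This is only a sketch under a zero-velocity assumption (true rate = 0), with an assumed linear bias-temperature model and illustrative noise levels, not the paper's filter design.

```python
import numpy as np

def track_thermal_bias(omega_meas, temp, r=0.01, q=1e-8):
    """Kalman filter over the augmented state [b0, k], where the gyro bias
    is b = b0 + k * T. During zero-velocity intervals the measured rate is
    pure bias plus noise: z = b0 + k*T + v, with Var(v) = r."""
    x = np.zeros(2)        # [constant bias, thermal coefficient]
    P = np.eye(2)          # loose prior on both states
    for z, T in zip(omega_meas, temp):
        P += q * np.eye(2)          # random-walk process noise
        H = np.array([1.0, T])      # measurement model z = H @ x + v
        S = H @ P @ H + r           # innovation variance
        K = P @ H / S               # Kalman gain
        x += K * (z - H @ x)
        P -= np.outer(K, H @ P)
    return x
```

Feeding the filter raw stationary gyro readings across a temperature ramp recovers both the constant and the thermal component of the bias in a single pass.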

  6. Improvement of forecast skill for severe weather by merging radar-based extrapolation and storm-scale NWP corrected forecast

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming

    2015-03-01

    The primary objective of this study is to improve deterministic high-resolution forecasts of rainfall caused by severe storms by merging an extrapolation-based radar scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model, the Advanced Regional Prediction System (ARPS), for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times of less than 20 min and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for lead times beyond 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected ARPS forecasts was similar to that of the uncorrected ones. Moreover, optimally merging the two forecasts with the hyperbolic tangent weight scheme further improved forecast accuracy and stability.
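
    The hyperbolic tangent weighting used for the merging step can be sketched as a smooth blend that favors the radar extrapolation at short lead times and the corrected NWP forecast at long lead times. The crossover time and transition width below are illustrative placeholders, not the values calibrated in the study.

```python
import numpy as np

def tanh_weight(lead_min, t0=30.0, scale=10.0):
    """Weight given to the radar extrapolation as a function of lead time
    (minutes). t0 is the crossover lead time and scale the transition
    width; both are assumed values for illustration."""
    return 0.5 * (1.0 - np.tanh((lead_min - t0) / scale))

def merge(extrap, nwp, lead_min):
    """Blend the two forecasts: near 1 weight on extrapolation at short
    lead times, smoothly handing over to the NWP forecast."""
    w = tanh_weight(lead_min)
    return w * extrap + (1.0 - w) * nwp
```

At the crossover lead time the two forecasts contribute equally; well before it the extrapolation dominates, well after it the NWP forecast does.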

  7. Impact of Gulf Stream SST biases on the global atmospheric circulation

    NASA Astrophysics Data System (ADS)

    Lee, Robert W.; Woollings, Tim J.; Hoskins, Brian J.; Williams, Keith D.; O'Reilly, Christopher H.; Masato, Giacomo

    2018-02-01

    The UK Met Office Unified Model in the Global Coupled 2 (GC2) configuration has a warm bias of up to almost 7 K in the Gulf Stream SSTs in the winter season, which is associated with surface heat flux biases and potentially related to biases in the atmospheric circulation. The role of this SST bias is examined with a focus on the tropospheric response by performing three sensitivity experiments. The SST biases are imposed on the atmosphere-only configuration of the model over a small and medium section of the Gulf Stream, and also the wider North Atlantic. Here we show that the dynamical response to this anomalous Gulf Stream heating (and associated shifting and changing SST gradients) is to enhance vertical motion in the transient eddies over the Gulf Stream, rather than balance the heating with a linear dynamical meridional wind or meridional eddy heat transport. Together with the imposed Gulf Stream heating bias, the response affects the troposphere not only locally but also in remote regions of the Northern Hemisphere via a planetary Rossby wave response. The sensitivity experiments partially reproduce some of the differences in the coupled configuration of the model relative to the atmosphere-only configuration and to the ERA-Interim reanalysis. These biases may have implications for the ability of the model to respond correctly to variability or changes in the Gulf Stream. Better global prediction therefore requires particular focus on reducing any large western boundary current SST biases in these regions of high ocean-atmosphere interaction.

  8. Detecting and removing multiplicative spatial bias in high-throughput screening technologies.

    PubMed

    Caraus, Iurie; Mazoure, Bogdan; Nadon, Robert; Makarenkov, Vladimir

    2017-10-15

    Considerable attention has been paid recently to improve data quality in high-throughput screening (HTS) and high-content screening (HCS) technologies widely used in drug development and chemical toxicity research. However, several environmentally- and procedurally-induced spatial biases in experimental HTS and HCS screens decrease measurement accuracy, leading to increased numbers of false positives and false negatives in hit selection. Although effective bias correction methods and software have been developed over the past decades, almost all of these tools have been designed to reduce the effect of additive bias only. Here, we address the case of multiplicative spatial bias. We introduce three new statistical methods meant to reduce multiplicative spatial bias in screening technologies. We assess the performance of the methods with synthetic and real data affected by multiplicative spatial bias, including comparisons with current bias correction methods. We also describe a wider data correction protocol that integrates methods for removing both assay and plate-specific spatial biases, which can be either additive or multiplicative. The methods for removing multiplicative spatial bias and the data correction protocol are effective in detecting and cleaning experimental data generated by screening technologies. As our protocol is of a general nature, it can be used by researchers analyzing current or next-generation high-throughput screens. The AssayCorrector program, implemented in R, is available on CRAN. makarenkov.vladimir@uqam.ca. Supplementary data are available at Bioinformatics online.
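
    A simple way to see the multiplicative case is to model each well as the true activity times separable row and column effects, and to divide those effects out, for example by alternating row/column median normalization. This is only a sketch of the general idea, not the specific estimators proposed in the paper.

```python
import numpy as np

def remove_multiplicative_bias(plate, n_iter=10):
    """Divide out row and column effects, assuming measurements factor as
    x[i, j] = true[i, j] * r[i] * c[j] (a simplified multiplicative model).
    Alternating median normalization converges for separable biases."""
    corrected = plate.astype(float).copy()
    for _ in range(n_iter):
        row_med = np.median(corrected, axis=1, keepdims=True)
        corrected /= row_med
        col_med = np.median(corrected, axis=0, keepdims=True)
        corrected /= col_med
    # restore the overall scale of the original plate
    return corrected * np.median(plate)
```

Contrast with the additive case, where the same scheme would subtract rather than divide by the row and column medians.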

  9. Estimation of the electromagnetic bias from retracked TOPEX data

    NASA Technical Reports Server (NTRS)

    Rodriguez, Ernesto; Martin, Jan M.

    1994-01-01

    We examine the electromagnetic (EM) bias by using retracked TOPEX altimeter data. In contrast to previous studies, we use a parameterization of the EM bias which does not make stringent assumptions about the form of the correction or its global behavior. We find that the most effective single-parameter correction uses the altimeter-estimated wind speed, but that other parameterizations, using a wave-age-related parameter or significant wave height, may also significantly reduce the repeat-pass variance. The different corrections are compared, and their improvement of the TOPEX height variance is quantified.

  10. Evaluations of high-resolution dynamically downscaled ensembles over the contiguous United States

    NASA Astrophysics Data System (ADS)

    Zobel, Zachary; Wang, Jiali; Wuebbles, Donald J.; Kotamarthi, V. Rao

    2018-02-01

    This study uses the Weather Research and Forecasting (WRF) model to evaluate the performance of six dynamically downscaled decadal historical simulations with 12-km resolution for a large domain (7200 × 6180 km) that covers most of North America. The initial and boundary conditions are from three global climate models (GCMs) and one reanalysis dataset. The GCMs employed in this study are the Geophysical Fluid Dynamics Laboratory Earth System Model with Generalized Ocean Layer Dynamics component; the Community Climate System Model, version 4; and the Hadley Centre Global Environment Model, version 2-Earth System. The reanalysis data are from the National Centers for Environmental Prediction-U.S. Department of Energy Reanalysis II. We analyze the effects of bias correcting the lateral boundary conditions and the effects of spectral nudging. We evaluate the model performance for seven surface variables and four upper-atmospheric variables based on their climatology and extremes for seven subregions across the United States. The results indicate that a simulation's performance depends on both the location and the features/variables being tested. We find that the use of bias correction and/or nudging is beneficial in many situations, but employing these when running the RCM is not always an improvement relative to the reference data. The use of an ensemble mean or median leads to better performance in measuring the climatology, while it is significantly biased for the extremes, showing much larger differences from the reference data than individual GCM-driven simulations. This study provides a comprehensive evaluation of these historical model runs in order to support informed decisions when making future projections.

  11. Assessment of clear sky radiative fluxes in CMIP5 climate models using surface observations from BSRN

    NASA Astrophysics Data System (ADS)

    Wild, M.; Hakuba, M. Z.; Folini, D.; Ott, P.; Long, C. N.

    2017-12-01

    Clear-sky fluxes in the latest generation of global climate models (GCMs) from CMIP5 still vary largely, particularly at the Earth's surface, covering in their global means ranges of 16 and 24 Wm-2 in the surface downward clear-sky shortwave (SW) and longwave radiation, respectively. We assess these fluxes with monthly clear-sky reference climatologies derived from more than 40 Baseline Surface Radiation Network (BSRN) sites based on Long and Ackerman (2000) and Hakuba et al. (2015). The comparison is complicated by the fact that the monthly SW clear-sky BSRN reference climatologies are inferred from measurements under true cloud-free conditions, whereas the GCM clear-sky fluxes are calculated continuously at every timestep solely by removing the clouds, yet otherwise keeping the atmospheric composition (e.g. water vapor, temperature, aerosols) prevailing during the cloudy conditions. This induces the risk of biases in the GCMs merely due to the sampling of clear-sky fluxes calculated under atmospheric conditions representative of cloudy situations. Thereby, a wet bias may be expected in the GCMs compared to the observational references, which may induce spurious low biases in the downward clear-sky SW fluxes. To estimate the magnitude of these spurious biases in the available monthly mean fields from 40 CMIP5 models, we used their respective multi-century control runs and searched therein, for each month and each BSRN station, the month with the lowest cloud cover. The deviations of the clear-sky fluxes in this month from their long-term means have then been used as indicators of the magnitude of the abovementioned sampling biases and as correction factors for an appropriate comparison with the BSRN climatologies, individually applied for each model and BSRN site. The overall correction is on the order of 2 Wm-2. This revises our best estimate for the global mean surface downward SW clear-sky radiation, previously at 249 Wm-2 as inferred from the GCM clear-sky flux fields and their biases compared to the BSRN climatologies, to 247 Wm-2 including this additional correction. 34 out of 40 CMIP5 GCMs exceed this reference value. With a global mean surface albedo of 13% and a net TOA SW clear-sky flux of 287 Wm-2 from CERES-EBAF, this results in a global mean clear-sky surface and atmospheric SW absorption of 214 and 73 Wm-2, respectively.
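
    The sampling-bias estimate described above can be sketched as follows: for each calendar month, pick the year with the lowest cloud cover and take the deviation of its clear-sky flux from the long-term monthly mean as the correction. The array shapes (years × 12 months) and variable names are assumptions for illustration.

```python
import numpy as np

def clearsky_sampling_correction(cloud_frac, clearsky_flux):
    """For each calendar month, find the year with the lowest cloud cover
    and compare its clear-sky flux with the long-term monthly mean.
    Both inputs are arrays of shape (n_years, 12)."""
    corrections = np.empty(12)
    for m in range(12):
        clearest_year = np.argmin(cloud_frac[:, m])
        corrections[m] = (clearsky_flux[clearest_year, m]
                          - clearsky_flux[:, m].mean())
    return corrections
```

The near-cloud-free month approximates the true clear-sky sampling of the observations, so its deviation from the all-condition clear-sky mean estimates the spurious wet-bias contribution.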

  12. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    PubMed Central

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is, however, unrealistic, since the atmospheric corrections estimated from the network data are random and, furthermore, the interpolated corrections diverge from the true corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and to analyzing their effects on PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one from the random errors of the estimated corrections at the reference stations, and the other from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of the interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of the interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station-distance network and 80.6% within 10 min for a long inter-station-distance network. PMID:26633400
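
    The two-component error budget can be sketched as a variance sum that downweights interpolated corrections for users far from the reference stations. The linear distance dependence and the numerical values below are illustrative assumptions, not the paper's calibrated stochastic model.

```python
import numpy as np

def pseudo_obs_weight(var_est, dist_km, var_per_km=1e-4):
    """Variance of an interpolated atmospheric correction modelled as the
    estimation variance at the reference stations plus a distance-dependent
    discrepancy term; returns the least-squares weight (inverse variance)
    of the resulting pseudo-observation."""
    var_total = var_est + var_per_km * dist_km
    return 1.0 / var_total
```

Treating the corrections as deterministic corresponds to an infinite weight; the stochastic model instead lets distant, noisier corrections contribute less to the ambiguity-resolution adjustment.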

  14. Hypothesis Testing Using Factor Score Regression: A Comparison of Four Methods

    ERIC Educational Resources Information Center

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2016-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and…

  15. Using GPS RO L1 data for calibration of the atmospheric path delay model for data reduction of the satellite altimetry observations.

    NASA Astrophysics Data System (ADS)

    Petrov, L.

    2017-12-01

    Processing satellite altimetry data requires the computation of the path delay in the neutral atmosphere that is used for correcting ranges. The path delay is computed using numerical weather models, and the accuracy of its computation depends on the accuracy of those models. The accuracy of numerical weather models over Antarctica and Greenland, where the network of ground stations is very sparse, is not well known. I used the dataset of GPS RO L1 data, computed the predicted path delay for RO observations using the numerical weather model GEOS-FPIT, formed the differences with the observed path delay, and used these differences to compute corrections to the a priori refractivity profile. These profiles were used for computing corrections to the a priori zenith path delay. The systematic pattern of these corrections is used for de-biasing the satellite altimetry results and for characterizing the systematic errors caused by mismodeling the atmosphere.

  16. Rules and mechanisms for efficient two-stage learning in neural circuits.

    PubMed

    Teşileanu, Tiberiu; Ölveczky, Bence; Balasubramanian, Vijay

    2017-04-04

    Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in 'tutor' circuits (e.g., LMAN) should match plasticity mechanisms in 'student' circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.

  17. Bias corrections of GOSAT SWIR XCO2 and XCH4 with TCCON data and their evaluation using aircraft measurement data

    DOE PAGES

    Inoue, Makoto; Morino, Isamu; Uchino, Osamu; ...

    2016-08-01

    We describe a method for removing systematic biases of column-averaged dry-air mole fractions of CO2 (XCO2) and CH4 (XCH4) derived from short-wavelength infrared (SWIR) spectra of the Greenhouse gases Observing SATellite (GOSAT). We conduct correlation analyses between the GOSAT biases and simultaneously retrieved auxiliary parameters, and we use these correlations to bias correct the GOSAT data, removing these spurious correlations. Data from the Total Carbon Column Observing Network (TCCON) were used as reference values for this regression analysis. To evaluate the effectiveness of this correction method, the uncorrected/corrected GOSAT data were compared to independent XCO2 and XCH4 data derived from aircraft measurements taken for the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project, the National Oceanic and Atmospheric Administration (NOAA), the US Department of Energy (DOE), the National Institute for Environmental Studies (NIES), the Japan Meteorological Agency (JMA), the HIAPER Pole-to-Pole Observations (HIPPO) program, and the GOSAT validation aircraft observation campaign over Japan. These comparisons demonstrate that the empirically derived bias correction improves the agreement between GOSAT XCO2/XCH4 and the aircraft data. Finally, we present spatial distributions and temporal variations of the derived GOSAT biases.
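
    The empirical correction can be sketched as a regression of retrieval-minus-reference differences on the auxiliary retrieved parameters, whose fitted component is then subtracted from the retrievals. A linear model with an intercept is assumed here purely for illustration.

```python
import numpy as np

def fit_bias_model(retrieved, reference, aux):
    """Least-squares fit of the retrieval bias (retrieved - reference)
    against auxiliary retrieval parameters. aux has shape (n, p)."""
    X = np.column_stack([np.ones(len(aux)), aux])
    coef, *_ = np.linalg.lstsq(X, retrieved - reference, rcond=None)
    return coef

def apply_bias_correction(retrieved, aux, coef):
    """Subtract the fitted, auxiliary-parameter-dependent bias."""
    X = np.column_stack([np.ones(len(aux)), aux])
    return retrieved - X @ coef
```

In practice the reference values come from collocated TCCON soundings; here any co-located truth standard would play the same role.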

  19. Correction of phase velocity bias caused by strong directional noise sources in high-frequency ambient noise tomography: a case study in Karamay, China

    NASA Astrophysics Data System (ADS)

    Wang, K.; Luo, Y.; Yang, Y.

    2016-12-01

    We collect two months of ambient noise data recorded by 35 broadband seismic stations in a 9×11 km area near Karamay, China, and cross-correlate the noise data between all station pairs. Array beamforming analysis of the ambient noise data shows that ambient noise sources are unevenly distributed and the most energetic ambient noise mainly comes from azimuths of 40°-70°. As a consequence of the strong directional noise sources, surface wave waveforms of the cross-correlations at 1-5 Hz show clear azimuthal dependence, and direct dispersion measurements from cross-correlations are strongly biased by the dominant noise energy. Because of this bias, the dispersion measurements from cross-correlations do not accurately reflect the interstation velocities of surface waves propagating directly from one station to the other; that is, the cross-correlation functions do not retrieve Empirical Green's Functions accurately. To correct the bias caused by unevenly distributed noise sources, we adopt an iterative inversion procedure. The iterative inversion procedure, based on plane-wave modeling, includes three steps: (1) surface wave tomography, (2) estimation of ambient noise energy, and (3) phase velocity correction. First, we use synthesized data to test the efficiency and stability of the iterative procedure for both homogeneous and heterogeneous media. The testing results show that: (1) the amplitudes of phase velocity bias caused by directional noise sources are significant, reaching 2% and 10% for homogeneous and heterogeneous media, respectively; (2) phase velocity bias can be corrected by the iterative inversion procedure, and the convergence of the inversion depends on the starting phase velocity map and the complexity of the media. 
By applying the iterative approach to the real data in Karamay, we further show that phase velocity maps converge after ten iterations and that the phase velocity map based on corrected interstation dispersion measurements is more consistent with results from geological surveys than the one based on uncorrected measurements. As ambient noise in the high-frequency band (>1 Hz) is mostly related to human activities or climate events, both of which have strong directivity, the iterative approach demonstrated here helps improve the accuracy and resolution of ANT in imaging shallow earth structures.

  20. An improved simulation of the 2015 El Niño event by optimally correcting the initial conditions and model parameters in an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan

    2017-09-01

    Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize those model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error corrections, one with a standard simulation and another with an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error corrections. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated in Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.

  1. Assessing the Assessment Methods: Climate Change and Hydrologic Impacts

    NASA Astrophysics Data System (ADS)

    Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.

    2014-12-01

    The Bureau of Reclamation, the U.S. Army Corps of Engineers, and other water management agencies have an interest in developing reliable, science-based methods for incorporating climate change information into longer-term water resources planning. Such assessments must quantify projections of future climate and hydrology, typically relying on some form of spatial downscaling and bias correction to produce watershed-scale weather information that subsequently drives hydrology and other water resource management analyses (e.g., water demands, water quality, and environmental habitat). Water agencies continue to face challenging method decisions in these endeavors: (1) which downscaling method should be applied and at what resolution; (2) what observational dataset should be used to drive downscaling and hydrologic analysis; (3) what hydrologic model(s) should be used and how should these models be configured and calibrated? There is a critical need to understand the ramifications of these method decisions, as they affect the signal and uncertainties produced by climate change assessments and, thus, adaptation planning. This presentation summarizes results from a three-year effort to identify strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic conditions. Methods were evaluated from two perspectives: historical fidelity, and tendency to modulate a global climate model's climate change signal. On downscaling, four methods were applied at multiple resolutions: statistically, using Bias Correction Spatial Disaggregation, Bias Correction Constructed Analogs, and Asynchronous Regression; dynamically, using the Weather Research and Forecasting model. Downscaling results were then used to drive hydrologic analyses over the contiguous U.S. using multiple models (VIC, CLM, PRMS), with added focus placed on case study basins within the Colorado Headwaters. 
The presentation will identify which types of climate changes are expressed robustly across methods versus those that are sensitive to method choice; which method choices seem relatively more important; and where strategic investments in research and development can substantially improve guidance on climate change provided to water managers.

  2. A finite parallel zone model to interpret and extend Giddings' coupling theory for the eddy-dispersion in porous chromatographic media.

    PubMed

    Desmet, Gert

    2013-11-01

    The finite length parallel zone (FPZ)-model is proposed as an alternative model for the axial- or eddy-dispersion caused by the occurrence of local velocity biases or flow heterogeneities in porous media such as those used in liquid chromatography columns. The mathematical plate height expression evolving from the model shows that the A- and C-term band broadening effects that can originate from a given velocity bias should be coupled in an exponentially decaying way instead of harmonically as proposed in Giddings' coupling theory. In the low and high velocity limits both models converge, while a 12% difference can be observed in the (practically most relevant) intermediate range of reduced velocities. Explicit expressions for the A- and C-constants appearing in the exponential decay-based plate height expression have been derived for each of the different possible velocity bias levels (single through-pore and particle level, multi-particle level, and trans-column level). These expressions allow one to relate the band broadening originating from these different levels directly to the local fundamental transport parameters, hence offering the possibility to include a velocity-dependent and, if needed, a retention-factor-dependent transversal dispersion coefficient. Having developed the mathematics for the general case wherein a difference in retention equilibrium establishes between the two parallel zones, the effect of any possible local variations in packing density and/or retention capacity on the eddy-dispersion can be explicitly accounted for as well. It is furthermore also shown that, whereas the lumped transport parameter model used in the basic variant of the FPZ-model only provides a first approximation of the true decay constant, the model can be extended by introducing a constant correction factor to correctly account for the continuous transversal dispersion transport in the velocity bias zones. Copyright © 2013 Elsevier B.V. All rights reserved.
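    The two coupling forms can be compared numerically. A minimal sketch, with illustrative constants A and C (reduced, dimensionless quantities chosen for the example, not values from the paper): both expressions reduce to C·v at low velocity and plateau at A at high velocity, and they differ most in the intermediate range.

```python
import math

def h_giddings(v, A, C):
    """Giddings' harmonic coupling of the eddy terms: 1/h = 1/A + 1/(C*v)."""
    return 1.0 / (1.0 / A + 1.0 / (C * v))

def h_fpz(v, A, C):
    """Exponential-decay coupling from the FPZ model: h = A*(1 - exp(-C*v/A))."""
    return A * (1.0 - math.exp(-C * v / A))
```

At reduced velocity v = A/C (the crossover point), the harmonic form gives A/2 while the exponential form gives A·(1 − e⁻¹) ≈ 0.63·A, which is the intermediate-range discrepancy the abstract refers to.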

  3. Influence of the partial volume correction method on 18F-fluorodeoxyglucose brain kinetic modelling from dynamic PET images reconstructed with resolution model based OSEM

    PubMed Central

    Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian

    2014-01-01

    Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For OSEM, image resolution convergence is local and influenced significantly by the number of iterations, the count density, and the background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also vary concurrently. When PVC is applied post-reconstruction, the kinetic parameter estimates may be biased if the frame-dependent resolution is neglected. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-fluorodeoxyglucose dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using only the last-frame reconstructed image for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation GTM PVC with PSF-based OSEM produced kinetic parameter estimates with the lowest-magnitude bias in most instances, although at the cost of increased noise compared with the PVC methods utilizing conventional OSEM. Use of the last-frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in CMRGlc estimates, although by less than 5% in most cases compared with the other PVC methods. 
The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters. PMID:24052021

  4. A comparison of methods to estimate future sub-daily design rainfall

    NASA Astrophysics Data System (ADS)

    Li, J.; Johnson, F.; Evans, J.; Sharma, A.

    2017-12-01

    Warmer temperatures are expected to increase extreme short-duration rainfall due to the increased moisture-holding capacity of the atmosphere. While attention has been paid to the impacts of climate change on future design rainfalls at daily or longer time scales, the potential changes in short-duration design rainfalls have often been overlooked due to the limited availability of sub-daily projections and observations. This study uses a high-resolution regional climate model (RCM) to predict the changes in sub-daily design rainfalls for the Greater Sydney region in Australia. Sixteen methods for predicting changes to sub-daily future extremes are assessed, based on different options for bias correction, disaggregation, and frequency analysis. A Monte Carlo cross-validation procedure is employed to evaluate the skill of each method in estimating the design rainfall for the current climate. It is found that bias correction significantly improves the accuracy of the design rainfall estimated for the current climate. For 1 h events, bias correcting the hourly annual maximum rainfall simulated by the RCM produces design rainfall closest to observations, whereas for multi-hour events, disaggregating the daily rainfall total is recommended. This suggests that the RCM fails to simulate the observed multi-duration rainfall persistence, which is a common issue for most climate models. Despite the significant differences in the estimated design rainfalls between different methods, all methods lead to an increase in design rainfalls across the majority of the study region.
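    The abstract does not name the individual bias-correction options, but a common one in this literature (and in the records above) is empirical quantile mapping, in which each model value is replaced by the observed value at the same empirical quantile of a calibration period. A minimal rank-based sketch with made-up values:

```python
import bisect

def quantile_map(model_cal, obs_cal, value):
    """Map one model value onto the observed distribution via empirical ranks.

    model_cal, obs_cal: calibration-period samples of equal length.
    A production implementation would interpolate between quantiles and
    handle values outside the calibration range more carefully."""
    sm = sorted(model_cal)
    so = sorted(obs_cal)
    rank = bisect.bisect_left(sm, value)        # empirical quantile of 'value'
    return so[min(rank, len(so) - 1)]           # clamp at the largest quantile
```

Applied to annual maxima, this forces the corrected model quantiles to match the observed ones over the calibration period; its limitation, as the abstract notes for multi-hour events, is that it cannot repair a model's misrepresented rainfall sequencing.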

  5. Correction of Selection Bias in Survey Data: Is the Statistical Cure Worse Than the Bias?

    PubMed

    Hanley, James A

    2017-03-15

    In previous articles in the American Journal of Epidemiology (Am J Epidemiol. 2013;177(5):431-442) and American Journal of Public Health (Am J Public Health. 2013;103(10):1895-1901), Masters et al. reported age-specific hazard ratios for the contrasts in mortality rates between obesity categories. They corrected the observed hazard ratios for selection bias caused by what they postulated was the nonrepresentativeness of the participants in the National Health Interview Survey that increased with age, obesity, and ill health. However, it is possible that their regression approach to remove the alleged bias has not produced, and in general cannot produce, sensible hazard ratio estimates. First, one must consider how many nonparticipants there might have been in each category of obesity and of age at entry, and how much higher the mortality rates would have to be in nonparticipants than in participants in these same categories. What plausible set of numerical values would convert the ("biased") decreasing-with-age hazard ratios seen in the data into the ("unbiased") increasing-with-age ratios that they computed? Can these values be encapsulated in (and can sensible values be recovered from) 1 additional internal variable in a regression model? Second, one must examine the age pattern of the hazard ratios that have been adjusted for selection. Without the correction, the hazard ratios are attenuated with increasing age. With it, the hazard ratios at older ages are considerably higher, but those at younger ages are well below 1. Third, one must test whether the regression approach suggested by Masters et al. would correct the nonrepresentativeness that increased with age and ill health that I introduced into real and hypothetical data sets. I found that the approach did not recover the hazard ratio patterns present in the unselected data sets: The corrections overshot the target at older ages and undershot it at lower ages. © The Author 2017. 
Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.

  6. Nonlinear bias analysis and correction of microwave temperature sounder observations for FY-3C meteorological satellite

    NASA Astrophysics Data System (ADS)

    Hu, Taiyang; Lv, Rongchuan; Jin, Xu; Li, Hao; Chen, Wenxin

    2018-01-01

    The nonlinear bias analysis and correction of receiving channels in the Chinese FY-3C meteorological satellite Microwave Temperature Sounder (MWTS) is a key technology of data assimilation for satellite radiance data. The thermal-vacuum chamber calibration data acquired from the MWTS can be analyzed to evaluate the instrument performance, including radiometric temperature sensitivity, channel nonlinearity, and calibration accuracy. In particular, the nonlinearity parameters due to imperfect square-law detectors are calculated from calibration data and further used to correct the nonlinear bias contributions of the microwave receiving channels. Based upon the operational principles and thermal-vacuum chamber calibration procedures of the MWTS, this paper mainly focuses on the nonlinear bias analysis and correction methods for improving the calibration accuracy of this important instrument onboard the FY-3C meteorological satellite, from the perspective of theoretical and experimental studies. Furthermore, a series of original results are presented to demonstrate the feasibility and significance of the methods.
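    A generic sketch of the underlying idea, assuming a standard two-point (hot/cold) calibration with a quadratic nonlinearity term that vanishes at both calibration points. The functional form, the parameter name `u`, and all numbers are illustrative assumptions, not the MWTS's actual calibration equation:

```python
def calibrate(counts, c_cold, c_hot, r_cold, r_hot, u=0.0):
    """Two-point radiometric calibration with a quadratic nonlinearity term.

    counts: raw detector counts; c_cold/c_hot: counts at the cold/hot targets;
    r_cold/r_hot: known radiances (or brightness temperatures) of the targets;
    u: nonlinearity parameter for an imperfect square-law detector."""
    x = (counts - c_cold) / (c_hot - c_cold)
    r_lin = r_cold + (r_hot - r_cold) * x
    # The quadratic correction is zero at the two calibration points by
    # construction, so it only bends the response between them.
    return r_lin + u * (r_lin - r_cold) * (r_lin - r_hot)
```

Thermal-vacuum calibration against targets of known temperature is what allows `u` to be estimated per channel; setting `u` to zero recovers the purely linear calibration.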

  7. Correcting power and p-value calculations for bias in diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Landman, Bennett A

    2013-07-01

    Diffusion tensor imaging (DTI) provides quantitative parametric maps sensitive to tissue microarchitecture (e.g., fractional anisotropy, FA). These maps are estimated through computational processes and subject to random distortions including variance and bias. Traditional statistical procedures commonly used for study planning (including power analyses and p-value/alpha-rate thresholds) specifically model variability but neglect potential impacts of bias. Herein, we quantitatively investigate the impacts of bias in DTI on hypothesis test properties (power and alpha rate) using a two-sided hypothesis testing framework. We present a theoretical evaluation of bias on hypothesis test properties, evaluate the bias estimation technique SIMEX for DTI hypothesis testing using simulated data, and evaluate the impacts of bias on spatially varying power and alpha rates in an empirical study of 21 subjects. Bias is shown to inflate alpha rates, distort the power curve, and cause significant power loss even in empirical settings where the expected difference in bias between groups is zero. These adverse effects can be attenuated by properly accounting for bias in the calculation of power and p-values. Copyright © 2013 Elsevier Inc. All rights reserved.
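    The alpha-rate inflation can be illustrated with a back-of-envelope normal approximation (known variance, two-sided z-test). This is a generic sketch of the statistical mechanism the abstract describes, not the paper's DTI-specific derivation, and the numbers below are arbitrary:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def alpha_rate(bias_diff, sigma, n, z_crit=1.96):
    """Rejection probability of a two-sided two-sample z-test when the true
    group difference is zero but a systematic bias difference 'bias_diff'
    contaminates the group-mean estimates (n subjects per group)."""
    se = sigma * math.sqrt(2.0 / n)        # std. error of the mean difference
    shift = bias_diff / se                 # test statistic is N(shift, 1)
    return (1.0 - normal_cdf(z_crit - shift)) + normal_cdf(-z_crit - shift)
```

With no bias difference the nominal 5% level is recovered; any nonzero bias difference shifts the null distribution of the test statistic and inflates the false-positive rate, which is the effect SIMEX-style bias estimation is meant to attenuate.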

  8. Decadal evaluation of regional climate, air quality, and their interactions using WRF/Chem Version 3.6.1

    NASA Astrophysics Data System (ADS)

    Yahya, K.; Wang, K.; Campbell, P.; Glotfelty, T.; He, J.; Zhang, Y.

    2015-08-01

    The Weather Research and Forecasting model with Chemistry (WRF/Chem) v3.6.1 with the Carbon Bond 2005 (CB05) gas-phase mechanism is evaluated for its first decadal application during 2001-2010 using the Representative Concentration Pathway (RCP 8.5) emissions to assess its capability and appropriateness for long-term climatological simulations. The initial and boundary conditions are downscaled from the modified Community Earth System Model/Community Atmosphere Model (CESM/CAM5) v1.2.2. The meteorological initial and boundary conditions are bias-corrected using the National Centers for Environmental Prediction's Final (FNL) Operational Global Analysis data. Climatological evaluations are carried out for meteorological, chemical, and aerosol-cloud-radiation variables against data from surface networks and satellite retrievals. The model performs very well for the 2 m temperature (T2) for the 10 year period, with only a small cold bias of -0.3 °C. Biases in other meteorological variables, including relative humidity at 2 m, wind speed at 10 m, and precipitation, tend to be site- and season-specific; however, with the exception of T2, consistent annual biases exist for most of the years from 2001 to 2010. Ozone mixing ratios are slightly overpredicted at urban and suburban locations but underpredicted at rural locations. PM2.5 concentrations are slightly overpredicted at rural sites, but slightly underpredicted at urban/suburban sites. In general, the model performs relatively well for chemical and meteorological variables, and not as well for aerosol-cloud-radiation variables. Cloud-aerosol variables, including aerosol optical depth, cloud water path, cloud optical thickness, and cloud droplet number concentration, are generally underpredicted on average across the continental US. 
Overpredictions of several cloud variables over the eastern US result in underpredictions of radiation variables and overpredictions of shortwave and longwave cloud forcing, which are important climate variables. While the current performance is deemed acceptable, improvements to the bias-correction method for CESM downscaling and to the model parameterizations of cloud dynamics and thermodynamics, as well as aerosol-cloud interactions, can potentially improve model performance for long-term climate simulations.

  9. Inter-model Diversity of ENSO simulation and its relation to basic states

    NASA Astrophysics Data System (ADS)

    Kug, J. S.; Ham, Y. G.

    2016-12-01

    In this study, a new methodology is developed to improve the climate simulation of state-of-the-art coupled global climate models (GCMs) by a postprocessing based on the intermodel diversity. Based on the close connection between the interannual variability and climatological states, a distinctive relation between the intermodel diversity of the interannual variability and that of the basic state is found. Based on this relation, the simulated interannual variabilities can be improved by correcting their climatological bias. To test this methodology, the dominant intermodel difference in precipitation responses during El Niño-Southern Oscillation (ENSO) is investigated, along with its relationship with the climatological state. It is found that the dominant intermodel diversity of the ENSO precipitation in phase 5 of the Coupled Model Intercomparison Project (CMIP5) is associated with the zonal shift of the positive precipitation center during El Niño. This dominant intermodel difference is significantly correlated with the basic states. The models with wetter (dryer) climatology than the climatology of the multimodel ensemble (MME) over the central Pacific tend to shift positive ENSO precipitation anomalies to the east (west). Based on the models' systematic errors in atmospheric ENSO response and bias, the models with a better climatological state tend to simulate more realistic atmospheric ENSO responses. Therefore, the statistical method to correct the ENSO response mostly improves the ENSO response. After the statistical correction, the simulation quality of the MME ENSO precipitation is distinctly improved. These results suggest that the present methodology can also be applied to improving climate projection and seasonal climate prediction.

  10. Impact of archeomagnetic field model data on modern era geomagnetic forecasts

    NASA Astrophysics Data System (ADS)

    Tangborn, Andrew; Kuang, Weijia

    2018-03-01

    A series of geomagnetic data assimilation experiments have been carried out to demonstrate the impact of assimilating archeomagnetic data via the CALS3k.4 geomagnetic field model from the period between 10 and 1590 CE. The assimilation continues with the gufm1 model from 1590 to 1990 and CM4 model from 1990 to 2000 as observations, and comparisons between these models and the geomagnetic forecasts are used to determine an optimal maximum degree for the archeomagnetic observations, and to independently estimate errors for these observations. These are compared with an assimilation experiment that uses the uncertainties provided with CALS3k.4. Optimal 20 year forecasts in 1990 are found when the Gauss coefficients up to degree 3 are assimilated. In addition we demonstrate how a forecast and observation bias correction scheme could be used to reduce bias in modern era forecasts. Initial experiments show that this approach can reduce modern era forecast biases by as much as 50%.
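    The record does not specify the form of its bias-correction scheme, but the simplest version of the idea is to estimate the systematic part of past forecast-minus-observation differences over a training window and remove it from subsequent forecasts. A minimal sketch under that assumption (all values hypothetical):

```python
def estimate_bias(forecasts, observations):
    """Mean forecast-minus-observation difference over a training window."""
    diffs = [f - o for f, o in zip(forecasts, observations)]
    return sum(diffs) / len(diffs)

def debias(forecast, bias):
    """Remove the estimated systematic bias from a new forecast."""
    return forecast - bias
```

In an assimilation setting this estimate would typically be updated sequentially as new observations arrive, rather than computed once from a fixed window.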

  11. Impact of bias correction and downscaling through quantile mapping on simulated climate change signal: a case study over Central Italy

    NASA Astrophysics Data System (ADS)

    Sangelantoni, Lorenzo; Russo, Aniello; Gennaretti, Fabio

    2018-02-01

    Quantile mapping (QM) represents a common post-processing technique used to connect climate simulations to impact studies at different spatial scales. Depending on the simulation-observation spatial scale mismatch, QM can be used for two different applications. The first application uses only the bias correction component, establishing transfer functions between observations and simulations at similar spatial scales. The second application includes a statistical downscaling component when point-scale observations are considered. However, knowledge of alterations to the climate change signal (CCS) resulting from these two applications is limited. This study investigates QM impacts on the original temperature and precipitation CCSs when applied according to a bias correction only (BC-only) and a bias correction plus downscaling (BC + DS) application over reference stations in Central Italy. The BC-only application is used to adjust regional climate model (RCM) simulations having the same resolution as the observation grid. The QM BC + DS application adjusts the same simulations to point-wise observations. QM applications alter the CCS mainly for temperature. The BC-only application produces a CCS of the median about 1 °C lower than the original (about 4.5 °C). The BC + DS application produces a CCS closer to the original, except for the summer 95th percentile, where substantial amplification of the original CCS resulted. The impacts of the two applications are connected to the ratio between the observed and the simulated standard deviation (STD) of the calibration period. For precipitation, the original CCS is essentially preserved in both applications. However, the calibration-period STD ratio cannot predict the QM impact on the precipitation CCS when the simulated STD and mean are similarly misrepresented.
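    Why the calibration-period STD ratio matters can be seen with a deliberately simplified linear (mean-variance) flavour of QM, which is a stand-in for the empirical transfer functions the study actually uses. For this linear form the corrected change signal is exactly the raw change signal multiplied by sigma_obs/sigma_mod; all numbers below are illustrative:

```python
def linear_qm(x, mod_mean, mod_std, obs_mean, obs_std):
    """Linear (mean-variance) quantile mapping of a model value x."""
    return obs_mean + (x - mod_mean) * (obs_std / mod_std)

def corrected_change_signal(hist, fut, mod_mean, mod_std, obs_mean, obs_std):
    """Change signal after correction; algebraically equal to
    (fut - hist) * obs_std / mod_std for the linear mapping."""
    return (linear_qm(fut, mod_mean, mod_std, obs_mean, obs_std)
            - linear_qm(hist, mod_mean, mod_std, obs_mean, obs_std))
```

So when the model's calibration-period STD is too small relative to observations (ratio > 1), the correction amplifies the projected change, and vice versa; full empirical QM behaves similarly quantile by quantile.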

  12. Bias correction for selecting the minimal-error classifier from many machine learning models.

    PubMed

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction, and the Tibshirani-Tibshirani procedure. All methods were compared on simulated datasets, five moderate-size real datasets, and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction, with smaller variance, and it has the additional advantage of extrapolating error estimates to larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available. tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
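    The learning-curve idea behind IPL is to fit err(n) ≈ a·n^(−b) + c to cross-validation error rates observed at several training-set sizes and then extrapolate. The fitting procedure below (a grid search over the asymptote c combined with a log-log linear fit for a and b) is a simplification chosen for the sketch, not the paper's actual estimator:

```python
import math

def fit_ipl(sizes, errors):
    """Fit err(n) = a * n**(-b) + c by grid-searching c and log-log fitting."""
    best = None
    # Candidate asymptotes strictly below the smallest observed error.
    for c in [k / 1000.0 for k in range(0, int(min(errors) * 1000))]:
        xs = [math.log(n) for n in sizes]
        ys = [math.log(e - c) for e in errors]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        a, b = math.exp(my - slope * mx), -slope
        sse = sum((a * s ** (-b) + c - e) ** 2 for s, e in zip(sizes, errors))
        if best is None or sse < best[0]:
            best = (sse, a, b, c)
    return best[1], best[2], best[3]
```

Once fitted, evaluating a·n^(−b) + c at a larger n gives the extrapolated error estimate used to judge whether recruiting more samples is worthwhile.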

  13. Assessing the prediction accuracy of cure in the Cox proportional hazards cure model: an application to breast cancer data.

    PubMed

    Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma

    2014-01-01

    A cure rate model is a survival model incorporating the cure rate with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. It contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for the breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, although the ordinary AUC could not be estimated. Additionally, we introduced a bias-correction method for the imputation-based AUCs and found that the bias-corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using breast cancer data. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Observing Atmospheric Formaldehyde (HCHO) from Space: Validation and Intercomparison of Six Retrievals from Four Satellites (OMI, GOME2A, GOME2B, OMPS) with SEAC4RS Aircraft Observations over the Southeast US

    NASA Technical Reports Server (NTRS)

    Zhu, Lei; Jacob, Daniel J.; Kim, Patrick S.; Fisher, Jenny A.; Yu, Karen; Travis, Katherine R.; Mickley, Loretta J.; Yantosca, Robert M.; Sulprizio, Melissa P.; De Smedt, Isabelle; hide

    2016-01-01

    Formaldehyde (HCHO) column data from satellites are widely used as a proxy for emissions of volatile organic compounds (VOCs), but validation of the data has been extremely limited. Here we use highly accurate HCHO aircraft observations from the NASA SEAC4RS (Studies of Emissions, Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys) campaign over the southeast US in August-September 2013 to validate and intercompare six retrievals of HCHO columns from four different satellite instruments (OMI (Ozone Monitoring Instrument), GOME-2A and GOME-2B (Global Ozone Monitoring Experiment 2), and OMPS (Ozone Mapping and Profiler Suite)) produced by three different research groups. The GEOS (Goddard Earth Observing System)-Chem chemical transport model is used as a common intercomparison platform. All retrievals feature a HCHO maximum over Arkansas and Louisiana, consistent with the aircraft observations and reflecting high emissions of biogenic isoprene. The retrievals are also interconsistent in their spatial variability over the southeast US (r = 0.4-0.8 on a 0.5° × 0.5° grid) and in their day-to-day variability (r = 0.5-0.8). However, all retrievals are biased low in the mean by 20 to 51 percent, which would lead to a corresponding bias in estimates of isoprene emissions from the satellite data. The smallest bias is for OMI-BIRA (Ozone Monitoring Instrument - Belgian Institute for Space Aeronomy), which has high corrected slant columns relative to the other retrievals and low scattering weights in its air mass factor (AMF) calculation. OMI-BIRA has systematic error in its assumed vertical HCHO shape profiles for the AMF calculation, and correcting this would eliminate its bias relative to the SEAC4RS data. Our results support the use of satellite HCHO data as a quantitative proxy for isoprene emission after correction of the low mean bias. 
There is no evident pattern in the bias, suggesting that a uniform correction factor may be applied to the data until better understanding is achieved.
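    The uniform correction factor suggested in this record can be sketched numerically: estimate the normalized mean bias from collocated satellite and reference (e.g. aircraft) columns, then rescale the satellite product. The values below are invented for illustration, not SEAC4RS data.

```python
import numpy as np

def uniform_bias_scale(satellite, reference):
    """Uniform multiplicative correction factor for a low-biased satellite
    column product, estimated from collocated reference data. Assumes the
    bias is a constant relative offset with no spatial pattern."""
    nmb = (satellite.sum() - reference.sum()) / reference.sum()  # normalized mean bias
    return 1.0 / (1.0 + nmb)

# Hypothetical collocated HCHO columns (10^16 molecules cm^-2)
sat = np.array([0.8, 1.2, 1.0, 1.5])
ref = np.array([1.0, 1.6, 1.3, 2.1])
scale = uniform_bias_scale(sat, ref)   # > 1 for a low-biased product
corrected = sat * scale                # mean now matches the reference mean
```

    Because the estimated bias is a single scalar, the correction preserves the spatial and temporal variability of the original retrieval.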

  15. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology

    NASA Astrophysics Data System (ADS)

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-01

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks by combining a respiratory motion correction approach with temporal regularization in a single reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode-based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). 
Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.
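    The Patlak slope and intercept quoted in this record come from the standard graphical analysis for irreversible tracers: plotting the tissue-to-plasma ratio against the normalized integral of the plasma input yields a late-time straight line. A generic post hoc sketch with synthetic curves (not the authors' reconstruction-integrated estimator):

```python
import numpy as np

def cumtrapz0(y, t):
    """Cumulative trapezoidal integral of y over t, starting at 0."""
    return np.concatenate([[0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(t))])

def patlak_fit(t, ct, cp, t_star=10.0):
    """Estimate the Patlak slope (net influx Ki) and intercept by linear
    regression on the Patlak-transformed coordinates."""
    x = cumtrapz0(cp, t) / cp     # normalized integral of the plasma input
    y = ct / cp                   # tissue-to-plasma ratio
    mask = t >= t_star            # fit only the late, linear portion
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return slope, intercept

# Synthetic irreversible-uptake data: ct = Ki * integral(cp) + V * cp
t = np.linspace(1.0, 60.0, 60)                 # minutes
cp = 100.0 * np.exp(-0.05 * t)                 # plasma input curve
ct = 0.02 * cumtrapz0(cp, t) + 0.5 * cp        # tissue curve, Ki = 0.02, V = 0.5
ki, v = patlak_fit(t, ct, cp)                  # recovers Ki and V
```

    On noisy reconstructed frames the same regression is applied voxel- or region-wise, which is where the reported bias reductions matter.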

  16. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology.

    PubMed

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-13

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks by combining a respiratory motion correction approach with temporal regularization in a single reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode-based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). 
Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.

  17. Observability of satellite launcher navigation with INS, GPS, attitude sensors and reference trajectory

    NASA Astrophysics Data System (ADS)

    Beaudoin, Yanick; Desbiens, André; Gagnon, Eric; Landry, René

    2018-01-01

    The navigation system of a satellite launcher is of paramount importance. In order to correct the trajectory of the launcher, the position, velocity and attitude must be known with the best possible precision. In this paper, the observability of four navigation solutions is investigated. The first one is the INS/GPS couple. Then, attitude reference sensors, such as magnetometers, are added to the INS/GPS solution. The authors have already demonstrated that the reference trajectory could be used to improve the navigation performance. This approach is added to the two previously mentioned navigation systems. For each navigation solution, the observability is analyzed with different sensor error models. First, sensor biases are neglected. Then, sensor biases are modelled as random walks and as first order Markov processes. The observability is tested with the rank and condition number of the observability matrix, the time evolution of the covariance matrix and sensitivity to measurement outlier tests. The covariance matrix is exploited to evaluate the correlation between states in order to detect structural unobservability problems. Finally, when an unobservable subspace is detected, the result is verified with theoretical analysis of the navigation equations. The results show that evaluating only the observability of a model does not guarantee the ability of the aiding sensors to correct the INS estimates within the mission time. The analysis of the covariance matrix time evolution could be a powerful tool to detect this situation, however in some cases, the problem is only revealed with a sensitivity to measurement outlier test. None of the tested solutions provide GPS position bias observability. For the considered mission, the modelling of the sensor biases as random walks or Markov processes gives equivalent results. Relying on the reference trajectory can improve the precision of the roll estimates. 
However, in the context of a satellite launcher, the roll estimation error and gyroscope bias are only observable if attitude reference sensors are present.
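    The rank and condition-number test described in this record can be sketched for a linear time-invariant model. The toy system below (1-D kinematics with a measurement bias state) is an illustration invented here, not the authors' launcher model, but it shows how an unobservable bias appears as a rank deficiency.

```python
import numpy as np

def observability(A, H):
    """Rank and condition number of the observability matrix
    O = [H; HA; HA^2; ...] for a discrete linear system."""
    n = A.shape[0]
    blocks = [H]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    O = np.vstack(blocks)
    s = np.linalg.svd(O, compute_uv=False)
    rank = int(np.sum(s > s[0] * max(O.shape) * np.finfo(float).eps))
    cond = s[0] / s[-1] if s[-1] > 0 else np.inf
    return rank, cond

# States [position, velocity, bias]; the sensor measures position + bias,
# so position and bias cannot be separated: an unobservable subspace.
dt = 0.1
A = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 1.0]])
rank, cond = observability(A, H)   # rank < 3 reveals the unobservable bias
```

    As the record notes, a full-rank result alone does not guarantee that the bias converges within the mission time; covariance evolution and outlier-sensitivity tests complement this check.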

  18. [Study on correction of data bias caused by different missing mechanisms in survey of medical expenditure among students enrolling in Urban Resident Basic Medical Insurance].

    PubMed

    Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong

    2015-05-01

    The study of the medical expenditure and its influencing factors among the students enrolling in Urban Resident Basic Medical Insurance (URBMI) in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on a single missing mechanism, this study suggests a two-stage method that deals with both missing mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent-variable values were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation was conducted for the MAR values by using the completed data; then a sample selection model was used to correct the NMAR values in the multiple imputation, and a multi-factor analysis model was established. Based on 1 000 resampling runs, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, the two-stage analysis was conducted. Finally, it was found that the influencing factors on annual medical expenditure among the students enrolling in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in community health center or private clinic, hospitalization, hospitalization canceled due to certain reason, self-medication and acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can deal effectively with non-response bias and selection bias in the dependent variable of survey data.
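    The PMM step used for the MAR values can be sketched as follows: fit a linear model on the observed cases, then fill each missing value with the observed value of a donor whose predicted mean is closest. This is a single-imputation sketch with invented data; the study combines several such imputations and then applies a sample selection model for the NMAR part.

```python
import numpy as np

def pmm_impute(x, y, missing, k=5, seed=None):
    """Predictive mean matching: for each missing y, draw one of the k
    observed cases whose model-predicted mean is closest."""
    rng = np.random.default_rng(seed)
    obs = ~missing
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    pred = X @ beta                              # predicted means, all cases
    y_out = y.copy()
    for i in np.where(missing)[0]:
        d = np.abs(pred[obs] - pred[i])
        donors = np.argsort(d)[:k]               # k nearest observed cases
        y_out[i] = y[obs][rng.choice(donors)]    # copy one donor's value
    return y_out

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)                      # predictor (e.g. income)
y = 3.0 * x + rng.normal(0, 1, 200)              # outcome (e.g. expenditure)
missing = rng.random(200) < 0.07                 # ~7% missing at random
filled = pmm_impute(x, y, missing, seed=1)
```

    Because donors are real observed values, PMM never imputes impossible amounts (e.g. negative expenditure), which is one reason it performed best here.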

  19. Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models

    NASA Astrophysics Data System (ADS)

    Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura

    2014-09-01

    Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc., are a common problem in environmental models, including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors. Hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem, Thompson et al. (2006) introduced frequency dependent nudging, where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy-resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging, which makes the method more robust for non-linear biological models. Then we assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we only nudge in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
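    The band-limited idea can be illustrated offline: filter the model-observation misfit so that only the mean and the annual-cycle band survive, and subtract that increment. This sketch omits the paper's stability term and applies the filter after the fact, whereas the actual method applies such increments online inside the running model.

```python
import numpy as np

def band_increment(model, obs, dt_days=1.0):
    """Misfit (model - obs) filtered to retain only the time mean and the
    annual-cycle frequency band."""
    n = len(model)
    f = np.fft.rfftfreq(n, d=dt_days)                # cycles per day
    spec = np.fft.rfft(np.asarray(model) - np.asarray(obs))
    keep = (f == 0.0) | np.isclose(f, 1.0 / 365.0)   # mean + annual band
    spec[~keep] = 0.0
    return np.fft.irfft(spec, n)

t = np.arange(3650.0)                                # ten years, daily
obs = 10.0 + 5.0 * np.sin(2 * np.pi * t / 365.0)
model = 12.0 + 4.0 * np.sin(2 * np.pi * t / 365.0) + np.sin(2 * np.pi * t / 30.0)
nudged = model - band_increment(model, obs)          # mean and annual bias removed;
                                                     # the 30-day variability survives
```

    Conventional nudging would also damp the 30-day signal; restricting the correction to the mean and annual bands is exactly what preserves the higher-frequency variability the paper assesses.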

  20. Decadal evaluation of regional climate, air quality, and their interactions over the continental US using WRF/Chem version 3.6.1

    NASA Astrophysics Data System (ADS)

    Yahya, Khairunnisa; Wang, Kai; Campbell, Patrick; Glotfelty, Timothy; He, Jian; Zhang, Yang

    2016-02-01

    The Weather Research and Forecasting model with Chemistry (WRF/Chem) v3.6.1 with the Carbon Bond 2005 (CB05) gas-phase mechanism is evaluated for its first decadal application during 2001-2010 using the Representative Concentration Pathway 8.5 (RCP 8.5) emissions to assess its capability and appropriateness for long-term climatological simulations. The initial and boundary conditions are downscaled from the modified Community Earth System Model/Community Atmosphere Model (CESM/CAM5) v1.2.2. The meteorological initial and boundary conditions are bias-corrected using the National Centers for Environmental Prediction Final (FNL) Operational Global Analysis data. Climatological evaluations are carried out for meteorological, chemical, and aerosol-cloud-radiation variables against data from surface networks and satellite retrievals. The model performs very well for the 2 m temperature (T2) for the 10-year period, with only a small cold bias of -0.3 °C. Biases in other meteorological variables including relative humidity at 2 m, wind speed at 10 m, and precipitation tend to be site- and season-specific; however, with the exception of T2, consistent annual biases exist for most of the years from 2001 to 2010. Ozone mixing ratios are slightly overpredicted at urban and suburban locations with a normalized mean bias (NMB) of 9.7 % but underpredicted at rural locations with an NMB of -8.8 %. PM2.5 concentrations are moderately overpredicted with an NMB of 23.3 % at rural sites but slightly underpredicted with an NMB of -10.8 % at urban/suburban sites. In general, the model performs relatively well for chemical and meteorological variables, and not as well for aerosol-cloud-radiation variables. Cloud-aerosol variables including aerosol optical depth, cloud water path, cloud optical thickness, and cloud droplet number concentration are generally underpredicted on average across the continental US. 
Overpredictions of several cloud variables over the eastern US result in underpredictions of radiation variables (such as net shortwave radiation - GSW - with a mean bias - MB - of -5.7 W m-2) and overpredictions of shortwave and longwave cloud forcing (MBs of ˜ 7 to 8 W m-2), which are important climate variables. While the current performance is deemed to be acceptable, improvements to the bias-correction method for CESM downscaling and the model parameterizations of cloud dynamics and thermodynamics, as well as aerosol-cloud interactions, can potentially improve model performance for long-term climate simulations.
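    The MB and NMB statistics quoted throughout this record have standard definitions, sketched here with invented paired values:

```python
import numpy as np

def mean_bias(model, obs):
    """Mean bias (MB), in the units of the variable."""
    return np.mean(model - obs)

def normalized_mean_bias(model, obs):
    """Normalized mean bias (NMB) in percent."""
    return 100.0 * np.sum(model - obs) / np.sum(obs)

# Hypothetical paired daily PM2.5 values (ug m^-3)
obs = np.array([8.0, 12.0, 10.0, 20.0])
mod = np.array([10.0, 15.0, 11.0, 25.0])
mb = mean_bias(mod, obs)               # 2.75 ug m^-3
nmb = normalized_mean_bias(mod, obs)   # 22.0 %
```

    Note that NMB weights errors by the observed total, so a few high-concentration days dominate the statistic.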

  1. Effect of glomerular filtration rate at dialysis initiation on survival in patients with advanced chronic kidney disease: what is the effect of lead-time bias?

    PubMed Central

    Janmaat, Cynthia J; van Diepen, Merel; Krediet, Raymond T; Hemmelder, Marc H; Dekker, Friedo W

    2017-01-01

    Purpose: Current clinical guidelines recommend to initiate dialysis in the presence of symptoms or signs attributable to kidney failure, often with a glomerular filtration rate (GFR) of 5–10 mL/min/1.73 m2. Little evidence exists about the optimal kidney function to start dialysis. Thus far, most observational studies have been limited by lead-time bias. Only a few studies have accounted for lead-time bias, and showed contradictory results. We examined the effect of GFR at dialysis initiation on survival in chronic kidney disease patients, and the role of lead-time bias therein. We used both kidney function based on 24-hour urine collection (measured GFR [mGFR]) and estimated GFR (eGFR). Materials and methods: A total of 1,143 patients with eGFR data at dialysis initiation and 852 patients with mGFR data were included from the NECOSAD cohort. Cox regression was used to adjust for potential confounders. To examine the effect of lead-time bias, survival was counted from the time of dialysis initiation or from a common starting point (GFR 20 mL/min/1.73 m2), using linear interpolation models. Results: Without lead-time correction, no difference between early and late starters was present based on eGFR (hazard ratio [HR] 1.03, 95% confidence interval [CI] 0.81–1.3). However, after lead-time correction, early initiation showed a survival disadvantage (HR between 1.1 [95% CI 0.82–1.48] and 1.33 [95% CI 1.05–1.68]). Based on mGFR, the potential survival benefit for early starters without lead-time correction (HR 0.8, 95% CI 0.62–1.03) completely disappeared after lead-time correction (HR between 0.94 [95% CI 0.65–1.34] and 1.21 [95% CI 0.95–1.56]). Dialysis start time differed about a year between early and late initiation. Conclusion: Lead-time bias is not only a methodological problem but also has clinical impact when assessing the optimal kidney function to start dialysis. Therefore, lead-time bias is extremely important to correct for. 
Taking account of lead-time bias, this controlled study showed that early dialysis initiation (eGFR >7.9, mGFR >6.6 mL/min/1.73 m2) was not associated with an improvement in survival. Based on kidney function, this study suggests that in some patients, dialysis could be started even later than an eGFR <5.7 and mGFR <4.3 mL/min/1.73 m2. PMID:28442934
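    The linear-interpolation step behind the common starting point can be sketched as follows: interpolate each patient's GFR history to the time it crossed the common threshold, and count survival from there. The visit history below is hypothetical, and this is only the interpolation piece, not the Cox analysis.

```python
def lead_time_to_threshold(times, gfr, threshold=20.0):
    """Linearly interpolate the time at which a declining GFR series
    crossed the common threshold (mL/min/1.73 m2). Returns None if the
    threshold was never crossed between visits."""
    for (t0, t1), (g0, g1) in zip(zip(times, times[1:]), zip(gfr, gfr[1:])):
        if g0 >= threshold >= g1:                        # crossing interval
            return t0 + (g0 - threshold) * (t1 - t0) / (g0 - g1)
    return None

# Hypothetical yearly visits; dialysis started at t = 3 years
times = [0.0, 1.0, 2.0, 3.0]
gfr = [28.0, 24.0, 16.0, 9.0]
t20 = lead_time_to_threshold(times, gfr)    # GFR crossed 20 between years 1 and 2
lead_time = times[-1] - t20                 # added to survival counted from dialysis
```

    Early starters accrue a shorter lead time than late starters, which is exactly the asymmetry that biases naive comparisons.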

  2. Latent Structure Agreement Analysis

    DTIC Science & Technology

    1989-11-01

    correct for bias in estimation of disease prevalence due to misclassification error [39]. Software Varying panel latent class agreement models can be...D., and L. M. Irwig, "Estimation of Test Error Rates, Disease Prevalence and Relative Risk from Misclassified Data: A Review," Journal of Clinical

  3. Bias correction in species distribution models: pooling survey and collection data for multiple species.

    PubMed

    Fithian, William; Elith, Jane; Hastie, Trevor; Keith, David A

    2015-04-01

    Presence-only records may provide data on the distributions of rare species, but commonly suffer from large, unknown biases due to their typically haphazard collection schemes. Presence-absence or count data collected in systematic, planned surveys are more reliable but typically less abundant. We proposed a probabilistic model to allow for joint analysis of presence-only and survey data to exploit their complementary strengths. Our method pools presence-only and presence-absence data for many species and maximizes a joint likelihood, simultaneously estimating and adjusting for the sampling bias affecting the presence-only data. By assuming that the sampling bias is the same for all species, we can borrow strength across species to efficiently estimate the bias and improve our inference from presence-only data. We evaluate our model's performance on data for 36 eucalypt species in south-eastern Australia. We find that presence-only records exhibit a strong sampling bias towards the coast and towards Sydney, the largest city. Our data-pooling technique substantially improves the out-of-sample predictive performance of our model when the amount of available presence-absence data for a given species is scarce. If we have only presence-only data and no presence-absence data for a given species, but both types of data for several other species that suffer from the same spatial sampling bias, then our method can obtain an unbiased estimate of the first species' geographic range.
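    The joint-likelihood idea can be sketched in a stripped-down form: a single species on a discretized domain, presence-only counts thinned by a sampling-effort covariate, and an unbiased presence-absence survey sharing the same intensity. All variable names and values are illustrative; the paper's model pools many species (which is what identifies the shared bias) and uses point-process machinery.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000                                     # grid cells of unit area
x = rng.normal(size=n)                       # habitat covariate
z = rng.normal(size=n)                       # sampling-effort (bias) covariate

a, b, c = -3.0, 1.0, 1.5                     # true intercept, habitat, bias effects
lam = np.exp(a + b * x)                      # true species intensity
po = rng.poisson(lam * np.exp(c * z))        # presence-only counts, biased by z
pa = rng.random(n) < 1.0 - np.exp(-lam)      # unbiased presence-absence survey

def negloglik(theta):
    a_, b_, c_ = theta
    eta = a_ + b_ * x + c_ * z                       # log of biased PO intensity
    ll_po = np.sum(po * eta - np.exp(eta))           # Poisson count likelihood
    p = 1.0 - np.exp(-np.exp(a_ + b_ * x))           # cell-level presence probability
    ll_pa = np.sum(np.where(pa, np.log(p), np.log(1.0 - p)))
    return -(ll_po + ll_pa)                          # joint likelihood of both datasets

a_hat, b_hat, c_hat = minimize(negloglik, np.zeros(3), method="BFGS").x
```

    The presence-absence term pins down the unbiased intensity, which lets the presence-only term attribute the remaining variation to the bias coefficient c.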

  4. Atmospheric correction for inland water based on Gordon model

    NASA Astrophysics Data System (ADS)

    Li, Yunmei; Wang, Haijun; Huang, Jiazhu

    2008-04-01

    Remote sensing is widely used in water quality monitoring because it can capture radiance information over a whole area at the same time. However, more than 80% of the radiance detected by sensors at the top of the atmosphere is contributed by the atmosphere, not directly by the water body. Water radiance information is seriously confounded by atmospheric molecular and aerosol scattering and absorption, and even a slight misestimate of the atmospheric influence can induce large errors in water quality evaluation. To retrieve water composition accurately, the water and atmospheric signals must first be separated. In this paper, we study atmospheric correction methods for inland waters such as Taihu Lake. A Landsat-5 TM image was corrected based on the Gordon atmospheric correction model, and two kinds of data were used to calculate Rayleigh scattering, aerosol scattering and radiative transmission above Taihu Lake; the influence of ozone and whitecaps was also corrected. One kind of data was synchronous meteorological data, and the other was a synchronous MODIS image. Finally, remote sensing reflectance was retrieved from the TM image, and the effect of the two methods was analyzed using in situ measured water surface spectra. The results indicate that measured and estimated remote sensing reflectance were close for both methods. Compared to the method using the MODIS image, the method using synchronous meteorological data is more accurate, with a bias close to the error criterion accepted for inland water quality inversion. This suggests that the method is suitable for atmospheric correction of TM images over Taihu Lake.
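    At its core, the Gordon-type correction partitions the top-of-atmosphere radiance as Lt = Lr + La + t·Lw and solves for the water-leaving term. A minimal sketch with hypothetical band radiances, omitting the ozone, whitecap and glint terms the paper also handles:

```python
def water_leaving_radiance(lt, lr, la, t_dir):
    """Invert the simplified Gordon model Lt = Lr + La + t * Lw for the
    water-leaving radiance Lw."""
    return (lt - lr - la) / t_dir

# Hypothetical TM-band radiances (W m^-2 sr^-1 um^-1)
lt = 80.0       # top-of-atmosphere radiance
lr = 55.0       # Rayleigh path radiance (computable from geometry/pressure)
la = 18.0       # aerosol path radiance (from meteorology or MODIS)
t_dir = 0.9     # diffuse transmittance
lw = water_leaving_radiance(lt, lr, la, t_dir)
```

    Because Lw is a small residual of large terms, the result is very sensitive to the aerosol estimate, which is why the choice between meteorological data and MODIS matters.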

  5. A working memory bias for alcohol-related stimuli depends on drinking score.

    PubMed

    Kessler, Klaus; Pajak, Katarzyna Malgorzata; Harkin, Ben; Jones, Barry

    2013-03-01

    We tested 44 participants with respect to their working memory (WM) performance on alcohol-related versus neutral visual stimuli. Previously an alcohol attentional bias (AAB) had been reported using these stimuli, where the attention of frequent drinkers was automatically drawn toward alcohol-related items (e.g., beer bottle). The present study set out to provide evidence for an alcohol memory bias (AMB) that would persist over longer time-scales than the AAB. The WM task we used required memorizing 4 stimuli in their correct locations and a visual interference task was administered during a 4-sec delay interval. A subsequent probe required participants to indicate whether a stimulus was shown in the correct or incorrect location. For each participant we calculated a drinking score based on 3 items derived from the Alcohol Use Questionnaire, and we observed that higher scorers better remembered alcohol-related images compared with lower scorers, particularly when these were presented in their correct locations upon recall. This provides first evidence for an AMB. It is important to highlight that this effect persisted over a 4-sec delay period including a visual interference task that erased iconic memories and diverted attention away from the encoded items, thus the AMB cannot be reduced to the previously reported AAB. Our finding calls for further investigation of alcohol-related cognitive biases in WM, and we propose a preliminary model that may guide future research. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  6. Measurement and Modeling of Blocking Contacts for Cadmium Telluride Gamma Ray Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beck, Patrick R.

    2010-01-07

    Gamma ray detectors are important in national security applications, medicine, and astronomy. Semiconductor materials with high density and atomic number, such as Cadmium Telluride (CdTe), offer a small device footprint, but their performance is limited by noise at room temperature; however, improved device design can decrease detector noise by reducing leakage current. This thesis characterizes and models two unique Schottky devices: one with an argon ion sputter etch before Schottky contact deposition and one without. Analysis of current versus voltage characteristics shows that thermionic emission alone does not describe these devices. This analysis points to reverse bias generation current or leakage through an inhomogeneous barrier. Modeling the devices in reverse bias with thermionic field emission and a leaky Schottky barrier yields good agreement with measurements. Also numerical modeling with a finite-element physics-based simulator suggests that reverse bias current is a combination of thermionic emission and generation. This thesis proposes further experiments to determine the correct model for reverse bias conduction. Understanding conduction mechanisms in these devices will help develop more reproducible contacts, reduce leakage current, and ultimately improve detector performance.
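    The baseline that the thesis shows is insufficient on its own is the textbook thermionic-emission diode law. A sketch with assumed parameters (the Richardson constant, barrier height and ideality factor below are illustrative, not the thesis' fitted values):

```python
import numpy as np

K_B = 8.617e-5       # Boltzmann constant, eV/K
A_STAR = 12.0        # effective Richardson constant, A cm^-2 K^-2 (assumed)

def thermionic_j(v, phi_b=0.8, n_ideal=1.3, t=300.0):
    """Thermionic-emission current density
    J = A* T^2 exp(-phi_B / kT) * (exp(V / (n kT)) - 1)."""
    j_s = A_STAR * t**2 * np.exp(-phi_b / (K_B * t))   # saturation current density
    return j_s * (np.exp(v / (n_ideal * K_B * t)) - 1.0)

j_fwd = thermionic_j(0.3)     # forward bias: grows exponentially with V
j_rev = thermionic_j(-5.0)    # reverse bias: saturates near -J_s
```

    The measured devices deviate from this flat reverse-bias saturation, which is what motivates the generation-current and inhomogeneous-barrier terms in the thesis.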

  7. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    PubMed

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to the intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is local entropy derived from a grey level distribution of local image. The means of this objective function have a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used. Therefore, our model can estimate the bias field more accurately. Finally, minimization of this energy function with a level set regularization term, image segmentation, and bias field estimation can be achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
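    The local-entropy weight in the energy above is the Shannon entropy of the grey-level histogram in a small neighbourhood of each pixel. A self-contained sketch of just that weight (a plain double loop for clarity, not the full level-set model):

```python
import numpy as np

def local_entropy(img, win=3, bins=8):
    """Shannon entropy of the quantized grey-level histogram in a
    win x win neighbourhood of each pixel of a [0, 1] image."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    q = np.minimum((padded * bins).astype(int), bins - 1)   # quantize grey levels
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = q[i:i + win, j:j + win].ravel()
            p = np.bincount(patch, minlength=bins) / patch.size
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log(p))              # Shannon entropy
    return out

img = np.zeros((8, 8)); img[:, 4:] = 1.0    # step edge
ent = local_entropy(img)                    # nonzero only near the edge
```

    Entropy is low in homogeneous regions and high where grey levels mix, so using it as a weight emphasizes boundary information while remaining robust to a smooth multiplicative bias field.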

  8. Seismic waveform inversion for core-mantle boundary topography

    NASA Astrophysics Data System (ADS)

    Colombi, Andrea; Nissen-Meyer, Tarje; Boschi, Lapo; Giardini, Domenico

    2014-07-01

    The topography of the core-mantle boundary (CMB) is directly linked to the dynamics of both the mantle and the outer core, although it is poorly constrained and understood. Recent studies have produced topography models with mutual agreement up to degree 2. A broad-band waveform inversion strategy is introduced and applied here, with relatively low computational cost and based on a first-order Born approximation. Its performance is validated using synthetic waveforms calculated in theoretical earth models that include different topography patterns with varying lateral wavelengths, from 600 to 2500 km, and magnitudes (˜10 km peak-to-peak). The source-receiver geometry focuses mainly on the Pdiff, PKP, PcP and ScS phases. The results show that PKP branches, PcP and ScS generally perform well and in a similar fashion, while Pdiff yields unsatisfactory results. We investigate also how 3-D mantle correction influences the output models, and find that despite the disturbance introduced, the models recovered do not appear to be biased, provided that the 3-D model is correct. Using cross-correlated traveltimes, we derive new topography models from both P and S waves. The static corrections used to remove the mantle effect are likely to affect the inversion, compromising the agreement between models derived from P and S data. By modelling traveltime residuals starting from sensitivity kernels, we show how the simultaneous use of volumetric and boundary kernels can reduce the bias coming from mantle structures. The joint inversion approach should be the only reliable method to invert for CMB topography using absolute cross-correlation traveltimes.
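    The cross-correlation traveltime measurement feeding the inversion can be sketched simply: the residual is the lag that maximizes the cross-correlation between observed and synthetic waveforms. The pulses below are synthetic placeholders, and this is only the measurement, not the kernel-based inversion.

```python
import numpy as np

def cc_delay(obs, syn, dt):
    """Traveltime residual (seconds) between an observed and a synthetic
    waveform, from the lag maximizing their cross-correlation. Positive
    means the observed arrival is late relative to the synthetic."""
    cc = np.correlate(obs, syn, mode="full")
    lag = np.argmax(cc) - (len(syn) - 1)
    return lag * dt

dt = 0.05
t = np.arange(0.0, 40.0, dt)
syn = np.exp(-((t - 20.0) ** 2) / 2.0)     # synthetic pulse
obs = np.exp(-((t - 21.2) ** 2) / 2.0)     # observed pulse, 1.2 s late
residual = cc_delay(obs, syn, dt)          # ≈ +1.2 s
```

    In the paper these residuals are then mapped through boundary (and, jointly, volumetric) sensitivity kernels to topography perturbations.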

  9. High-resolution near real-time drought monitoring in South Asia

    NASA Astrophysics Data System (ADS)

    Aadhar, Saran; Mishra, Vimal

    2017-10-01

    Drought in South Asia affect food and water security and pose challenges for millions of people. For policy-making, planning, and management of water resources at sub-basin or administrative levels, high-resolution datasets of precipitation and air temperature are required in near-real time. We develop a high-resolution (0.05°) bias-corrected precipitation and temperature data that can be used to monitor near real-time drought conditions over South Asia. Moreover, the dataset can be used to monitor climatic extremes (heat and cold waves, dry and wet anomalies) in South Asia. A distribution mapping method was applied to correct bias in precipitation and air temperature, which performed well compared to the other bias correction method based on linear scaling. Bias-corrected precipitation and temperature data were used to estimate Standardized precipitation index (SPI) and Standardized Precipitation Evapotranspiration Index (SPEI) to assess the historical and current drought conditions in South Asia. We evaluated drought severity and extent against the satellite-based Normalized Difference Vegetation Index (NDVI) anomalies and satellite-driven Drought Severity Index (DSI) at 0.05°. The bias-corrected high-resolution data can effectively capture observed drought conditions as shown by the satellite-based drought estimates. High resolution near real-time dataset can provide valuable information for decision-making at district and sub-basin levels.
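    Distribution mapping, in its simplest empirical form, replaces each model value by the observed value at the same empirical quantile. A basic sketch with synthetic gamma-distributed "precipitation" (the paper fits distributions rather than using raw empirical quantiles, so treat this as the family's simplest member):

```python
import numpy as np

def quantile_map(model, obs, values):
    """Empirical distribution (quantile) mapping: build a transfer function
    from model quantiles to observed quantiles and apply it to new values."""
    q = np.linspace(0.0, 1.0, 101)
    mq = np.quantile(model, q)         # model climatology quantiles
    oq = np.quantile(obs, q)           # observed climatology quantiles
    return np.interp(values, mq, oq)   # piecewise-linear transfer function

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 4.0, 5000)            # "observed" daily precipitation
model = 0.6 * rng.gamma(2.0, 4.0, 5000)    # model with a dry bias
corrected = quantile_map(model, obs, model)
```

    After mapping, the corrected series matches the observed distribution (and hence the thresholds underlying SPI/SPEI) while preserving the model's day-to-day sequencing.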

  10. Understanding the neural basis of cognitive bias modification as a clinical treatment for depression.

    PubMed

    Eguchi, Akihiro; Walters, Daniel; Peerenboom, Nele; Dury, Hannah; Fox, Elaine; Stringer, Simon

    2017-03-01

    [Correction Notice: An Erratum for this article was reported in Vol 85(3) of Journal of Consulting and Clinical Psychology (see record 2017-07144-002). In the article, there was an error in the Discussion section's first paragraph for Implications and Future Work. The in-text reference citation for Penton-Voak et al. (2013) was incorrectly listed as "Blumenfeld, Preminger, Sagi, and Tsodyks (2006)". All versions of this article have been corrected.] Objective: Cognitive bias modification (CBM) eliminates cognitive biases toward negative information and is efficacious in reducing depression recurrence, but the mechanisms behind the bias elimination are not fully understood. The present study investigated, through computer simulation of neural network models, the neural dynamics underlying the use of CBM in eliminating the negative biases in the way that depressed patients evaluate facial expressions. We investigated 2 new CBM methodologies using biologically plausible synaptic learning mechanisms-continuous transformation learning and trace learning-which guide learning by exploiting either the spatial or temporal continuity between visual stimuli presented during training. We first describe simulations with a simplified 1-layer neural network, and then we describe simulations in a biologically detailed multilayer neural network model of the ventral visual pathway. After training with either the continuous transformation learning rule or the trace learning rule, the 1-layer neural network eliminated biases in interpreting neutral stimuli as sad. The multilayer neural network trained with realistic face stimuli was also shown to be able to use continuous transformation learning or trace learning to reduce biases in the interpretation of neutral stimuli. The simulation results suggest 2 biologically plausible synaptic learning mechanisms, continuous transformation learning and trace learning, that may subserve CBM. 
The results are highly informative for the development of experimental protocols to produce optimal CBM training methodologies with human participants. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
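    The trace rule named in this record can be written in a minimal form: the weight update is driven by an exponentially decaying trace of the post-synaptic activity, so temporally adjacent inputs become bound to the same output unit. A single-unit sketch (the paper uses multilayer networks of such units; parameters here are arbitrary):

```python
import numpy as np

def trace_learning(patterns, alpha=0.1, eta=0.8):
    """Train one weight vector with the trace rule dw = alpha * ybar * x,
    where ybar is an exponential trace of post-synaptic activity y."""
    w = np.full(patterns.shape[1], 0.1)
    ybar = 0.0
    for x in patterns:                      # temporal sequence of inputs
        y = w @ x                           # post-synaptic activation
        ybar = (1 - eta) * y + eta * ybar   # temporal trace of activity
        w += alpha * ybar * x               # trace-rule update
        w /= np.linalg.norm(w)              # normalization keeps w bounded
    return w

# Two stimuli alternating in time become associated with the same unit,
# while inputs never shown (dims 2 and 3) are squeezed out.
seq = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]] * 20)
w = trace_learning(seq)
```

    Continuous transformation learning, by contrast, exploits spatial overlap between successive stimuli rather than a temporal trace.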

  11. `Been There Done That': Disentangling Option Value Effects from User Heterogeneity When Valuing Natural Resources with a Use Component

    NASA Astrophysics Data System (ADS)

    Lyssenko, Nikita; Martínez-Espiñeira, Roberto

    2012-11-01

    Endogeneity bias arises in contingent valuation studies when the error term in the willingness to pay (WTP) equation is correlated with explanatory variables because observable and unobservable characteristics of the respondents affect both their WTP and the value of those variables. We correct for the endogeneity of variables that capture previous experience with the resource valued, humpback whales, and with the geographic area of study. We consider several endogenous behavioral variables. Therefore, we apply a multivariate Probit approach to jointly model them with WTP. In this case, correcting for endogeneity increases econometric efficiency and substantially corrects the bias affecting the estimated coefficients of the experience variables, by isolating the decreasing effect on option value caused by having already experienced the resource. Stark differences are unveiled between the marginal effects on WTP of previous experience of the resource in an alternative location versus experience in the location studied, Newfoundland and Labrador (Canada).

  12. 'Been there done that': disentangling option value effects from user heterogeneity when valuing natural resources with a use component.

    PubMed

    Lyssenko, Nikita; Martínez-Espiñeira, Roberto

    2012-11-01

    Endogeneity bias arises in contingent valuation studies when the error term in the willingness to pay (WTP) equation is correlated with explanatory variables, because observable and unobservable characteristics of the respondents affect both their WTP and the value of those variables. We correct for the endogeneity of variables that capture previous experience with the resource valued, humpback whales, and with the geographic area of study. Since we consider several endogenous behavioral variables, we apply a multivariate probit approach to model them jointly with WTP. In this case, correcting for endogeneity increases econometric efficiency and substantially corrects the bias affecting the estimated coefficients of the experience variables, by isolating the decreasing effect on option value caused by having already experienced the resource. Stark differences are unveiled between the marginal effects on WTP of previous experience of the resource in an alternative location versus experience in the location studied, Newfoundland and Labrador (Canada).

  13. Bias correction of bounded location errors in presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Brost, Brian M.; Hooten, Mevin B.

    2017-01-01

    Location error occurs when the true location is different than the reported location. Because habitat characteristics at the true location may be different than those at the reported location, ignoring location error may lead to unreliable inference concerning species–habitat relationships. We explain how a transformation known in the spatial statistics literature as a change of support (COS) can be used to correct for location errors when the true locations are points with unknown coordinates contained within arbitrarily shaped polygons. We illustrate the flexibility of the COS by modelling the resource selection of Whooping Cranes (Grus americana) using citizen-contributed records with locations that were reported with error. We also illustrate the COS with a simulation experiment. In our analysis of Whooping Crane resource selection, we found that location error can result in up to a five-fold change in coefficient estimates. Our simulation study shows that location error can result in coefficient estimates that have the wrong sign, but a COS can efficiently correct for the bias.

  14. A multi-source precipitation approach to fill gaps over a radar precipitation field

    NASA Astrophysics Data System (ADS)

    Tesfagiorgis, K. B.; Mahani, S. E.; Khanbilvardi, R.

    2012-12-01

    Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to the spatial limitations of radar and gauge products. The present work develops an approach to seamlessly blend satellite, radar, climatological and gauge precipitation products to fill gaps in ground-based radar precipitation fields. To merge different precipitation products, the bias of each product relative to the others must first be removed. For bias correction, the study used an ensemble-based method that estimates spatially varying multiplicative biases in SPEs using a radar rainfall product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area, and spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. A weighted Successive Correction Method (SCM) is proposed to merge the error-corrected satellite and radar rainfall estimates. In addition to SCM, we use a Bayesian spatial method to merge the gap-free radar with rain gauges, climatological rainfall sources and SPEs. We demonstrate the method using the SPE Hydro-Estimator (HE), radar-based Stage-II, the climatological product PRISM and a rain gauge dataset for several rain events from 2006 to 2008 over three different geographical locations in the United States. Results show that the SCM method in combination with the Bayesian spatial model produces a precipitation product in good agreement with independent measurements. The study implies that, using the available radar pixels surrounding the gap area together with rain gauge, PRISM and satellite products, a radar-like product is achievable over radar gap areas, which benefits the scientific community.
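
The multiplicative bias-factor step described above can be sketched in a few lines. All fields and distributions below are invented for illustration, and a single bulk factor stands in for the paper's spatially varying field of factors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example fields (mm/h); the radar product is treated as the reference.
radar = rng.gamma(2.0, 2.0, size=500)                      # reference rain rates
satellite = 1.4 * radar + rng.normal(0.0, 0.5, size=500)   # SPE with a multiplicative bias

# Sample rainy pixels and estimate one multiplicative bias factor
# (the paper estimates a spatially varying field of such factors).
rainy = radar > 0.1
bias_factor = radar[rainy].sum() / satellite[rainy].sum()
corrected = satellite * bias_factor
```

By construction, the corrected satellite field matches the radar total over the sampled rainy pixels; in the paper the factors then feed a spatial interpolation before the SCM merge.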

  15. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    PubMed

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
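
The paper fits the mixture with a Bayesian Gibbs sampler; as a simpler stand-in, the sketch below fits a two-component normal mixture with a common variance by EM on simulated scores (all parameter values invented). The posterior weights `w` play the role of the posterior probabilities of putative mastitis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated somatic cell scores: "healthy" ~ N(2.5, 1), "diseased" ~ N(5.5, 1),
# with P(diseased) = 0.3 (all values invented).
n = 2000
z = rng.random(n) < 0.3
y = np.where(z, rng.normal(5.5, 1.0, n), rng.normal(2.5, 1.0, n))

def em_two_normal(y, iters=200):
    """EM for a two-component normal mixture with a common variance
    (normalizing constants cancel in the E-step because sigma is shared)."""
    mu = np.array([y.min(), y.max()])
    sigma, pm = y.std(), 0.5
    for _ in range(iters):
        # E-step: posterior probability that each record is from component 2
        d1 = np.exp(-0.5 * ((y - mu[0]) / sigma) ** 2)
        d2 = np.exp(-0.5 * ((y - mu[1]) / sigma) ** 2)
        w = pm * d2 / ((1.0 - pm) * d1 + pm * d2)
        # M-step: update mixing proportion, means, and the common variance
        pm = w.mean()
        mu[0] = ((1.0 - w) * y).sum() / (1.0 - w).sum()
        mu[1] = (w * y).sum() / w.sum()
        sigma = np.sqrt((((1.0 - w) * (y - mu[0]) ** 2).sum()
                         + (w * (y - mu[1]) ** 2).sum()) / len(y))
    return mu, sigma, pm, w

mu, sigma, pm, w = em_two_normal(y)
```

Thresholding `w` at 0.5 classifies records as healthy or diseased; the paper's Gibbs formulation additionally carries genetic and permanent environmental random effects, which this sketch omits.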

  16. Determining relevant parameters for a statistical tropical cyclone genesis tool based upon global model output

    NASA Astrophysics Data System (ADS)

    Halperin, D.; Hart, R. E.; Fuelberg, H. E.; Cossuth, J.

    2013-12-01

    Predicting tropical cyclone (TC) genesis has been a vexing problem for forecasters. While the literature describes environmental conditions that are necessary for TC genesis, predicting if and when a specific disturbance will organize and become a TC remains a challenge. As recently as 5-10 years ago, global models possessed little if any skill in forecasting TC genesis. However, due to increased resolution and more advanced model parameterizations, we have reached the point where global models can provide useful TC genesis guidance to operational forecasters. A recent study evaluated five global models' ability to predict TC genesis out to four days over the North Atlantic basin (Halperin et al. 2013). The results indicate that the models are indeed able to capture the genesis time and location correctly a fair percentage of the time. The study also uncovered model biases. For example, the probability of detection and the false alarm rate vary spatially within the basin. Also, as expected, the models' performance decreases with increasing lead time. To explain these and other biases, it is useful to analyze the model-indicated genesis events further to determine whether there are systematic differences between successful forecasts (hits), false alarms, and missed events. This study will examine composites of a number of physically relevant environmental parameters (e.g., magnitude of vertical wind shear, areally averaged mid-level relative humidity) and disturbance-based parameters (e.g., 925 hPa maximum wind speed, vertical alignment of relative vorticity) for each TC genesis event classification (i.e., hit, false alarm, miss). We will use standard statistical tests (e.g., Student's t test, Mann-Whitney U test) to assess whether any differences are statistically significant. We also plan to discuss how these composite results apply to a few illustrative case studies. 
The results may help determine which aspects of the forecast are (in)correct and whether the incorrect aspects can be bias-corrected. This, in turn, may allow us to further enhance probabilistic forecasts of TC genesis.

  17. Applicability of ranked Regional Climate Models (RCM) to assess the impact of climate change on Ganges: A case study.

    NASA Astrophysics Data System (ADS)

    Anand, Jatin; Devak, Manjula; Gosain, Ashvani Kumar; Khosa, Rakesh; Dhanya, Ct

    2017-04-01

    The negative impacts of climate change are felt over a wide range of spatial scales, from small basins to large watersheds, and can outweigh the benefits of natural water systems. General Circulation Models (GCMs) have been widely used as input to hydrological models (HMs) to simulate the hydrological components of a river basin. However, the coarse scale of GCMs and their spatio-temporal biases restrict their use at finer resolutions, and downscaling adds a further level of uncertainty on top of model and scenario uncertainty. Output from Regional Climate Models (RCMs) can reduce the uncertainties arising from GCMs; RCMs, driven by GCM input at their lateral boundaries, also take the topography of the area into account. However, RCMs have inherent systematic biases, so bias correction is a prerequisite before their output is fed to HMs, and the RCMs need to be ranked a priori. In this study, the impact of climate change on the Ganga basin, India, is assessed using ranked RCMs. First, bias correction of 14 RCMs is performed using quantile-quantile mapping and the equidistant cumulative distribution method for the historic (1990-2004) and future (2021-2100) scenarios, respectively. Runoff simulations from the Soil and Water Assessment Tool (SWAT) for the historic scenario are used to rank the RCMs. The entropy and PROMETHEE-2 methods are employed to rank the RCMs based on five performance indicators: Nash-Sutcliffe efficiency (NSE), coefficient of determination (R2), normalised root mean square error (NRMSE), absolute normalised mean bias error (ANMBE) and average absolute relative error (AARE). The results illustrate that each performance indicator behaves differently for different RCMs. 
RCA4 (CNRM-CERFACS) is found to be the best model, with the highest value (0.85), followed by RCA4 (MIROC) and RCA4 (ICHEC) with values of 0.80 and 0.53, respectively, for the Ganga basin. Flow-duration curves and the long-term average streamflow for the ranked RCMs confirm that the SWAT model captures the hydrology of the basin well. For the monsoon months (June, July, August and September), future annual mean surface runoff decreases substantially (-50% to -10%), while base flow for October, November and December is projected to increase (10-20%). Analysis of snow-melt hydrology indicates that snow-melt is projected to increase during the months of November to March, with the maximum increase (400%) shown by RCA4 (CNRM-CERFACS) and the least by RCA4 (ICHEC) (15%). Further, all RCMs project a higher frequency of dry monsoons and a lower frequency of wet monsoons. The simulated base flow and recharge vary from +100% to -500% and +97% to -600%, respectively, with the central part of the basin undergoing a major loss in recharge. This research therefore provides important insights into the response of surface runoff to climate change projections, supporting better administration and management of available resources. Keywords: climate change, uncertainty, Soil and Water Assessment Tool (SWAT), General Circulation Model (GCM), Regional Climate Model (RCM), bias correction.
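
The quantile-quantile mapping step used above can be sketched empirically (the gamma distributions and bias magnitudes are invented; the equidistant CDF variant is not shown):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented daily precipitation samples: observations vs a biased RCM.
obs = rng.gamma(0.8, 6.0, size=5000)        # "observed" climate
rcm_hist = rng.gamma(0.8, 4.0, size=5000)   # RCM historic run (intensities too weak)
rcm_fut = rng.gamma(0.8, 4.4, size=5000)    # RCM future run with a change signal

def quantile_map(x, model_hist, obs_hist):
    """Empirical quantile-quantile mapping: replace each model value with the
    observed value at the same non-exceedance probability."""
    q = np.searchsorted(np.sort(model_hist), x) / len(model_hist)
    return np.quantile(obs_hist, np.clip(q, 0.0, 1.0))

# Mapping the historic run onto itself reproduces the observed distribution;
# the future run is corrected onto the observed scale, preserving rank order.
qm_hist = quantile_map(rcm_hist, rcm_hist, obs)
corrected_fut = quantile_map(rcm_fut, rcm_hist, obs)
```

Note that the correction assumes the historic model-observation relationship holds in the future (stationarity), the very assumption the head of this record collection questions.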

  18. Impact of chemical lateral boundary conditions in a regional air quality forecast model on surface ozone predictions during stratospheric intrusions

    NASA Astrophysics Data System (ADS)

    Pendlebury, Diane; Gravel, Sylvie; Moran, Michael D.; Lupu, Alexandru

    2018-02-01

    A regional air quality forecast model, GEM-MACH, is used to examine the conditions under which a limited-area air quality model can accurately forecast near-surface ozone concentrations during stratospheric intrusions. Periods in 2010 and 2014 with known stratospheric intrusions over North America were modelled using four different ozone lateral boundary conditions obtained from a seasonal climatology, a dynamically-interpolated monthly climatology, global air quality forecasts, and global air quality reanalyses. It is shown that the mean bias and correlation in surface ozone over the course of a season can be improved by using time-varying ozone lateral boundary conditions, particularly through the correct assignment of stratospheric vs. tropospheric ozone along the western lateral boundary (for North America). Part of the improvement in surface ozone forecasts results from improvements in the characterization of near-surface ozone along the lateral boundaries that then directly impact surface locations near the boundaries. However, there is an additional benefit from the correct characterization of the location of the tropopause along the western lateral boundary such that the model can correctly simulate stratospheric intrusions and their associated exchange of ozone from stratosphere to troposphere. Over a three-month period in spring 2010, the mean bias was seen to improve by as much as 5 ppbv and the correlation by 0.1 depending on location, and on the form of the chemical lateral boundary condition.

  19. Measuring and Benchmarking Technical Efficiency of Public Hospitals in Tianjin, China: A Bootstrap-Data Envelopment Analysis Approach.

    PubMed

    Li, Hao; Dong, Siping

    2015-01-01

    China has long been stuck applying traditional data envelopment analysis (DEA) models to measure the technical efficiency of public hospitals without bias correction of efficiency scores. In this article, we introduce the Bootstrap-DEA approach from the international literature to analyze the technical efficiency of public hospitals in Tianjin (China) and improve the application of this method for benchmarking and inter-organizational learning. We find that the bias-corrected efficiency scores of Bootstrap-DEA differ significantly from those of the traditional Banker, Charnes, and Cooper (BCC) model, which means that Chinese researchers need to update their DEA models for more rigorous calculation of hospital efficiency scores. Our research helps narrow the gap between China and the international community in measuring and improving the relative efficiency of hospitals. We suggest that Bootstrap-DEA be widely applied in future research to measure the relative efficiency and productivity of Chinese hospitals, to better support efficiency improvement and related decision making. © The Author(s) 2015.
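
Bootstrap-DEA proper requires solving a linear program per decision-making unit; the sketch below illustrates only the generic bootstrap bias-correction step, with the sample maximum standing in for a frontier estimate (all data invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented single-output setting: the true maximum attainable output is 10,
# and the sample maximum (like a DEA frontier built from observed units)
# systematically underestimates it.
outputs = rng.uniform(0.0, 10.0, size=100)
theta_hat = outputs.max()

# Bootstrap the estimator and apply the standard bias correction:
# bias ~= mean(bootstrap estimates) - original estimate.
B = 2000
boot = np.array([rng.choice(outputs, size=outputs.size, replace=True).max()
                 for _ in range(B)])
bias = boot.mean() - theta_hat
theta_corrected = theta_hat - bias    # equivalently 2*theta_hat - boot.mean()
```

Because every bootstrap maximum is at most the sample maximum, the estimated bias is negative and the corrected frontier estimate moves outward, mirroring how Bootstrap-DEA pushes efficiency scores away from the naive BCC frontier.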

  20. An experimental verification of laser-velocimeter sampling bias and its correction

    NASA Technical Reports Server (NTRS)

    Johnson, D. A.; Modarress, D.; Owen, F. K.

    1982-01-01

    The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.
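
One standard way to interpret the sampling statistics so that the velocity bias cancels is inverse-velocity weighting (this is a common correction for individual-realization laser-velocimeter data, not necessarily the exact scheme of the paper; the flow statistics below are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented "true" velocity population in the flow (m/s), kept positive
# to make the 1-D sketch simple.
u_true = rng.normal(10.0, 3.0, size=200_000)
u_true = u_true[u_true > 0.5]

# Individual-realization sampling: a particle's arrival rate is proportional
# to its speed, so fast particles are over-represented in the measurements.
p = u_true / u_true.sum()
measured = rng.choice(u_true, size=50_000, replace=True, p=p)

naive_mean = measured.mean()                      # biased high
w = 1.0 / measured                                # inverse-velocity weights
corrected_mean = (w * measured).sum() / w.sum()   # weighted (bias-corrected) mean
```

Since the measured density is proportional to u·f(u), weighting each realization by 1/u restores expectations under f(u), so the weighted mean recovers the true ensemble mean.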

  1. Length bias correction in gene ontology enrichment analysis using logistic regression.

    PubMed

    Mi, Gu; Di, Yanming; Emerson, Sarah; Cumbie, Jason S; Chang, Jeff H

    2012-01-01

    When assessing differential gene expression from RNA sequencing data, commonly used statistical tests tend to have greater power to detect differential expression of genes encoding longer transcripts. This phenomenon, called "length bias", will influence subsequent analyses such as Gene Ontology enrichment analysis. In the presence of length bias, Gene Ontology categories that include longer genes are more likely to be identified as enriched. These categories, however, are not necessarily biologically more relevant. We show that one can effectively adjust for length bias in Gene Ontology analysis by including transcript length as a covariate in a logistic regression model. The logistic regression model makes the statistical issue underlying length bias more transparent: transcript length becomes a confounding factor when it correlates with both the Gene Ontology membership and the significance of the differential expression test. The inclusion of the transcript length as a covariate allows one to investigate the direct correlation between the Gene Ontology membership and the significance of testing differential expression, conditional on the transcript length. We present both real and simulated data examples to show that the logistic regression approach is simple, effective, and flexible.
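
The length-as-covariate idea can be sketched with a simulated dataset in which transcript length confounds the DE/GO-membership association (all effect sizes invented); the adjusted model recovers a near-zero direct DE coefficient:

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logistic_fit(X, y, iters=50):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ beta)
        W = p * (1.0 - p)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    return beta

# Invented simulation: transcript length drives BOTH the chance a gene is
# called differentially expressed (DE) and its GO-category membership, so an
# unadjusted model sees a spurious DE/membership association.
n = 20_000
log_len = rng.normal(0.0, 1.0, n)
de = (rng.random(n) < sigmoid(-1.0 + 1.0 * log_len)).astype(float)
member = (rng.random(n) < sigmoid(-0.5 + 0.8 * log_len)).astype(float)

beta_unadj = logistic_fit(de[:, None], member)                    # member ~ DE
beta_adj = logistic_fit(np.column_stack([de, log_len]), member)   # member ~ DE + length
```

The unadjusted DE coefficient is strongly positive purely through the length pathway; adding `log_len` as a covariate drives it toward zero, which is exactly the adjustment the abstract describes.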

  2. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    NASA Astrophysics Data System (ADS)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  3. Bias Correction for Assimilation of Retrieved AIRS Profiles of Temperature and Humidity

    NASA Technical Reports Server (NTRS)

    Blankenship, Clay; Zavodsky, Bradley; Blackwell, William

    2014-01-01

    The Atmospheric Infrared Sounder (AIRS) is a hyperspectral radiometer aboard NASA's Aqua satellite designed to measure atmospheric profiles of temperature and humidity. AIRS retrievals are assimilated into the Weather Research and Forecasting (WRF) model over the North Pacific for some cases involving "atmospheric rivers". These events bring a large flux of water vapor to the west coast of North America and often lead to extreme precipitation in the coastal mountain ranges. An advantage of assimilating retrievals rather than radiances is that information in partly cloudy fields of view can be used. Two different Level 2 AIRS retrieval products are compared: the Version 6 AIRS Science Team standard retrievals and a neural net retrieval from MIT. Before assimilation, a bias correction is applied to adjust each layer of retrieved temperature and humidity so the layer mean values agree with a short-term model climatology. WRF runs assimilating each of the products are compared against each other and against a control run with no assimilation. Forecasts are verified against ERA reanalyses.
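
The layer-mean bias correction described above amounts to shifting each layer so its ensemble mean matches the climatology; a minimal sketch with invented profiles:

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented retrieved temperature profiles (n_profiles x n_layers, in K)
# carrying a layer-dependent systematic bias relative to a model climatology.
n_profiles, n_layers = 400, 20
climatology_mean = np.linspace(220.0, 290.0, n_layers)
layer_bias = rng.normal(0.0, 1.5, n_layers)
retrieved = climatology_mean + layer_bias + rng.normal(0.0, 2.0, (n_profiles, n_layers))

# Bias correction: shift each layer so its ensemble mean matches the climatology.
correction = climatology_mean - retrieved.mean(axis=0)
corrected = retrieved + correction
```

The additive shift removes the systematic layer bias while leaving the profile-to-profile variability, which carries the assimilable information, untouched.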

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scolnic, D.; Kessler, R., E-mail: dscolnic@kicp.uchicago.edu, E-mail: kessler@kicp.uchicago.edu

    Simulations of Type Ia supernovae (SNe Ia) surveys are a critical tool for correcting biases in the analysis of SNe Ia to infer cosmological parameters. Large-scale Monte Carlo simulations include a thorough treatment of observation history, measurement noise, intrinsic scatter models, and selection effects. In this Letter, we improve simulations with a robust technique to evaluate the underlying populations of SN Ia color and stretch that correlate with luminosity. In typical analyses, the standardized SN Ia brightness is determined from linear "Tripp" relations between the light curve color and luminosity and between stretch and luminosity. However, this solution produces Hubble residual biases because intrinsic scatter and measurement noise result in measured color and stretch values that do not follow the Tripp relation. We find a 10σ bias (up to 0.3 mag) in Hubble residuals versus color and a 5σ bias (up to 0.2 mag) in Hubble residuals versus stretch in a joint sample of 920 spectroscopically confirmed SNe Ia from PS1, SNLS, SDSS, and several low-z surveys. After we determine the underlying color and stretch distributions, we use simulations to predict and correct the biases in the data. We show that removing these biases has a small impact on the low-z sample, but reduces the intrinsic scatter σ_int from 0.101 to 0.083 in the combined PS1, SNLS, and SDSS sample. Past estimates of the underlying populations were too broad, leading to a small bias in the equation of state of dark energy w of Δw = 0.005.

  5. Improving contact layer patterning using SEM contour based etch model

    NASA Astrophysics Data System (ADS)

    Weisbuch, François; Lutich, Andrey; Schatz, Jirka; Hertzsch, Tino; Moll, Hans-Peter

    2016-10-01

    The patterning of the contact layer is modulated by strong etch effects that are highly dependent on the geometry of the contacts. Such litho-etch biases need to be corrected to ensure good pattern fidelity, but aggressive designs contain complex shapes that can hardly be compensated with an etch bias table and are difficult to characterize with standard CD metrology. In this work we implement a model-based etch compensation method able to deal with any contact configuration. With the help of SEM contours, it was possible to obtain reliable 2D measurements that were particularly helpful for calibrating the etch model. The selection of calibration structures was optimized in combination with the model form to achieve an overall RMS error of 3 nm, allowing the implementation of the model in production.

  6. Correcting negatively biased refractivity below ducts in GNSS radio occultation: an optimal estimation approach towards improving planetary boundary layer (PBL) characterization

    NASA Astrophysics Data System (ADS)

    Wang, Kuo-Nung; de la Torre Juárez, Manuel; Ao, Chi O.; Xie, Feiqin

    2017-12-01

    Global Navigation Satellite System (GNSS) radio occultation (RO) measurements are promising in sensing the vertical structure of the Earth's planetary boundary layer (PBL). However, large refractivity changes near the top of the PBL can cause ducting and lead to a negative bias in the retrieved refractivity within the PBL (below ~2 km). To remove the bias, a reconstruction method assuming a linear refractivity structure inside the ducting layer has been proposed by Xie et al. (2006). While the negative bias can be reduced drastically, as demonstrated in simulation, the lack of a high-quality surface refractivity constraint makes application of the method to real RO data difficult. In this paper, we use widely available precipitable water (PW) satellite observations as the external constraint for the bias correction. A new framework is proposed to incorporate optimization into the RO reconstruction retrievals in the presence of ducting conditions. The new method uses optimal estimation to select the best refractivity solution whose PW and PBL height best match the externally retrieved PW and the known a priori states, respectively. Near-coincident PW retrievals from the AMSR-E microwave radiometer are used as the external observational constraint. The new reconstruction method is tested on both simulated GNSS-RO profiles and actual GNSS-RO data. Our results show that the proposed method can greatly reduce the negative refractivity bias when compared to the traditional Abel inversion.

  7. Modeling error in experimental assays using the bootstrap principle: Understanding discrepancies between assays using different dispensing technologies

    PubMed Central

    Hanson, Sonya M.; Ekins, Sean; Chodera, John D.

    2015-01-01

    All experimental assay data contains error, but the magnitude, type, and primary origin of this error is often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
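
To illustrate how per-transfer errors compound along a dilution series, here is a minimal simulation in the spirit of the paper's bootstrap-style modeling (the CV and bias values are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_dilution_series(c0=100.0, steps=8, factor=2.0,
                             cv=0.02, bias=0.01, n_sims=10_000):
    """Simulate a serial dilution in which every transfer has proportional
    imprecision (coefficient of variation `cv`) and a small systematic
    volume bias; both error parameters are invented for illustration."""
    conc = np.full(n_sims, c0)
    for _ in range(steps):
        # each transfer multiplies the concentration by ~1/factor, with error;
        # the per-step errors compound multiplicatively across the series
        conc = conc * (1.0 + bias) / factor * (1.0 + rng.normal(0.0, cv, n_sims))
    return conc

final = simulate_dilution_series()
nominal = 100.0 / 2.0 ** 8   # error-free final concentration
```

Even a 1% systematic transfer bias shifts the final concentration by roughly (1.01)^8 ≈ 8%, and the random imprecision grows roughly as sqrt(steps)·cv, which is exactly the kind of amplification the abstract warns about.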

  8. Electrostatic focal spot correction for x-ray tubes operating in strong magnetic fields.

    PubMed

    Lillaney, Prasheel; Shin, Mihye; Hinshaw, Waldo; Fahrig, Rebecca

    2014-11-01

    A close proximity hybrid x-ray/magnetic resonance (XMR) imaging system offers several critical advantages over current XMR system installations that have large separation distances (∼5 m) between the imaging fields of view. The two imaging systems can be placed in close proximity to each other if an x-ray tube can be designed to be immune to the magnetic fringe fields outside of the MR bore. One of the major obstacles to robust x-ray tube design is correcting for the effects of the MR fringe field on the x-ray tube focal spot. Any fringe field component orthogonal to the x-ray tube electric field leads to electron drift altering the path of the electron trajectories. The method proposed in this study to correct for the electron drift utilizes an external electric field in the direction of the drift. The electric field is created using two electrodes that are positioned adjacent to the cathode. These electrodes are biased with positive and negative potential differences relative to the cathode. The design of the focusing cup assembly is constrained primarily by the strength of the MR fringe field and high voltage standoff distances between the anode, cathode, and the bias electrodes. From these constraints, a focusing cup design suitable for the close proximity XMR system geometry is derived, and a finite element model of this focusing cup geometry is simulated to demonstrate efficacy. A Monte Carlo simulation is performed to determine any effects of the modified focusing cup design on the output x-ray energy spectrum. An orthogonal fringe field magnitude of 65 mT can be compensated for using bias voltages of +15 and -20 kV. These bias voltages are not sufficient to completely correct for larger orthogonal field magnitudes. Using active shielding coils in combination with the bias electrodes provides complete correction at an orthogonal field magnitude of 88.1 mT. 
Introducing small fields (<10 mT) parallel to the x-ray tube electric field in addition to the orthogonal field does not affect the electrostatic correction technique. However, rotation of the x-ray tube by 30° toward the MR bore increases the parallel magnetic field magnitude (∼72 mT). The presence of this larger parallel field along with the orthogonal field leads to incomplete correction. Monte Carlo simulations demonstrate that the mean energy of the x-ray spectrum is not noticeably affected by the electrostatic correction, but the output flux is reduced by 7.5%. The maximum orthogonal magnetic field magnitude that can be compensated for using the proposed design is 65 mT. Larger orthogonal field magnitudes cannot be completely compensated for because a pure electrostatic approach is limited by the dielectric strength of the vacuum inside the x-ray tube insert. The electrostatic approach also suffers from limitations when there are strong magnetic fields in both the orthogonal and parallel directions because the electrons prefer to stay aligned with the parallel magnetic field. These challenging field conditions can be addressed by using a hybrid correction approach that utilizes both active shielding coils and biasing electrodes.

  9. Electrostatic focal spot correction for x-ray tubes operating in strong magnetic fields

    PubMed Central

    Lillaney, Prasheel; Shin, Mihye; Hinshaw, Waldo; Fahrig, Rebecca

    2014-01-01

    Purpose: A close proximity hybrid x-ray/magnetic resonance (XMR) imaging system offers several critical advantages over current XMR system installations that have large separation distances (∼5 m) between the imaging fields of view. The two imaging systems can be placed in close proximity to each other if an x-ray tube can be designed to be immune to the magnetic fringe fields outside of the MR bore. One of the major obstacles to robust x-ray tube design is correcting for the effects of the MR fringe field on the x-ray tube focal spot. Any fringe field component orthogonal to the x-ray tube electric field leads to electron drift altering the path of the electron trajectories. Methods: The method proposed in this study to correct for the electron drift utilizes an external electric field in the direction of the drift. The electric field is created using two electrodes that are positioned adjacent to the cathode. These electrodes are biased with positive and negative potential differences relative to the cathode. The design of the focusing cup assembly is constrained primarily by the strength of the MR fringe field and high voltage standoff distances between the anode, cathode, and the bias electrodes. From these constraints, a focusing cup design suitable for the close proximity XMR system geometry is derived, and a finite element model of this focusing cup geometry is simulated to demonstrate efficacy. A Monte Carlo simulation is performed to determine any effects of the modified focusing cup design on the output x-ray energy spectrum. Results: An orthogonal fringe field magnitude of 65 mT can be compensated for using bias voltages of +15 and −20 kV. These bias voltages are not sufficient to completely correct for larger orthogonal field magnitudes. Using active shielding coils in combination with the bias electrodes provides complete correction at an orthogonal field magnitude of 88.1 mT. 
Introducing small fields (<10 mT) parallel to the x-ray tube electric field in addition to the orthogonal field does not affect the electrostatic correction technique. However, rotation of the x-ray tube by 30° toward the MR bore increases the parallel magnetic field magnitude (∼72 mT). The presence of this larger parallel field along with the orthogonal field leads to incomplete correction. Monte Carlo simulations demonstrate that the mean energy of the x-ray spectrum is not noticeably affected by the electrostatic correction, but the output flux is reduced by 7.5%. Conclusions: The maximum orthogonal magnetic field magnitude that can be compensated for using the proposed design is 65 mT. Larger orthogonal field magnitudes cannot be completely compensated for because a pure electrostatic approach is limited by the dielectric strength of the vacuum inside the x-ray tube insert. The electrostatic approach also suffers from limitations when there are strong magnetic fields in both the orthogonal and parallel directions because the electrons prefer to stay aligned with the parallel magnetic field. These challenging field conditions can be addressed by using a hybrid correction approach that utilizes both active shielding coils and biasing electrodes. PMID:25370658

  10. Quantifying sources of bias in National Healthcare Safety Network laboratory-identified Clostridium difficile infection rates.

    PubMed

    Haley, Valerie B; DiRienzo, A Gregory; Lutterloh, Emily C; Stricof, Rachel L

    2014-01-01

To assess the effect of multiple sources of bias on state- and hospital-specific National Healthcare Safety Network (NHSN) laboratory-identified Clostridium difficile infection (CDI) rates, a sensitivity analysis was performed on a total of 124 New York hospitals in 2010. New York NHSN CDI events from audited hospitals were matched to New York hospital discharge billing records to obtain additional information on patient age, length of stay, and previous hospital discharges. "Corrected" hospital-onset (HO) CDI rates were calculated after (1) correcting inaccurate case reporting found during audits, (2) incorporating knowledge of laboratory results from outside hospitals, (3) excluding days when patients were not at risk from the denominator of the rates, and (4) adjusting for patient age. Data sets were simulated with each of these sources of bias reintroduced individually and combined. The simulated rates were compared with the corrected rates. Performance (ie, better, worse, or average compared with the state average) was categorized, and misclassification compared with the corrected data set was measured. Counting days patients were not at risk in the denominator reduced the state HO rate by 45% and resulted in 8% misclassification. Age adjustment and reporting errors also shifted rates (7% and 6% misclassification, respectively). Changing the NHSN protocol to require reporting of age-stratified patient-days and adjusting for patient-days at risk would improve comparability of rates across hospitals. Further research is needed to validate the risk-adjustment model before these data should be used as hospital performance measures.

  11. Evaluations of high-resolution dynamically downscaled ensembles over the contiguous United States (Climate Dynamics)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zobel, Zachary; Wang, Jiali; Wuebbles, Donald J.

    This study uses the Weather Research and Forecasting (WRF) model to evaluate the performance of six dynamically downscaled decadal historical simulations at 12-km resolution for a large domain (7200 x 6180 km) that covers most of North America. The initial and boundary conditions are from three global climate models (GCMs) and one reanalysis dataset. The GCMs employed in this study are the Geophysical Fluid Dynamics Laboratory Earth System Model with Generalized Ocean Layer Dynamics component; the Community Climate System Model, version 4; and the Hadley Centre Global Environment Model, version 2-Earth System. The reanalysis data are from the National Centers for Environmental Prediction-US Department of Energy Reanalysis II. We analyze the effects of bias correcting the lateral boundary conditions and the effects of spectral nudging. We evaluate the model performance for seven surface variables and four upper-atmospheric variables based on their climatology and extremes for seven subregions across the United States. The results indicate that a simulation's performance depends on both location and the features/variables being tested. We find that the use of bias correction and/or nudging is beneficial in many situations, but employing these when running the RCM is not always an improvement when compared to the reference data. The use of an ensemble mean or median leads to better performance in measuring the climatology, but is significantly biased for the extremes, showing much larger differences from the reference data than individual GCM-driven simulations. This study provides a comprehensive evaluation of these historical model runs in order to inform decisions when making future projections.

  12. On CD-AFM bias related to probe bending

    NASA Astrophysics Data System (ADS)

    Ukraintsev, V. A.; Orji, N. G.; Vorburger, T. V.; Dixson, R. G.; Fu, J.; Silver, R. M.

    2012-03-01

    Critical Dimension AFM (CD-AFM) is a widely used reference metrology. To characterize modern semiconductor devices, very small and flexible probes, often 15 nm to 20 nm in diameter, are now frequently used. Several recent publications have reported on uncontrolled and significant probe-to-probe bias variation during linewidth and sidewall angle measurements [1,2]. Results obtained in this work suggest that probe bending can be on the order of several nanometers and thus can potentially explain much of the observed CD-AFM probe-to-probe bias variation. We have developed and experimentally tested one-dimensional (1D) and two-dimensional (2D) models to describe the bending of cylindrical probes. An earlier 1D bending model reported by Watanabe et al. [3] was refined. Contributions from several new phenomena were considered, including probe misalignment, diameter variation near the carbon nanotube (CNT) tip apex, probe bending before snapping, the distributed van der Waals-London force, etc. A methodology for extracting the Hamaker probe-surface interaction energy from experimental probe bending data was developed. To overcome limitations of the 1D model, a new 2D distributed force (DF) model was developed. Comparison of the new model with the 1D single point force (SPF) model revealed about a 27 % difference in probe bending bias between the two. A simple linear relation between the biases predicted by the 1D SPF and 2D DF models was found. This finding simplifies use of the advanced 2D DF model of probe bending in various CD-AFM applications. New 2D and three-dimensional (3D) CD-AFM data analysis software is needed to take full advantage of the new bias correction modeling capabilities.
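As a rough illustration of why such small probes bend appreciably, consider the textbook cantilever model (our own order-of-magnitude sketch with assumed values for diameter, length, modulus, and force; not the refined 1D/2D models of the paper). A point force F at the free end deflects the tip by F L^3 / (3 E I), while a uniformly distributed load of the same total magnitude gives F L^3 / (8 E I), so where along the probe the tip-surface force acts changes the predicted bending:

```python
import math

def cylinder_I(d):
    """Second moment of area of a solid cylinder of diameter d."""
    return math.pi * d**4 / 64.0

def tip_deflection_point(F, L, E, d):
    """End deflection of a cantilever under a point force F at the free end."""
    return F * L**3 / (3.0 * E * cylinder_I(d))

# Assumed illustrative values: a 20 nm diameter, 1 um long CNT probe,
# Young's modulus ~1 TPa, tip-surface force ~0.1 nN.
delta = tip_deflection_point(0.1e-9, 1e-6, 1e12, 20e-9)  # roughly 4 nm
```

With these assumed values the deflection comes out at a few nanometers, consistent in order of magnitude with the probe-to-probe bias variation the abstract describes.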

  13. The Gaussian streaming model and convolution Lagrangian effective field theory

    DOE PAGES

    Vlah, Zvonimir; Castorina, Emanuele; White, Martin

    2016-12-05

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  14. Unbiased estimates of galaxy scaling relations from photometric redshift surveys

    NASA Astrophysics Data System (ADS)

    Rossi, Graziano; Sheth, Ravi K.

    2008-06-01

    Many physical properties of galaxies correlate with one another, and these correlations are often used to constrain galaxy formation models. Such correlations include the colour-magnitude relation, the luminosity-size relation, the fundamental plane, etc. However, the transformation from observable (e.g. angular size, apparent brightness) to physical quantity (physical size, luminosity) is often distance dependent. Noise in the distance estimate will lead to biased estimates of these correlations, thus compromising the ability of photometric redshift surveys to constrain galaxy formation models. We describe two methods which can remove this bias. One is a generalization of the Vmax method, and the other is a maximum-likelihood approach. We illustrate their effectiveness by studying the size-luminosity relation in a mock catalogue, although both methods can be applied to other scaling relations as well. We show that if one simply uses photometric redshifts one obtains a biased relation; our methods correct for this bias and recover the true relation.
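The Vmax idea the abstract generalizes can be sketched as follows: in a flux-limited survey, each galaxy is weighted by the inverse of the volume within which it would still have been detected, which undoes the distance-dependent selection. The code below is a minimal, hypothetical illustration (Euclidean volumes, no K-corrections, invented function names), not the authors' generalized estimator:

```python
import numpy as np

def luminosity_function(M, m_lim, omega_sr, bins):
    """Binned 1/Vmax estimate of a luminosity function.

    M        : array of absolute magnitudes
    m_lim    : survey apparent-magnitude limit
    omega_sr : survey solid angle in steradians
    bins     : absolute-magnitude bin edges
    """
    # Maximum luminosity distance (pc) at which each galaxy stays above
    # the flux limit: m_lim - M = 5 log10(d_pc) - 5.
    d_max_pc = 10.0 ** (0.2 * (m_lim - M) + 1.0)
    d_max_mpc = d_max_pc * 1e-6
    v_max = (omega_sr / 3.0) * d_max_mpc ** 3          # Euclidean volume, Mpc^3
    # Each galaxy contributes 1/Vmax to the number density in its bin.
    phi, _ = np.histogram(M, bins=bins, weights=1.0 / v_max)
    return phi / np.diff(bins)                          # per mag per Mpc^3
```

In practice the weights would use cosmological comoving volumes and, as the paper stresses, must account for photometric-redshift errors rather than treating distances as exact.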

  15. The Gaussian streaming model and convolution Lagrangian effective field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlah, Zvonimir; Castorina, Emanuele; White, Martin, E-mail: zvlah@stanford.edu, E-mail: ecastorina@berkeley.edu, E-mail: mwhite@berkeley.edu

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  16. Generation of Unbiased Ionospheric Corrections in Brazilian Region for GNSS positioning based on SSR concept

    NASA Astrophysics Data System (ADS)

    Monico, J. F. G.; De Oliveira, P. S., Jr.; Morel, L.; Fund, F.; Durand, S.; Durand, F.

    2017-12-01

    Mitigation of ionospheric effects on GNSS (Global Navigation Satellite System) signals is very challenging, especially for GNSS positioning applications based on the SSR (State Space Representation) concept, which requires knowledge of spatially correlated errors at a considerable accuracy level (centimeter). The presence of satellite and receiver hardware biases in GNSS measurements hinders the proper estimation of ionospheric corrections, reducing their physical meaning. This problem can lead to ionospheric corrections biased by several meters, often with negative values, which is physically impossible. In this contribution, we discuss a strategy to obtain SSR ionospheric corrections based on GNSS measurements from CORS (Continuous Operation Reference Stations) networks with minimal presence of hardware biases and, consequently, physical meaning. Preliminary results are presented on the generation and application of such corrections for simulated users located in the Brazilian region under a high level of ionospheric activity.
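The role of the hardware biases can be seen in the standard dual-frequency geometry-free code combination, from which slant TEC is usually extracted. The sketch below is the textbook formula for GPS L1/L2, not the paper's method; the differential code bias (DCB) arguments are placeholders for exactly the satellite and receiver biases that, if left in, shift the corrections by meters:

```python
# GPS carrier frequencies, Hz
F1 = 1575.42e6  # L1
F2 = 1227.60e6  # L2

def slant_tec(p1_m, p2_m, sat_dcb_m=0.0, rcv_dcb_m=0.0):
    """Slant TEC in TECU from pseudoranges p1, p2 (meters).

    The geometry-free combination P2 - P1 cancels geometry and clocks,
    leaving the dispersive ionospheric delay plus hardware biases, which
    must be removed for the result to have physical meaning.
    """
    geometry_free = (p2_m - p1_m) - sat_dcb_m - rcv_dcb_m
    # P2 - P1 = 40.3 * TEC * (f1^2 - f2^2) / (f1^2 * f2^2)
    k = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))  # meters -> electrons/m^2
    return k * geometry_free / 1e16                  # 1 TECU = 1e16 el/m^2
```

With uncorrected DCBs of a few nanoseconds (roughly a meter of range), the inferred TEC can be off by several TECU and even turn negative, which is the physically impossible behavior the abstract mentions.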

  17. Bias correction of nutritional status estimates when reported age is used for calculating WHO indicators in children under five years of age.

    PubMed

    Quezada, Amado D; García-Guerra, Armando; Escobar, Leticia

    2016-06-01

    To assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and validated in a large population-based survey sample of Mexican preschool children. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.
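The symmetry assumption suggests a very simple correction, sketched below as our own reading of the idea rather than the paper's exact algorithm: if only completed age in months is reported and ages are symmetrically distributed within each month, assigning every child the midpoint of the reported month is unbiased on average:

```python
def exact_age_days(reported_months, days_per_month=30.4375):
    """Approximate exact age in days from reported completed months.

    Adding 0.5 months assigns the midpoint of the reported month, which
    is unbiased under within-month symmetry of the age distribution.
    30.4375 is the mean Gregorian month length (365.25 / 12).
    """
    return (reported_months + 0.5) * days_per_month

# e.g. a child reported as "24 months" is assigned about 745.7 days
```

The corrected ages would then feed the standard WHO growth-standard z-score calculation; using the raw completed-month age instead systematically understates age, which is what drives the stunting and overweight biases the abstract reports.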

  18. Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference

    PubMed Central

    Shringarpure, Suyash; Xing, Eric P.

    2014-01-01

    Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that only a small number of individuals from any geographical/ethnic population are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models for population structure and also propose a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351

  19. Bias correction method for climate change impact assessment at a basin scale

    NASA Astrophysics Data System (ADS)

    Nyunt, C.; Jaranilla-sanchez, P. A.; Yamamoto, A.; Nemoto, T.; Kitsuregawa, M.; Koike, T.

    2012-12-01

    Climate change impact studies are mainly based on general circulation model (GCM) outputs, and such studies play an important role in defining suitable adaptation strategies for a resilient environment at the basin scale. For this purpose, this study summarizes how to select appropriate GCMs so as to reduce the uncertainty in the analysis. The approach was applied to the Pampanga, Angat and Kaliwa rivers in Luzon Island, the main island of the Philippines; these three river basins play important roles in irrigation water supply and as municipal water sources for Metro Manila. Based on GCM scores for the seasonal evolution of the Asian summer monsoon and for the spatial correlation and root mean squared error of atmospheric variables over the region, six GCMs were finally chosen. Next, we develop a complete, efficient and comprehensive statistical bias correction scheme covering extreme events, normal rainfall and the frequency of dry periods. Owing to the coarse resolution and parameterization schemes of GCMs, underestimation of extreme rainfall, too many rain days with low intensity, and poor representation of local seasonality are well-known GCM biases. Extreme rainfall has unusual characteristics and should be treated separately: estimates of maximum extreme rainfall are crucial for the planning and design of infrastructure in a river basin, and developing countries, with limited technical, financial and management resources for implementing adaptation measures, need detailed information on drought and flood for the near future. Traditionally, extremes have been analyzed using an annual maximum series (AMS) fitted to a Gumbel or lognormal distribution; the drawback is the loss of the second, third, etc., largest rainfalls. Another approach is a partial duration series (PDS), constructed from the values above a selected threshold, which permits more than one event per year.
    The generalized Pareto distribution (GPD) has been used to model the PDS, i.e., the series of excesses over a threshold. In this study, the lowest value of the observed AMS is selected as the threshold, and values of the same frequency in the corresponding GCM gridded series are treated as extremes. After fitting the GP distribution, bias-corrected GCM extremes are obtained by applying the inverse distribution function of the observed extremes. The results show that this removes the bias effectively. For the projected climate, the same transfer function between historical observations and the GCM is applied. Moreover, frequency analysis of maximum extreme intensity was carried out for validation and then extended to the near future using the same function as for the past. To fix the error in the number of no-rain days in the GCM, rank-order statistics are used so that the GCM matches the frequency of wet days at the observed station; below this rank, GCM output is set to zero, and the same threshold is identified for the future projection. Normal rainfall is classified as lying between the extreme threshold and the no-rain-day threshold. We assume monthly normal rainfall follows a gamma distribution; we then map the CDF of GCM normal rainfall onto the station's CDF in each month to obtain bias-corrected rainfall. In summary, the biases of the GCMs have been addressed efficiently and validated at the point scale by seasonal climatology and at all stations to evaluate downscaled rainfall performance. The results show that the bias correction and downscaling scheme is good enough for climate impact studies.
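The three-regime scheme described above (dry-day frequency matching by rank, GPD quantile mapping for extremes, gamma-to-gamma CDF mapping for normal rainfall) can be sketched as follows. This is our paraphrase of the procedure, not the authors' code; it fits a single gamma distribution rather than one per month, and the 95th-percentile extreme threshold is an illustrative choice rather than the AMS-based threshold the study uses:

```python
import numpy as np
from scipy import stats

def bias_correct(gcm, obs, extreme_q=0.95):
    """Two-regime quantile mapping of daily GCM rainfall onto observations."""
    gcm = np.asarray(gcm, float)
    obs = np.asarray(obs, float)
    out = np.empty_like(gcm)

    # 1) Dry days: zero out the smallest GCM values so the wet-day
    #    frequency matches the observed station.
    dry_frac = np.mean(obs <= 0.0)
    wet_thresh = np.quantile(gcm, dry_frac)
    dry = gcm <= wet_thresh
    out[dry] = 0.0

    # 2) Extremes: fit GPDs to excesses over matching-frequency
    #    thresholds and map GCM quantiles onto observed quantiles.
    u_obs = np.quantile(obs[obs > 0.0], extreme_q)
    u_gcm = np.quantile(gcm[~dry], extreme_q)
    ext = (~dry) & (gcm > u_gcm)
    c_g, _, s_g = stats.genpareto.fit(gcm[ext] - u_gcm, floc=0.0)
    c_o, _, s_o = stats.genpareto.fit(obs[obs > u_obs] - u_obs, floc=0.0)
    p = stats.genpareto.cdf(gcm[ext] - u_gcm, c_g, scale=s_g)
    p = np.minimum(p, 1.0 - 1e-9)        # guard against infinite quantiles
    out[ext] = u_obs + stats.genpareto.ppf(p, c_o, scale=s_o)

    # 3) Normal rainfall: gamma-to-gamma CDF mapping.
    mid = (~dry) & (~ext)
    a_g, _, b_g = stats.gamma.fit(gcm[mid], floc=0.0)
    o_mid = obs[(obs > 0.0) & (obs <= u_obs)]
    a_o, _, b_o = stats.gamma.fit(o_mid, floc=0.0)
    p = np.minimum(stats.gamma.cdf(gcm[mid], a_g, scale=b_g), 1.0 - 1e-9)
    out[mid] = stats.gamma.ppf(p, a_o, scale=b_o)
    return out
```

For future projections the same fitted transfer functions would be applied to the projected GCM series, which is exactly the stationarity assumption the review in the header questions.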

  20. Non-Gaussian bias: insights from discrete density peaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch

    2013-09-01

    Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case of standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.
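For reference, the peak-background split prediction the abstract says simulations support is usually quoted, for local-type non-Gaussianity, as a scale-dependent correction to the linear halo bias (our transcription of the standard result, not an equation from this record):

```latex
\Delta b(k) \;=\; 2\, f_{\rm NL}\, (b_1 - 1)\, \delta_c \,
\frac{3\,\Omega_m H_0^2}{2\,c^2\, k^2\, T(k)\, D(z)} ,
```

where $b_1$ is the Gaussian linear bias, $\delta_c \approx 1.686$ the spherical-collapse threshold, $T(k)$ the matter transfer function and $D(z)$ the growth factor. The amplitude proportional to $(b_1 - 1)$, i.e. to the Lagrangian bias, is precisely the feature that standard Lagrangian local bias fails to reproduce.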

  1. Meta-analysis of the effect of road work zones on crash occurrence.

    PubMed

    Theofilatos, Athanasios; Ziakopoulos, Apostolos; Papadimitriou, Eleonora; Yannis, George; Diamandouros, Konstantinos

    2017-11-01

    There is strong evidence that work zones pose increased risk of crashes and injuries. The two most common risk factors associated with increased crash frequencies are work zone duration and length. However, relevant research on the topic is relatively limited. For that reason, this paper presents formal meta-analyses of studies that have estimated the relationship between the number of crashes and work zone duration and length, in order to provide overall estimates of those effects on crash frequencies. All studies presented in this paper are crash prediction models with similar specifications. According to the meta-analyses, and after correcting for publication bias where appropriate, the summary estimates of the regression coefficients were found to be 0.1703 for duration and 0.862 for length. These effects were significant for length but not for duration; however, the overall estimate for duration was significant before correcting for publication bias. Separate meta-analyses of the studies examining both duration and length were also carried out in order to obtain rough estimates of the combined effects. The estimate for duration was found to be 0.953, while that for length was 0.847. Similar to the previous meta-analyses, the effect of duration after correcting for publication bias is not significant, while the effect of length was significant at the 95% level. Meta-regression findings indicate that the main factors influencing the overall estimates of the beta coefficients are study year and region for duration, and study year and model specification for length. Copyright © 2017 Elsevier Ltd. All rights reserved.
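The summary estimates above come from pooling per-study regression coefficients. The standard fixed-effect, inverse-variance pooling step can be sketched as follows (a generic textbook sketch, not the authors' exact procedure; publication-bias corrections such as trim-and-fill are a separate step applied before or after pooling):

```python
import numpy as np

def pooled_estimate(betas, ses):
    """Fixed-effect pooled coefficient and its standard error.

    Each study's coefficient is weighted by the inverse of its
    sampling variance, so precise studies dominate the summary.
    """
    betas = np.asarray(betas, float)
    w = 1.0 / np.asarray(ses, float) ** 2
    beta = np.sum(w * betas) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return beta, se
```

A random-effects variant would inflate the weights' denominators by a between-study variance estimate, which matters here given the heterogeneity the meta-regression detects across study year, region, and model specification.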

  2. Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.

    PubMed

    Yang, Lu

    2009-01-01

    For many decades the accurate and precise determination of isotope ratios has remained of strong interest to many researchers due to its important applications in the earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, there are inconsistencies and errors evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines "state-of-the-art" methodologies presented in the literature for achieving precise and accurate determination of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of the combined uncertainty of the data using a few common mass bias correction models are outlined.

  3. A comparison of low back kinetic estimates obtained through posture matching, rigid link modeling and an EMG-assisted model.

    PubMed

    Parkinson, R J; Bezaire, M; Callaghan, J P

    2011-07-01

    This study examined errors introduced by a posture matching approach (3DMatch) relative to dynamic three-dimensional rigid link and EMG-assisted models. Eighty-eight lifting trials of various combinations of heights (floor, 0.67, 1.2 m), asymmetry (left, right and center) and mass (7.6 and 9.7 kg) were videotaped while spine postures, ground reaction forces, segment orientations and muscle activations were documented and used to estimate joint moments and forces (L5/S1). Posture matching over-predicted peak and cumulative extension moment (p < 0.0001 for all variables). There was no difference between peak compression estimates obtained with posture matching or EMG-assisted approaches (p = 0.7987). Posture matching over-predicted cumulative (p < 0.0001) compressive loading due to a bias in standing; however, individualized bias correction eliminated the differences. Therefore, posture matching provides a method to analyze industrial lifting exposures that will predict kinetic values similar to those of more sophisticated models, provided necessary corrections are applied. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. When do we care about political neutrality? The hypocritical nature of reaction to political bias

    PubMed Central

    Sulitzeanu-Kenan, Raanan

    2018-01-01

    Claims and accusations of political bias are common in many countries. The essence of such claims is a denunciation of alleged violations of political neutrality in the context of media coverage, legal and bureaucratic decisions, academic teaching etc. Yet the acts and messages that give rise to such claims are also embedded within a context of intergroup competition. Thus, in evaluating the seriousness of, and the need for taking corrective action in reaction to, a purportedly politically biased act, people may consider both the alleged normative violation and the political implications of the act/message for the evaluator’s ingroup. The question thus arises whether partisans react similarly to ingroup-aiding and ingroup-harming actions or messages which they perceive as politically biased. In three separate studies, conducted in two countries, we show that political considerations strongly affect partisans’ reactions to actions and messages that they perceive as politically biased. Namely, ingroup-harming biased messages/acts are considered more serious and are more likely to warrant corrective action in comparison to ingroup-aiding biased messages/acts. We conclude by discussing the implications of these findings for the implementation of measures intended to correct and prevent biases, and for the nature of conflict and competition between rival political groups. PMID:29723271

  5. When do we care about political neutrality? The hypocritical nature of reaction to political bias.

    PubMed

    Yair, Omer; Sulitzeanu-Kenan, Raanan

    2018-01-01

    Claims and accusations of political bias are common in many countries. The essence of such claims is a denunciation of alleged violations of political neutrality in the context of media coverage, legal and bureaucratic decisions, academic teaching etc. Yet the acts and messages that give rise to such claims are also embedded within a context of intergroup competition. Thus, in evaluating the seriousness of, and the need for taking corrective action in reaction to, a purportedly politically biased act, people may consider both the alleged normative violation and the political implications of the act/message for the evaluator's ingroup. The question thus arises whether partisans react similarly to ingroup-aiding and ingroup-harming actions or messages which they perceive as politically biased. In three separate studies, conducted in two countries, we show that political considerations strongly affect partisans' reactions to actions and messages that they perceive as politically biased. Namely, ingroup-harming biased messages/acts are considered more serious and are more likely to warrant corrective action in comparison to ingroup-aiding biased messages/acts. We conclude by discussing the implications of these findings for the implementation of measures intended to correct and prevent biases, and for the nature of conflict and competition between rival political groups.

  6. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    DOE PAGES

    Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.

    2016-04-12

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different, with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities, done for clear-sky scenes, use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than the original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. Lastly, this statistical significance is likely explained by the fact that the WANG correction also accounts for cloud cover, a condition not accounted for in the radiance closure experiments.

  7. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV),more » downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities – done for clear-sky scenes – use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. 
Lastly, this statistical significance is likely explained by the fact that the WANG correction also accounts for cloud cover, a condition not accounted for in the radiance closure experiments.
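The PWV comparison described above boils down to integrating water-vapor density over the sounding profile. A minimal sketch, not the authors' processing code (the Magnus constants and the synthetic profile are illustrative assumptions), shows why adding moisture aloft, as a dry-bias correction does, must increase the PWV:

```python
import numpy as np

R_V = 461.5     # specific gas constant of water vapor, J kg^-1 K^-1
RHO_W = 1000.0  # density of liquid water, kg m^-3

def saturation_vapor_pressure(t_k):
    """Saturation vapor pressure in Pa (Magnus approximation over water)."""
    t_c = t_k - 273.15
    return 610.94 * np.exp(17.625 * t_c / (t_c + 243.04))

def precipitable_water_vapor(z_m, t_k, rh_frac):
    """Liquid-equivalent depth (mm) of column water vapor, by trapezoidal
    integration of vapor density over height."""
    e = rh_frac * saturation_vapor_pressure(t_k)  # vapor pressure, Pa
    rho_v = e / (R_V * t_k)                       # vapor density, kg m^-3
    layers = 0.5 * (rho_v[1:] + rho_v[:-1]) * np.diff(z_m)
    return layers.sum() / RHO_W * 1000.0          # m of liquid water -> mm

# Illustrative sounding: constant lapse rate, uniform 50 % RH, and a
# "corrected" profile with up to 40 % more moisture in the upper troposphere.
z = np.linspace(0.0, 10000.0, 101)   # height, m
t = 288.0 - 0.0065 * z               # temperature, K
rh_raw = np.full_like(z, 0.5)
rh_corrected = rh_raw * (1.0 + 0.4 * z / z[-1])
```

Comparing `precipitable_water_vapor(z, t, rh_corrected)` against the raw profile gives the kind of PWV difference that the paper checks against ground-based microwave radiometer retrievals.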

  8. glopara files

    Science.gov Websites

    …prepbufr BUFR
    biascr.$CDUMP.$CDATE: time-dependent satellite bias correction file (abias, text)
    satang.$CDUMP.$CDATE: angle-dependent satellite bias correction (satang, text)
    sfcanl.$CDUMP.$CDATE: surface analysis (sfcanl, binary)
    tcvitl.$CDUMP.$CDATE: Tropical Storm Vitals (syndata.tcvitals.tm00, text)
    adpsfc.$CDUMP.$CDATE: Surface land…

  9. Being an honest broker of hydrology: Uncovering, communicating and addressing model error in a climate change streamflow dataset

    NASA Astrophysics Data System (ADS)

    Chegwidden, O.; Nijssen, B.; Pytlak, E.

    2017-12-01

    Any model simulation has errors, including errors in meteorological data, process understanding, model structure, and model parameters. These errors may express themselves as bias, timing lags, and differences in sensitivity between the model and the physical world. The evaluation and handling of these errors can greatly affect the legitimacy, validity and usefulness of the resulting scientific product. In this presentation we will discuss a case study of handling and communicating model errors during the development of a hydrologic climate change dataset for the Pacific Northwestern United States. The dataset was the result of a four-year collaboration between the University of Washington, Oregon State University, the Bonneville Power Administration, the United States Army Corps of Engineers and the Bureau of Reclamation. Along the way, the partnership facilitated the discovery of multiple systematic errors in the streamflow dataset. Through an iterative review process, some of those errors could be resolved. For the errors that remained, honest communication of the shortcomings promoted the dataset's legitimacy. Thoroughly explaining errors also improved ways in which the dataset would be used in follow-on impact studies. Finally, we will discuss the development of the "streamflow bias-correction" step often applied to climate change datasets that will be used in impact modeling contexts. We will describe the development of a series of bias-correction techniques through close collaboration among universities and stakeholders. Through that process, both universities and stakeholders learned about the others' expectations and workflows. This mutual learning process allowed for the development of methods that accommodated the stakeholders' specific engineering requirements. The iterative revision process also produced a functional and actionable dataset while preserving its scientific merit. 
We will describe how encountering earlier techniques' pitfalls allowed us to develop improved methods for scientists and practitioners alike.

  10. Sex determination from the femur in Portuguese populations with classical and machine-learning classifiers.

    PubMed

    Curate, F; Umbelino, C; Perinha, A; Nogueira, C; Silva, A M; Cunha, E

    2017-11-01

    The assessment of sex is of paramount importance in the establishment of the biological profile of a skeletal individual. Femoral relevance for sex estimation is indisputable, particularly when other exceedingly dimorphic skeletal regions are missing. As such, this study intended to generate population-specific osteometric models for the estimation of sex with the femur and to compare the accuracy of the models obtained through classical and machine-learning classifiers. A set of 15 standard femoral measurements was acquired in a training sample (100 females; 100 males) from the Coimbra Identified Skeletal Collection (University of Coimbra, Portugal) and models for sex classification were produced with logistic regression (LR), linear discriminant analysis (LDA), support vector machines (SVM), and reduced-error pruning trees (REPTree). Under cross-validation, univariable sectioning points generated with REPTree correctly estimated sex in 60.0-87.5% of cases (systematic error ranging from 0.0 to 37.0%), while multivariable models correctly classified sex in 84.0-92.5% of cases (bias from 0.0 to 7.0%). All models were assessed in a holdout sample (24 females; 34 males) from the 21st Century Identified Skeletal Collection (University of Coimbra, Portugal), with an allocation accuracy ranging from 56.9 to 86.2% (bias from 4.4 to 67.0%) in the univariable models, and from 84.5 to 89.7% (bias from 3.7 to 23.3%) in the multivariable models. This study makes available a detailed description of sexual dimorphism in femoral linear dimensions in two Portuguese identified skeletal samples, emphasizing the relevance of the femur for the estimation of sex in skeletal remains in diverse conditions of completeness and preservation. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
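The workflow of training several classifiers on a measurement matrix and comparing them under cross-validation can be sketched with scikit-learn. This is a generic illustration on synthetic data, not the study's femoral measurements or its exact model settings:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the training sample: 200 individuals,
# 15 linear measurements, a binary sex label.
X, y = make_classification(n_samples=200, n_features=15, n_informative=8,
                           random_state=0)

models = {
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC()),
}

# Mean 10-fold cross-validated allocation accuracy per classifier.
cv_accuracy = {name: cross_val_score(model, X, y, cv=10).mean()
               for name, model in models.items()}
```

The same `cross_val_score` call on a held-out sample (the study's 21st Century collection) would give the out-of-collection accuracies reported in the abstract.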

  11. High-Resolution Near Real-Time Drought Monitoring in South Asia

    NASA Astrophysics Data System (ADS)

    Aadhar, S.; Mishra, V.

    2017-12-01

    Droughts in South Asia affect food and water security and pose challenges for millions of people. For policy-making, planning, and management of water resources at the sub-basin or administrative levels, high-resolution datasets of precipitation and air temperature are required in near-real time. Here we develop a high-resolution (0.05 degree) bias-corrected precipitation and temperature dataset that can be used to monitor near-real-time drought conditions over South Asia. Moreover, the dataset can be used to monitor climatic extremes (heat waves, cold waves, and dry and wet anomalies) in South Asia. A distribution mapping method was applied to correct bias in precipitation and air temperature (maximum and minimum), and it performed well compared to the other bias correction method tested, which is based on linear scaling. Bias-corrected precipitation and temperature data were used to estimate the Standardized Precipitation Index (SPI) and Standardized Precipitation Evapotranspiration Index (SPEI) to assess historical and current drought conditions in South Asia. We evaluated drought severity and extent against satellite-based Normalized Difference Vegetation Index (NDVI) anomalies and the satellite-driven Drought Severity Index (DSI) at 0.05˚. We find that the bias-corrected high-resolution data effectively capture observed drought conditions as shown by the satellite-based drought estimates. This high-resolution near-real-time dataset can provide valuable information for decision-making at district and sub-basin levels.
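The distribution mapping used here corrects a model value by moving it to the observed value at the same empirical quantile. A minimal empirical quantile-mapping sketch (the gamma-distributed "observed" and "model" series are illustrative assumptions, not the paper's data):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_values):
    """Empirical distribution mapping: replace each model value with the
    observed value at the same empirical quantile of the historical period."""
    model_sorted = np.sort(model_hist)
    obs_sorted = np.sort(obs_hist)
    # Empirical quantile of each value within the model's historical climate.
    q = np.searchsorted(model_sorted, model_values, side="right") / len(model_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_sorted, q)

rng = np.random.default_rng(42)
obs = rng.gamma(shape=2.0, scale=5.0, size=5000)    # "observed" precipitation
model = rng.gamma(shape=2.0, scale=7.0, size=5000)  # wet-biased model series
corrected = quantile_map(model, obs, model)
```

After mapping, the corrected series reproduces the observed distribution, which is the property that makes the corrected fields usable for SPI/SPEI computation.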

  12. Bias-correction of PERSIANN-CDR Extreme Precipitation Estimates Over the United States

    NASA Astrophysics Data System (ADS)

    Faridzad, M.; Yang, T.; Hsu, K. L.; Sorooshian, S.

    2017-12-01

    Ground-based precipitation measurements can be sparse or even nonexistent over remote regions, which makes extreme-event analysis difficult. PERSIANN-CDR (CDR), with 30+ years of daily rainfall information, provides an opportunity to study precipitation for regions where ground measurements are limited. In this study, the use of CDR annual extreme precipitation for frequency analysis of extreme events over limited/ungauged basins is explored. The adjustment of CDR is implemented in two steps: (1) calculate the CDR bias correction factor at the available gauge locations based on a linear regression analysis of gauge and CDR annual maximum precipitation; and (2) extend the bias correction factor to locations where gauges are not available. The correction factors are estimated at gauge sites over various catchments, elevation zones, and climate regions, and the results were generalized to ungauged sites based on regional and climatic similarity. Case studies were conducted on 20 basins with diverse climates and altitudes in the Eastern and Western US. Cross-validation reveals that bias correction factors estimated on limited calibration data can be extended to regions with similar characteristics. The adjusted CDR estimates also consistently outperform gauge interpolation at validation sites. These results suggest that bias-adjusted CDR has potential for frequency analysis of extreme events, especially for regions with limited gauge observations.
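Step (1) above, a regression-based correction factor relating gauge and satellite annual maxima, can be sketched as a least-squares slope through the origin. The factor of 1.3 and the synthetic 30-year series are illustrative assumptions, not values from the study:

```python
import numpy as np

def bias_correction_factor(cdr_annual_max, gauge_annual_max):
    """Least-squares slope (through the origin) of gauge vs. satellite
    annual-maximum precipitation; multiplying CDR values by this factor
    aligns them with the gauge record."""
    cdr = np.asarray(cdr_annual_max, dtype=float)
    gauge = np.asarray(gauge_annual_max, dtype=float)
    return (cdr @ gauge) / (cdr @ cdr)

rng = np.random.default_rng(0)
cdr = rng.gamma(4.0, 20.0, size=30)            # 30 years of CDR annual maxima, mm
gauge = 1.3 * cdr + rng.normal(0, 5, size=30)  # gauge reads ~30 % higher
factor = bias_correction_factor(cdr, gauge)
```

Step (2) then amounts to applying `factor` at ungauged pixels judged similar in region and climate to the calibration gauges.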

  13. A 2 x 2 Taxonomy of Multilevel Latent Contextual Models: Accuracy-Bias Trade-Offs in Full and Partial Error Correction Models

    ERIC Educational Resources Information Center

    Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich

    2011-01-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…

  14. Identifying misbehaving models using baseline climate variance

    NASA Astrophysics Data System (ADS)

    Schultz, Colin

    2011-06-01

    The majority of projections made using general circulation models (GCMs) are conducted to help tease out the effects on a region, or on the climate system as a whole, of changing climate dynamics. Sun et al., however, used model runs from 20 different coupled atmosphere-ocean GCMs to try to understand a different aspect of climate projections: how bias correction, model selection, and other statistical techniques might affect the estimated outcomes. As a case study, the authors focused on predicting the potential change in precipitation for the Murray-Darling Basin (MDB), a 1-million-square-kilometer area in southeastern Australia that suffered a recent decade of drought that left many wondering about the potential impacts of climate change on this important agricultural region. The authors first compared the precipitation predictions made by the models with 107 years of observations, and they then made bias corrections to adjust the model projections to have the same statistical properties as the observations. They found that while the spread of the projected values was reduced, the average precipitation projection for the end of the 21st century barely changed. Further, the authors determined that interannual variations in precipitation for the MDB could be explained by random chance, where the precipitation in a given year was independent of that in previous years.
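Adjusting a model series to share the statistical properties of observations can be done, in its simplest form, by matching the first two moments. A minimal sketch (the synthetic 107-year series are illustrative assumptions, not the MDB data):

```python
import numpy as np

def mean_variance_correct(model, obs):
    """Rescale a model series so its mean and standard deviation match the
    observed series: a simple form of statistical bias correction."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return (model - model.mean()) / model.std() * obs.std() + obs.mean()

rng = np.random.default_rng(1)
obs = rng.normal(500.0, 120.0, size=107)    # observed annual precipitation, mm
model = rng.normal(430.0, 200.0, size=107)  # dry-biased, over-dispersed model
corrected = mean_variance_correct(model, obs)
```

Note how the correction narrows the model's spread toward the observed one while pinning the mean, which is consistent with the study's finding that bias correction reduced the spread of projections without much changing their average.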

  15. Modeling motivated misreports to sensitive survey questions.

    PubMed

    Böckenholt, Ulf

    2014-07-01

    Asking sensitive or personal questions in surveys or experimental studies can both lower response rates and increase item non-response and misreports. Although non-response is easily diagnosed, misreports are not. However, misreports cannot be ignored because they give rise to systematic bias. The purpose of this paper is to present a modeling approach that identifies misreports and corrects for them. Misreports are conceptualized as a motivated process under which respondents edit their answers before they report them. For example, systematic bias introduced by overreports of socially desirable behaviors or underreports of less socially desirable ones can be modeled, leading to more-valid inferences. The proposed approach is applied to a large-scale experimental study and shows that respondents who feel powerful tend to overclaim their knowledge.

  16. Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods

    NASA Astrophysics Data System (ADS)

    Werner, A. T.; Cannon, A. J.

    2015-06-01

    Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e., correlation tests) and distributional properties (i.e., tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3 day peak flow and 7 day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational datasets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational dataset. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7 day low flow events, regardless of reanalysis or observational dataset. 
Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis datasets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical datasets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.

  17. Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods

    NASA Astrophysics Data System (ADS)

    Werner, Arelia T.; Cannon, Alex J.

    2016-04-01

    Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. 
Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis data sets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical data sets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.
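The distributional tests used in this intercomparison (tests for equality of probability distributions between downscaled and observed extremes) can be illustrated with a two-sample Kolmogorov-Smirnov test. The Gumbel-distributed "peak flow" samples are illustrative assumptions, not the Peace River data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
obs_peaks = rng.gumbel(1000.0, 250.0, size=500)     # "observed" annual peak flows
good_model = rng.gumbel(1000.0, 250.0, size=500)    # downscaling with the right distribution
biased_model = rng.gumbel(1400.0, 250.0, size=500)  # downscaling with a shifted distribution

# KS statistic: maximum distance between the two empirical CDFs.
stat_good, p_good = ks_2samp(obs_peaks, good_model)
stat_biased, p_biased = ks_2samp(obs_peaks, biased_model)
```

A method whose simulated extremes come from a shifted distribution (like `biased_model`) fails the equality-of-distributions test, which is how a method such as CI can fail on winter low-flow events while passing elsewhere. Temporal sequencing is checked separately with correlation tests.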

  18. Prospective motion correction with volumetric navigators (vNavs) reduces the bias and variance in brain morphometry induced by subject motion.

    PubMed

    Tisdall, M Dylan; Reuter, Martin; Qureshi, Abid; Buckner, Randy L; Fischl, Bruce; van der Kouwe, André J W

    2016-02-15

    Recent work has demonstrated that subject motion produces systematic biases in the metrics computed by widely used morphometry software packages, even when the motion is too small to produce noticeable image artifacts. In the common situation where the control population exhibits different behaviors in the scanner when compared to the experimental population, these systematic measurement biases may produce significant confounds for between-group analyses, leading to erroneous conclusions about group differences. While previous work has shown that prospective motion correction can improve perceived image quality, here we demonstrate that, in healthy subjects performing a variety of directed motions, the use of the volumetric navigator (vNav) prospective motion correction system significantly reduces the motion-induced bias and variance in morphometry. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    PubMed

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with intensity inhomogeneous problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. The maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well solved. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Number-counts slope estimation in the presence of Poisson noise

    NASA Technical Reports Server (NTRS)

    Schmitt, Juergen H. M. M.; Maccacaro, Tommaso

    1986-01-01

    The slope determination of a power-law number-flux relationship is considered for the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting slope estimates are quite insensitive to the bias correction procedures applied, as long as only sources with signal-to-noise ratio five or greater are considered. However, if sources with signal-to-noise ratio less than five are included, the derived bias corrections depend sensitively on the shape of the error distribution.
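For context, the idealized version of this problem (exactly measured fluxes above a fixed threshold, no Poisson noise or background) has a classic maximum-likelihood slope estimator with a simple small-sample bias correction. The sketch below shows only that idealized estimator; the paper's method additionally accounts for Poisson errors, background noise, and varying limiting sensitivities:

```python
import numpy as np

def pareto_slope_mle(fluxes, s_min):
    """Maximum-likelihood power-law index for fluxes S >= s_min assuming a
    differential distribution n(S) ~ S^-(alpha+1), with the standard
    small-sample de-biasing correction (n-1)/n."""
    s = np.asarray(fluxes, dtype=float)
    s = s[s >= s_min]
    n = s.size
    log_sum = np.sum(np.log(s / s_min))
    alpha_mle = n / log_sum             # biased high for small samples
    alpha_unbiased = (n - 1) / log_sum  # first-order bias correction
    return alpha_mle, alpha_unbiased

# Simulated survey: 2000 sources drawn from a Pareto distribution with
# index 1.5 above a limiting flux of 1.0 (arbitrary units).
rng = np.random.default_rng(3)
true_alpha, s_min = 1.5, 1.0
fluxes = (rng.pareto(true_alpha, size=2000) + 1.0) * s_min
```

Running `pareto_slope_mle(fluxes, s_min)` recovers an index near 1.5, with the corrected estimate sitting slightly below the raw MLE.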
