Sample records for exposure measurement errors

  1. A toolkit for measurement error correction, with a focus on nutritional epidemiology

    PubMed Central

    Keogh, Ruth H; White, Ian R

    2014-01-01

    Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, with the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations it is not feasible to observe the true exposure, but one or more repeated exposure measurements may be available, for example blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors. PMID:24497385
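The regression-calibration idea summarized in record 1 (use replicate measurements to disattenuate a naive slope) can be sketched in a few lines of stdlib Python. This is a toy linear model with illustrative parameters (true slope 0.5, unit error variance), not the paper's nutritional-epidemiology implementation:

```python
import random

random.seed(1)
n = 20000
beta = 0.5          # true exposure-outcome slope (illustrative assumption)
sigma_u = 1.0       # SD of classical measurement error

x  = [random.gauss(0, 1) for _ in range(n)]          # true exposure (unobserved)
w1 = [xi + random.gauss(0, sigma_u) for xi in x]     # replicate measurement 1
w2 = [xi + random.gauss(0, sigma_u) for xi in x]     # replicate measurement 2
y  = [beta * xi + random.gauss(0, 1) for xi in x]    # continuous outcome

def mean(v): return sum(v) / len(v)
def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / (len(a) - 1)

wbar = [(a + b) / 2 for a, b in zip(w1, w2)]
beta_naive = cov(wbar, y) / cov(wbar, wbar)   # attenuated by the reliability ratio
# With independent errors, cov(w1, w2) estimates var(X), so the calibration
# slope lambda = var(X) / var(Wbar) is estimable from the replicates alone.
lam = cov(w1, w2) / cov(wbar, wbar)
beta_rc = beta_naive / lam                    # regression-calibration estimate
```

The naive slope lands near beta times lambda (about 0.33 here), while the calibrated estimate recovers roughly 0.5.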

  2. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    PubMed

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10-40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
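The false-positive mechanism this study describes (error in the main pollutant pushes part of its effect onto a correlated, truly null copollutant) can be reproduced in a toy two-predictor linear model. All numbers are illustrative, and ordinary least squares stands in for the paper's Poisson time-series model:

```python
import random, math

random.seed(2)
n = 20000
rho, sigma_u = 0.7, 1.0   # pollutant correlation and error SD (assumed)

x1, x2, w1, y = [], [], [], []
for _ in range(n):
    a = random.gauss(0, 1)
    b = rho * a + math.sqrt(1 - rho**2) * random.gauss(0, 1)  # corr(a, b) = rho
    x1.append(a); x2.append(b)
    w1.append(a + random.gauss(0, sigma_u))   # main pollutant, measured with error
    y.append(1.0 * a + random.gauss(0, 1))    # copollutant b has NO true effect

def mean(v): return sum(v) / len(v)
def cov(p, q):
    mp, mq = mean(p), mean(q)
    return sum((u - mp) * (v - mq) for u, v in zip(p, q)) / (len(p) - 1)

# Solve the 2x2 normal equations for the regression Y ~ W1 + X2
s11, s22, s12 = cov(w1, w1), cov(x2, x2), cov(w1, x2)
s1y, s2y = cov(w1, y), cov(x2, y)
det = s11 * s22 - s12**2
b_main  = (s1y * s22 - s2y * s12) / det   # attenuated (true value 1.0)
b_copol = (s2y * s11 - s1y * s12) / det   # spuriously positive (true value 0.0)
```

The null copollutant picks up a clearly positive coefficient, the same "transfer" phenomenon behind the inflated type I error rates reported above.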

  3. Bayesian adjustment for measurement error in continuous exposures in an individually matched case-control study.

    PubMed

    Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor

    2011-05-14

    In epidemiological studies explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application with different measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed.
In individually matched case-control studies, the use of conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.

  4. Random measurement error: Why worry? An example of cardiovascular risk factors.

    PubMed

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
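The point of this paper, that random error need not attenuate, is easy to reproduce: classical error in a confounder leaves residual confounding that can inflate the exposure coefficient. A stdlib-Python sketch with illustrative parameters (true exposure effect 0.3, confounder effect 0.5):

```python
import random, math

random.seed(3)
n = 20000

c  = [random.gauss(0, 1) for _ in range(n)]                      # true confounder
x  = [0.7 * ci + math.sqrt(0.51) * random.gauss(0, 1) for ci in c]  # exposure, Var = 1
y  = [0.3 * xi + 0.5 * ci + random.gauss(0, 1) for xi, ci in zip(x, c)]
c_err = [ci + random.gauss(0, 1) for ci in c]                    # mismeasured confounder

def mean(v): return sum(v) / len(v)
def cov(p, q):
    mp, mq = mean(p), mean(q)
    return sum((a - mp) * (b - mq) for a, b in zip(p, q)) / (len(p) - 1)

def adjusted_slope(x, z, y):
    """Coefficient on x in OLS of y on (x, z), via the 2x2 normal equations."""
    sxx, szz, sxz = cov(x, x), cov(z, z), cov(x, z)
    sxy, szy = cov(x, y), cov(z, y)
    return (sxy * szz - szy * sxz) / (sxx * szz - sxz**2)

b_true_adj  = adjusted_slope(x, c, y)      # ~0.30: confounding fully removed
b_noisy_adj = adjusted_slope(x, c_err, y)  # ~0.53: residual confounding inflates it
```

Adjusting for the error-prone confounder overestimates the exposure effect, the opposite of the "attenuation only" intuition the authors caution against.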

  5. Exposure Measurement Error in PM2.5 Health Effects Studies: A Pooled Analysis of Eight Personal Exposure Validation Studies

    EPA Science Inventory

    Background: Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typ...

  6. A simulation study to quantify the impacts of exposure ...

    EPA Pesticide Factsheets

    Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. Methods: ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Results: Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10-40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copoll...

  7. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error

    PubMed Central

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J.

    2017-01-01

    Summary: Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric; they are consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses. PMID:29354018

  8. Measurement Error and Environmental Epidemiology: A Policy Perspective

    PubMed Central

    Edwards, Jessie K.; Keil, Alexander P.

    2017-01-01

    Purpose of review: Measurement error threatens public health by producing bias in estimates of the population impact of environmental exposures. Quantitative methods to account for measurement bias can improve public health decision making. Recent findings: We summarize traditional and emerging methods to improve inference under a standard perspective, in which the investigator estimates an exposure response function, and a policy perspective, in which the investigator directly estimates population impact of a proposed intervention. Summary: Under a policy perspective, the analysis must be sensitive to errors in measurement of factors that modify the effect of exposure on outcome, must consider whether policies operate on the true or measured exposures, and may increasingly need to account for potentially dependent measurement error of two or more exposures affected by the same policy or intervention. Incorporating approaches to account for measurement error into such a policy perspective will increase the impact of environmental epidemiology. PMID:28138941

  9. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models.

    PubMed

    Hoffmann, Sabine; Laurier, Dominique; Rage, Estelle; Guihenneuc, Chantal; Ancelet, Sophie

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies.

  10. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models

    PubMed Central

    Laurier, Dominique; Rage, Estelle

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies. PMID:29408862

  11. Accounting for Berkson and Classical Measurement Error in Radon Exposure Using a Bayesian Structural Approach in the Analysis of Lung Cancer Mortality in the French Cohort of Uranium Miners.

    PubMed

    Hoffmann, Sabine; Rage, Estelle; Laurier, Dominique; Laroche, Pierre; Guihenneuc, Chantal; Ancelet, Sophie

    2017-02-01

    Many occupational cohort studies on underground miners have demonstrated that radon exposure is associated with an increased risk of lung cancer mortality. However, despite the deleterious consequences of exposure measurement error on statistical inference, these analyses traditionally do not account for exposure uncertainty. This might be due to the challenging nature of measurement error resulting from imperfect surrogate measures of radon exposure. Indeed, we are typically faced with exposure uncertainty in a time-varying exposure variable where both the type and the magnitude of error may depend on period of exposure. To address the challenge of accounting for multiplicative and heteroscedastic measurement error that may be of Berkson or classical nature, depending on the year of exposure, we opted for a Bayesian structural approach, which is arguably the most flexible method to account for uncertainty in exposure assessment. We assessed the association between occupational radon exposure and lung cancer mortality in the French cohort of uranium miners and found the impact of uncorrelated multiplicative measurement error to be of marginal importance. However, our findings indicate that the retrospective nature of exposure assessment that occurred in the earliest years of mining of this cohort as well as many other cohorts of underground miners might lead to an attenuation of the exposure-risk relationship. More research is needed to address further uncertainties in the calculation of lung dose, since this step will likely introduce important sources of shared uncertainty.
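The Berkson/classical distinction discussed in record 11 has a canonical consequence that a quick simulation can show: in a linear model, classical error attenuates the slope while pure Berkson error leaves it approximately unbiased (at the cost of extra variance). This sketch uses made-up linear-model parameters, not the cohort's dose-response model:

```python
import random

random.seed(4)
n = 20000
beta = 1.0   # true slope (illustrative)

# Classical error: we observe W = X + U, so W is noisier than the truth.
xc = [random.gauss(0, 1) for _ in range(n)]
wc = [v + random.gauss(0, 1) for v in xc]
yc = [beta * v + random.gauss(0, 1) for v in xc]

# Berkson error: the assigned dose W is recorded exactly, and the true
# individual exposure X = W + U scatters around it.
wb = [random.gauss(0, 1) for _ in range(n)]
xb = [v + random.gauss(0, 1) for v in wb]
yb = [beta * v + random.gauss(0, 1) for v in xb]

def slope(w, y):
    mw, my = sum(w) / len(w), sum(y) / len(y)
    sxy = sum((a - mw) * (b - my) for a, b in zip(w, y))
    sxx = sum((a - mw)**2 for a in w)
    return sxy / sxx

b_classical = slope(wc, yc)  # ~ beta/2 here (attenuated)
b_berkson   = slope(wb, yb)  # ~ beta (unbiased for this linear model)
```

The unbiasedness of Berkson error is specific to linear models; in nonlinear dose-response settings like the miners' analyses it can still distort the curve's shape, which is why the authors model both error types explicitly.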

  12. Measurement error in environmental epidemiology and the shape of exposure-response curves.

    PubMed

    Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E

    2011-09-01

    Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health.

  13. #2 - An Empirical Assessment of Exposure Measurement Error and Effect Attenuation in Bi-Pollutant Epidemiologic Models

    EPA Science Inventory

    Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation...

  14. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, and methodological advances are needed in the multi-pollutant setting.
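One of the correction methods this review lists, SIMEX, can be demonstrated in miniature: add extra error at increasing levels zeta, track the attenuated slope, and extrapolate a quadratic back to the no-error level zeta = -1. This toy version uses a single simulated draw per level (real SIMEX averages over many) and assumes the error variance is known:

```python
import random, math

random.seed(5)
n = 20000
beta, sig_u2 = 1.0, 0.5   # true slope and known error variance (illustrative)

x = [random.gauss(0, 1) for _ in range(n)]
w = [v + random.gauss(0, math.sqrt(sig_u2)) for v in x]
y = [beta * v + random.gauss(0, 1) for v in x]

def slope(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / sum((p - ma)**2 for p in a)

# Simulation step: inflate the error variance by a factor (1 + zeta).
zetas = [0.0, 0.5, 1.0, 1.5, 2.0]
slopes = []
for z in zetas:
    wz = [wi + math.sqrt(z * sig_u2) * random.gauss(0, 1) for wi in w]
    slopes.append(slope(wz, y))

# Extrapolation step: least-squares quadratic a + b*z + c*z^2 via its
# 3x3 normal equations, solved by Gauss-Jordan elimination.
def quad_fit(zs, fs):
    S = [[sum(z**(i + j) for z in zs) for j in range(3)] for i in range(3)]
    t = [sum(f * z**i for z, f in zip(zs, fs)) for i in range(3)]
    for col in range(3):
        piv = S[col][col]
        for j in range(col, 3): S[col][j] /= piv
        t[col] /= piv
        for r in range(3):
            if r != col:
                f = S[r][col]
                for j in range(col, 3): S[r][j] -= f * S[col][j]
                t[r] -= f * t[col]
    return t  # coefficients [a, b, c]

a, b, c = quad_fit(zetas, slopes)
b_naive = slopes[0]          # ~ beta/(1 + sig_u2) = 2/3
b_simex = a - b + c          # quadratic evaluated at zeta = -1
```

The quadratic extrapolant recovers most but not all of the attenuation (roughly 0.9 versus a true slope of 1.0), which is the known approximation error of SIMEX's extrapolation step.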

  15. Design considerations for case series models with exposure onset measurement error.

    PubMed

    Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V

    2013-02-28

    The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. Copyright © 2012 John Wiley & Sons, Ltd.

  16. Measurement error is often neglected in medical literature: a systematic review.

    PubMed

    Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten

    2018-06-01

    In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    EPA Science Inventory

    Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...

  18. Multipollutant measurement error in air pollution epidemiology studies arising from predicting exposures with penalized regression splines

    PubMed Central

    Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.

    2016-01-01

    Summary: Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals. PMID:27789915

  19. Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.

    PubMed

    Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A

    2017-05-01

    Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring the study design ensure spatial compatibility, that is, monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m³ difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.
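The nonparametric bootstrap used here for measurement-error-corrected estimates can be sketched generically: resample subjects, re-run the entire correction on each resample, and take percentile limits. The correction below is simple method-of-moments disattenuation with an assumed-known error variance, standing in for the paper's spatiotemporal machinery:

```python
import random

random.seed(6)
n, B = 4000, 200
beta, sig_u2 = 0.5, 1.0   # error variance assumed known (e.g. from validation data)

x = [random.gauss(0, 1) for _ in range(n)]
w = [v + random.gauss(0, sig_u2**0.5) for v in x]   # error-prone exposure
y = [beta * v + random.gauss(0, 1) for v in x]

def corrected_slope(w, y):
    """Method-of-moments disattenuation: swy / (sww - sigma_u^2)."""
    mw, my = sum(w) / len(w), sum(y) / len(y)
    sww = sum((a - mw)**2 for a in w) / (len(w) - 1)
    swy = sum((a - mw) * (b - my) for a, b in zip(w, y)) / (len(w) - 1)
    return swy / (sww - sig_u2)

est = corrected_slope(w, y)
# Nonparametric bootstrap: resample subjects with replacement and repeat
# the WHOLE correction, so its uncertainty propagates into the interval.
boot = []
for _ in range(B):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(corrected_slope([w[i] for i in idx], [y[i] for i in idx]))
boot.sort()
ci_lo, ci_hi = boot[int(0.025 * B)], boot[int(0.975 * B) - 1]
```

Re-running the full correction inside each resample is the key design choice: it is what lets the interval reflect the extra variance the correction itself introduces.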

  20. A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects.

    PubMed

    Heavner, Karyn; Burstyn, Igor

    2015-08-24

    Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to "small numbers." Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
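The attenuation that categorization inherits from a mismeasured continuous exposure can be seen directly by dichotomizing a noisy exposure and computing the odds ratio from the resulting 2x2 table; the logistic data-generating model and cutoff below are illustrative:

```python
import random, math

random.seed(7)
n = 50000

def simulate(sigma_u):
    """True exposure x, observed w = x + error, binary outcome from a logistic model."""
    x = [random.gauss(0, 1) for _ in range(n)]
    w = [v + random.gauss(0, sigma_u) for v in x]
    y = [1 if random.random() < 1 / (1 + math.exp(-(-1 + 1.0 * v))) else 0 for v in x]
    return w, y

def odds_ratio(w, y, cutoff):
    """OR from the 2x2 table after dichotomizing w at the cutoff."""
    a = b = c = d = 0
    for wi, yi in zip(w, y):
        if wi > cutoff:
            a, b = (a + 1, b) if yi else (a, b + 1)
        else:
            c, d = (c + 1, d) if yi else (c, d + 1)
    return (a * d) / (b * c)

w0, y0 = simulate(0.0)   # exposure measured without error
w1, y1 = simulate(1.0)   # classical error, sigma_u = 1

or_true  = odds_ratio(w0, y0, 0.0)
or_noisy = odds_ratio(w1, y1, 0.0)   # attenuated toward 1
```

Sweeping the cutoff instead of fixing it at 0 reproduces the paper's OR-versus-cutoff curves, including the instability at extreme cutoffs where the cells become sparse.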

  21. Negative control exposure studies in the presence of measurement error: implications for attempted effect estimate calibration

    PubMed Central

    Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George

    2018-01-01

    Background: Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. Methods: We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Results: Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Conclusions: Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present. PMID:29088358
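The hazard these papers identify can be made concrete with a toy confounded data-generating model: the exposure-outcome association exceeds the negative control's association when exposure is measured exactly, but classical error can push it below the negative control, wrecking the comparison. All parameters are illustrative:

```python
import random

random.seed(8)
n = 20000

u  = [random.gauss(0, 1) for _ in range(n)]          # unmeasured confounder
a  = [ui + random.gauss(0, 1) for ui in u]           # true exposure
nc = [ui + random.gauss(0, 1) for ui in u]           # negative control: no effect on y
y  = [0.5 * ai + 1.0 * ui + random.gauss(0, 1) for ai, ui in zip(a, u)]
a_err = [ai + random.gauss(0, 2**0.5) for ai in a]   # exposure measured with error

def slope(w, yy):
    mw, my = sum(w) / len(w), sum(yy) / len(yy)
    return sum((p - mw) * (q - my) for p, q in zip(w, yy)) / sum((p - mw)**2 for p in w)

s_a, s_nc, s_aerr = slope(a, y), slope(nc, y), slope(a_err, y)
# Without error: the exposure association (~1.0) clearly exceeds the negative
# control's confounding-only association (~0.75), suggesting a causal effect.
# With error: the exposure association (~0.5) falls BELOW the negative control,
# so the contrast no longer supports the (real) causal effect.
```

This matches the papers' conclusion: the contrast still carries qualitative evidence, but measurement error makes it unreliable for calibrating effect sizes.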

  3. #2 - An Empirical Assessment of Exposure Measurement Error ...

    EPA Pesticide Factsheets

    Background: differing degrees of exposure error across pollutants; previous work has focused on quantifying and accounting for exposure error in single-pollutant models; this work examines exposure errors for multiple pollutants and provides insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA's mission to protect human health and the environment. The HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA's strategic plan. More specifically, the division conducts research to characterize the movement of pollutants from the source to contact with humans. This multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between, and characterize processes that link, source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  4. Bayesian Analysis of Silica Exposure and Lung Cancer Using Human and Animal Studies.

    PubMed

    Bartell, Scott M; Hamra, Ghassan Badri; Steenland, Kyle

    2017-03-01

    Bayesian methods can be used to incorporate external information into epidemiologic exposure-response analyses of silica and lung cancer. We used data from a pooled mortality analysis of silica and lung cancer (n = 65,980), using untransformed and log-transformed cumulative exposure. Animal data came from chronic silica inhalation studies using rats. We conducted Bayesian analyses with informative priors based on the animal data and different cross-species extrapolation factors. We also conducted analyses with exposure measurement error corrections in the absence of a gold standard, assuming Berkson-type error that increased with increasing exposure. The pooled animal data exposure-response coefficient was markedly higher (log exposure) or lower (untransformed exposure) than the coefficient for the pooled human data. With 10-fold uncertainty, the animal prior had little effect on results for pooled analyses and only modest effects in some individual studies. One-fold uncertainty produced markedly different results for both pooled and individual studies. Measurement error correction had little effect in pooled analyses using log exposure. Using untransformed exposure, measurement error correction caused a 5% decrease in the exposure-response coefficient for the pooled analysis and marked changes in some individual studies. The animal prior had more impact for smaller human studies and for one-fold versus three- or 10-fold uncertainty. Adjustment for Berkson error using Bayesian methods had little effect on the exposure-response coefficient when exposure was log transformed or when the sample size was large. See video abstract at http://links.lww.com/EDE/B160.
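    The effect of prior uncertainty described above (1-fold vs. 10-fold) can be illustrated with a simple conjugate normal-normal update; this is a sketch with invented numbers, not the study's hierarchical model.

```python
# Hypothetical numbers: a human exposure-response coefficient (estimate and SE)
# combined with an animal-based informative prior via the conjugate normal model.
beta_hat, se = 0.40, 0.15          # human data likelihood: beta ~ N(beta_hat, se^2)

def posterior(prior_mean, prior_sd):
    """Normal-normal conjugate update: precision-weighted average."""
    w_data, w_prior = 1 / se**2, 1 / prior_sd**2
    mean = (w_data * beta_hat + w_prior * prior_mean) / (w_data + w_prior)
    sd = (w_data + w_prior) ** -0.5
    return mean, sd

# A diffuse (high-uncertainty) prior barely moves the estimate;
# a tight (low-uncertainty) prior pulls it strongly toward the animal value.
print(posterior(prior_mean=0.10, prior_sd=1.0))   # diffuse
print(posterior(prior_mean=0.10, prior_sd=0.1))   # tight
```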

  5. An empirical assessment of exposure measurement errors and effect attenuation in bi-pollutant epidemiologic models

    EPA Science Inventory

    Using multipollutant models to understand the combined health effects of exposure to multiple pollutants is becoming more common. However, the complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from ...

  6. An empirical assessment of exposure measurement error and effect attenuation in bi-pollutant epidemiologic models

    EPA Science Inventory

    Background: Using multipollutant models to understand combined health effects of exposure to multiple pollutants is becoming more common. However, complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates f...

  7. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.

  8. Instrumental variables vs. grouping approach for reducing bias due to measurement error.

    PubMed

    Batistatou, Evridiki; McNamee, Roseanne

    2008-01-01

    Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person, but the 'group mean OLS' method, in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the 'group mean OLS' estimator is equal to an IV estimator with the group mean used as the IV, but that the variance estimators for the two methods differ. We derive a simple expression for the bias in this estimator as a function of group size, reliability, and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (the 'group mean ranking' method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's proposal (CS IV), in which the second measure is used as an IV; the new IV estimator combines aspects of the 'group mean' and 'CS' strategies. All methods are evaluated in terms of bias, precision, and root mean square error via simulations and a dataset from occupational epidemiology. The 'group mean ranking' method does not offer much improvement over the 'group mean' method. Compared with the 'CS' method, the 'EVROS' method is less affected by low reliability of exposure.
    We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology, with or without replicate measurements. Our findings may also have implications for the use of aggregate variables in epidemiology to control for unmeasured confounding.
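    The 'group mean OLS' idea is straightforward to sketch with hypothetical data (EVROS IV and the ranking variant are not implemented here): averaging within a priori groups averages away individual-level measurement error, so regression on group means recovers a nearly unattenuated slope.

```python
import numpy as np

rng = np.random.default_rng(2)
n_groups, per_group = 20, 500

# Hypothetical setup: group-level exposure contrasts (e.g. job categories) plus
# within-group variation; a single error-prone measurement per subject.
group = np.repeat(np.arange(n_groups), per_group)
x = rng.normal(group / 5.0, 1.0)              # true exposure, contrast between groups
w = x + rng.normal(0, 1.5, x.size)            # single error-prone measurement
y = 2.0 * x + rng.normal(0, 1, x.size)        # linear exposure-response, slope 2

def slope(a, b):
    """OLS slope of b on a."""
    return np.cov(a, b)[0, 1] / np.var(a)

# Naive OLS on the mismeasured exposure is attenuated.
print("naive OLS:", slope(w, y))

# 'Group mean' estimator: OLS of the group means of y on the group means of w.
wbar = np.array([w[group == g].mean() for g in range(n_groups)])
ybar = np.array([y[group == g].mean() for g in range(n_groups)])
print("group mean OLS:", slope(wbar, ybar))
```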

  9. Survival analysis with error-prone time-varying covariates: a risk set calibration approach

    PubMed Central

    Liao, Xiaomei; Zucker, David M.; Li, Yi; Spiegelman, Donna

    2010-01-01

    Occupational, environmental, and nutritional epidemiologists are often interested in estimating the prospective effect of time-varying exposure variables, such as cumulative exposure or cumulative updated average exposure, in relation to chronic disease endpoints such as cancer incidence and mortality. From exposure validation studies, it is apparent that many of the variables of interest are measured with moderate to substantial error. Although the ordinary regression calibration approach is approximately valid and efficient for measurement error correction of relative risk estimates from the Cox model with time-independent point exposures when the disease is rare, it is not adaptable for use with time-varying exposures. By re-calibrating the measurement error model within each risk set, a risk set regression calibration (RRC) method is proposed for this setting. An algorithm for a bias-corrected point estimate of the relative risk using the RRC approach is presented, followed by the derivation of an estimate of its variance, resulting in a sandwich estimator. Emphasis is on methods applicable to the main study/external validation study design, which arises in important applications. Simulation studies under several assumptions about the error model demonstrated the validity and efficiency of the method in finite samples. The method was applied to a study of diet and cancer from Harvard’s Health Professionals Follow-up Study (HPFS). PMID:20486928

  10. Exposure measurement error in PM2.5 health effects studies: A pooled analysis of eight personal exposure validation studies

    PubMed Central

    2014-01-01

    Background Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typically available surrogate exposures. Methods Daily personal and ambient PM2.5, and when available sulfate, measurements were compiled from nine cities, over 2 to 12 days. True exposure was defined as personal exposure to PM2.5 of ambient origin. Since PM2.5 of ambient origin could only be determined for five cities, personal exposure to total PM2.5 was also considered. Surrogate exposures were estimated as ambient PM2.5 at the nearest monitor or predicted outside subjects’ homes. We estimated calibration coefficients by regressing true on surrogate exposures in random effects models. Results When monthly-averaged personal PM2.5 of ambient origin was used as the true exposure, calibration coefficients equaled 0.31 (95% CI:0.14, 0.47) for nearest monitor and 0.54 (95% CI:0.42, 0.65) for outdoor home predictions. Between-city heterogeneity was not found for outdoor home PM2.5 for either true exposure. Heterogeneity was significant for nearest monitor PM2.5, for both true exposures, but not after adjusting for city-average motor vehicle number for total personal PM2.5. Conclusions Calibration coefficients were <1, consistent with previously reported chronic health risks using nearest monitor exposures being under-estimated when ambient concentrations are the exposure of interest. Calibration coefficients were closer to 1 for outdoor home predictions, likely reflecting less spatial error. Further research is needed to determine how our findings can be incorporated in future health studies. PMID:24410940
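    The calibration-coefficient idea above can be sketched with made-up exposure distributions rather than the pooled study data: regress the "true" exposure (personal PM2.5 of ambient origin) on the surrogate (nearest-monitor concentration) and read off the slope.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days = 5000

# Hypothetical validation data: 'true' exposure is personal PM2.5 of ambient
# origin; the surrogate is the concentration at the nearest ambient monitor.
ambient = rng.gamma(shape=4, scale=3, size=n_days)      # monitor PM2.5 (ug/m3)
personal = 0.5 * ambient + rng.normal(0, 2, n_days)     # personal exposure of ambient origin

# Calibration coefficient: slope from regressing TRUE exposure on the SURROGATE.
lam = np.cov(personal, ambient)[0, 1] / np.var(ambient)
print(f"calibration coefficient: {lam:.2f}")
```

    A coefficient below 1, as here, implies that a health effect estimated against the surrogate understates the effect of the true personal exposure.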

  11. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology.

    PubMed

    Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta

    2017-09-19

    Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. 
On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
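    Of the approaches grouped under (a), the method of triads can be sketched quickly. The simulation below uses three invented instruments with independent errors, the same assumption the real method relies on: the validity coefficient of one instrument against unobserved true intake is estimated from the three pairwise correlations.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Hypothetical instruments: FFQ Q, diet diary F, biomarker M, all measuring
# true intake T with mutually independent errors.
t = rng.normal(0, 1, n)
q = 0.6 * t + rng.normal(0, 1.0, n)   # food-frequency questionnaire
f = 0.8 * t + rng.normal(0, 0.6, n)   # diet diary / food record
m = 0.7 * t + rng.normal(0, 0.8, n)   # biomarker

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

# Method of triads: validity coefficient of Q against T, never observing T.
rho_qt = np.sqrt(r(q, f) * r(q, m) / r(f, m))
print(f"estimated validity of Q: {rho_qt:.2f}")
print(f"actual corr(Q, T):       {r(q, t):.2f}")
```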

  12. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    PubMed

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
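    Classic (non-spatial) SIMEX, which the authors extend to spatial settings, can be sketched as follows with hypothetical data and an assumed-known error variance: deliberately add extra measurement error at increasing multiples, track how the slope degrades, then extrapolate back to the no-error case.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
sigma_u = 0.5                                  # assumed-known measurement error SD

x = rng.normal(0, 1, n)
w = x + rng.normal(0, sigma_u, n)              # error-prone exposure
y = 1.5 * x + rng.normal(0, 1, n)              # true slope 1.5

def slope(a, b):
    """OLS slope of b on a."""
    return np.cov(a, b)[0, 1] / np.var(a)

# SIMEX: add EXTRA error at multiples lambda of the known error variance,
# record the attenuated slope, then extrapolate beta(lambda) back to lambda = -1.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
betas = []
for lam in lams:
    sims = [slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y)
            for _ in range(20)]
    betas.append(np.mean(sims))

# Quadratic extrapolation of beta(lambda) to lambda = -1 (zero error variance)
coef = np.polyfit(lams, betas, deg=2)
print("naive slope:", betas[0])
print("SIMEX slope:", np.polyval(coef, -1.0))
```

    The quadratic extrapolant recovers most, though not all, of the attenuation; with larger error variances the choice of extrapolation function matters more.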

  13. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    PubMed

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. When replicates are expensive, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large; the regression calibration method; and the simulation extrapolation method. For the 2S design, the problem of 'missing' data was either ignored or addressed by multiple imputation. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the problematic implementation was, however, substantially improved by the use of multiple imputation. The EVROS IV method, under good or fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.

  14. Influence of exposure assessment and parameterization on exposure response. Aspects of epidemiologic cohort analysis using the Libby Amphibole asbestos worker cohort.

    PubMed

    Bateson, Thomas F; Kopylev, Leonid

    2015-01-01

    Recent meta-analyses of occupational epidemiology studies identified two important exposure data quality factors in predicting summary effect measures for asbestos-associated lung cancer mortality risk: sufficiency of job history data and percent coverage of work history by measured exposures. The objective was to evaluate different exposure parameterizations suggested in the asbestos literature using the Libby, MT asbestos worker cohort and to evaluate influences of exposure measurement error caused by historically estimated exposure data on lung cancer risks. Focusing on workers hired after 1959, when job histories were well-known and occupational exposures were predominantly based on measured exposures (85% coverage), we found that cumulative exposure alone, and with allowance of exponential decay, fit lung cancer mortality data similarly. Residence-time-weighted metrics did not fit well. Compared with previous analyses based on the whole cohort of Libby workers hired after 1935, when job histories were less well-known and exposures less frequently measured (47% coverage), our analyses based on higher quality exposure data yielded an effect size as much as 3.6 times higher. Future occupational cohort studies should continue to refine retrospective exposure assessment methods, consider multiple exposure metrics, and explore new methods of maintaining statistical power while minimizing exposure measurement error.
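    The two exposure parameterizations compared above, plain cumulative exposure versus cumulative exposure with exponential decay, can be computed from a job history as follows. The intensities, half-life, and dates are invented for illustration and are not taken from the Libby cohort.

```python
import numpy as np

# Hypothetical job history: annual average exposure intensities (fibers/cc)
# for one worker, year by year, evaluated at the end of follow-up.
years = np.arange(1960, 1970)                      # employment years
intensity = np.array([2.0, 2.0, 1.5, 1.5, 1.0, 1.0, 0.5, 0.5, 0.5, 0.5])
eval_year = 1990

# Metric 1: plain cumulative exposure (intensity x duration, fiber-years/cc)
cumulative = intensity.sum()

# Metric 2: cumulative exposure with exponential decay (half-life h years),
# down-weighting exposures received long before the evaluation date.
half_life = 10.0
lam = np.log(2) / half_life
decayed = np.sum(intensity * np.exp(-lam * (eval_year - years)))

print(f"cumulative: {cumulative:.2f} fiber-yr/cc, decayed: {decayed:.2f}")
```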

  15. The ipRGC-Driven Pupil Response with Light Exposure, Refractive Error, and Sleep.

    PubMed

    Abbott, Kaleb S; Queener, Hope M; Ostrin, Lisa A

    2018-04-01

    We investigated links between the intrinsically photosensitive retinal ganglion cells, light exposure, refractive error, and sleep. Results showed that morning melatonin was associated with light exposure, with modest differences in sleep quality between myopes and emmetropes. Findings suggest a complex relationship between light exposure and these physiological processes. Intrinsically photosensitive retinal ganglion cells (ipRGCs) signal environmental light, with pathways to the midbrain to control pupil size and circadian rhythm. Evidence suggests that light exposure plays a role in refractive error development. Our goal was to investigate links between light exposure, ipRGCs, refractive error, and sleep. Fifty subjects, aged 17-40, participated (19 emmetropes and 31 myopes). A subset of subjects (n = 24) wore an Actiwatch Spectrum for 1 week. The Pittsburgh Sleep Quality Index (PSQI) was administered, and saliva samples were collected for melatonin analysis. The post-illumination pupil response (PIPR) to 1 s and 5 s long- and short-wavelength stimuli was measured. Pupil metrics included the 6 s and 30 s PIPR and early and late area under the curve. Subjects spent 104.8 ± 46.6 min outdoors per day over the previous week. Morning melatonin concentration (6.9 ± 3.5 pg/ml) was significantly associated with time outdoors and objectively measured light exposure (P = .01 and .002, respectively). Pupil metrics were not significantly associated with light exposure or refractive error. PSQI scores indicated good sleep quality for emmetropes (score 4.2 ± 2.3) and poor sleep quality for myopes (5.6 ± 2.2, P = .04). We found that light exposure and time outdoors influenced morning melatonin concentration. No differences in melatonin or the ipRGC-driven pupil response were observed between refractive error groups, although myopes exhibited poor sleep quality compared to emmetropes. 
Findings suggest that a complex relationship between light exposure, ipRGCs, refractive error, and sleep exists.

  16. Role-modeling and medical error disclosure: a national survey of trainees.

    PubMed

    Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani

    2014-03-01

    To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.

  17. Residential magnetic fields predicted from wiring configurations: II. Relationships To childhood leukemia.

    PubMed

    Thomas, D C; Bowman, J D; Jiang, L; Jiang, F; Peters, J M

    1999-10-01

    Case-control data on childhood leukemia in Los Angeles County were reanalyzed with residential magnetic fields predicted from the wiring configurations of nearby transmission and distribution lines. As described in a companion paper, the 24-h means of the magnetic field's magnitude in subjects' homes were predicted by a physically based regression model that had been fitted to 24-h measurements and wiring data. In addition, magnetic field exposures were adjusted for the most likely form of exposure assessment errors: classic errors for the 24-h measurements and Berkson errors for the predictions from wire configurations. Although the measured fields had no association with childhood leukemia (P for trend=.88), the risks were significant for predicted magnetic fields above 1.25 mG (odds ratio=2.00, 95% confidence interval=1.03-3.89), and a significant dose-response was seen (P for trend=.02). When exposures were determined by a combination of predictions and measurements that corrects for errors, the odds ratio (odds ratio=2.19, 95% confidence interval=1.12-4.31) and the trend (P=.007) showed somewhat greater significance. These findings support the hypothesis that magnetic fields from electrical lines are causally related to childhood leukemia but that this association has been inconsistent among epidemiologic studies due to different types of exposure assessment error. In these data, the leukemia risks from a child's residential magnetic field exposure appear to be better assessed by wire configurations than by 24-h area measurements. However, the predicted fields only partially account for the effect of the Wertheimer-Leeper wire code in a multivariate analysis and do not completely explain why these wire codes have been so often associated with childhood leukemia. The most plausible explanation for our findings is that the causal factor is another magnetic field exposure metric correlated to both wire code and the field's time-averaged magnitude.
Copyright 1999 Wiley-Liss, Inc.

  18. LIMITATIONS ON THE USES OF MULTIMEDIA EXPOSURE MEASUREMENTS FOR MULTIPATHWAY EXPOSURE ASSESSMENT - PART II: EFFECTS OF MISSING DATA AND IMPRECISION

    EPA Science Inventory

    Multimedia data from two probability-based exposure studies were investigated in terms of how missing data and measurement-error imprecision affected estimation of population parameters and associations. Missing data resulted mainly from individuals' refusing to participate in c...

  19. Panel discussion review: session 1--exposure assessment and related errors in air pollution epidemiologic studies.

    PubMed

    Sarnat, Jeremy A; Wilson, William E; Strand, Matthew; Brook, Jeff; Wyzga, Ron; Lumley, Thomas

    2007-12-01

    Examining the validity of exposure metrics used in air pollution epidemiologic models has been a key focus of recent exposure assessment studies. The objective of this work has been, largely, to determine what a given exposure metric represents and to quantify and reduce any potential errors resulting from using these metrics in lieu of true exposure measurements. The current manuscript summarizes the presentations of the co-authors from a recent EPA workshop, held in December 2006, dealing with the role and contributions of exposure assessment in addressing these issues. Results are presented from US and Canadian exposure and pollutant measurement studies as well as theoretical simulations to investigate what both particulate and gaseous pollutant concentrations represent and the potential errors resulting from their use in air pollution epidemiologic studies. Quantifying the association between ambient pollutant concentrations and corresponding personal exposures has led to the concept of defining attenuation factors, or alpha. Specifically, characterizing pollutant-specific estimates for alpha was shown to be useful in developing regression calibration methods involving PM epidemiologic risk estimates. For some gaseous pollutants such as NO2 and SO2, the associations between ambient concentrations and personal exposures were shown to be complex and still poorly understood. Results from recent panel studies suggest that ambient NO2 measurements may, in some locations, be serving as surrogates to traffic pollutants, including traffic-related PM2.5, hopanes, steranes, and oxidized nitrogen compounds (rather than NO2).

  20. Bias correction by use of errors-in-variables regression models in studies with K-X-ray fluorescence bone lead measurements.

    PubMed

    Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard

    2011-01-01

    In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables (EIV) regression allows for correction of bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that ordinary least squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
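    The use of instrument-reported uncertainties can be sketched as a reliability-ratio correction, a simplification of full EIV regression with invented numbers: the mean reported error variance is subtracted from the observed variance to estimate reliability, and the naive slope is divided by it.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10_000

# Hypothetical KXRF-style data: each bone lead measurement comes with a
# reported uncertainty (the SD of its measurement error).
true_pb = rng.gamma(shape=4, scale=5, size=n)          # true bone lead (ug/g)
err_sd = rng.uniform(3, 6, n)                          # instrument-reported uncertainty
obs_pb = true_pb + rng.normal(0, err_sd)
y = 0.05 * true_pb + rng.normal(0, 1, n)               # outcome, true slope 0.05

def slope(a, b):
    """OLS slope of b on a."""
    return np.cov(a, b)[0, 1] / np.var(a)

# Reliability from the reported uncertainties:
#   rho = var(true) / var(observed)
#       = (var(obs) - mean reported error variance) / var(obs)
rho = (np.var(obs_pb) - np.mean(err_sd**2)) / np.var(obs_pb)

naive = slope(obs_pb, y)
corrected = naive / rho          # reliability-ratio (EIV-style) correction
print(f"reliability: {rho:.2f}, naive: {naive:.3f}, corrected: {corrected:.3f}")
```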

  1. Air pollution exposure prediction approaches used in air pollution epidemiology studies

    EPA Science Inventory

    Epidemiological studies of the health effects of air pollution have traditionally relied upon surrogates of personal exposures, most commonly ambient concentration measurements from central-site monitors. However, this approach may introduce exposure prediction errors and miscla...

  2. Poisoning Safety Fact Sheet (2015)

    MedlinePlus

    ... in emergency departments after getting into a medication, accounting for 68% of medication-related visits for young ... and under (31% of dosing errors), followed by measurement errors (30%). 2 • For every 10 poison exposures ...

  3. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error.

    PubMed

    Chang, Howard H; Peng, Roger D; Dominici, Francesca

    2011-10-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties during the period 1999-2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.

  4. Dynamic changes in brain activity during prism adaptation.

    PubMed

    Luauté, Jacques; Schwartz, Sophie; Rossetti, Yves; Spiridon, Mona; Rode, Gilles; Boisson, Dominique; Vuilleumier, Patrik

    2009-01-07

    Prism adaptation induces not only short-term sensorimotor plasticity, but also longer-term reorganization in the neural representation of space. We used event-related fMRI to study dynamic changes in brain activity during both early and prolonged exposure to visual prisms. Participants performed a pointing task before, during, and after prism exposure. Measures of trial-by-trial pointing errors and corrections allowed parametric analyses of brain activity as a function of performance. We show that during the earliest phase of prism exposure, anterior intraparietal sulcus was primarily implicated in error detection, whereas parieto-occipital sulcus was implicated in error correction. Cerebellum activity showed progressive increases during prism exposure, in accordance with a key role for spatial realignment. This time course further suggests that the cerebellum might promote neural changes in superior temporal cortex, which was selectively activated during the later phase of prism exposure and could mediate the effects of prism adaptation on cognitive spatial representations.

  5. Application of alternative spatiotemporal metrics of ambient air pollution exposure in a time-series epidemiological study in Atlanta

    EPA Science Inventory

    Exposure error in studies of ambient air pollution and health that use city-wide measures of exposure may be substantial for pollutants that exhibit spatiotemporal variability. Alternative spatiotemporal metrics of exposure for traffic-related and regional pollutants were applied...

  6. Effects of Exposure Measurement Error in the Analysis of Health Effects from Traffic-Related Air Pollution

    EPA Science Inventory

    In large epidemiological studies, many researchers use surrogates of air pollution exposure such as geographic information system (GIS)-based characterizations of traffic or simple housing characteristics. It is important to validate these surrogates against measured pollutant co...

  7. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    PubMed

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
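
    A compact sketch of the SIMEX idea on simulated data (hypothetical numbers; assumes the measurement-error standard deviation is known): extra error is deliberately added at increasing levels zeta, the naive estimate is tracked as a function of zeta, and the fitted trend is extrapolated back to zeta = -1, the no-error case. The common quadratic extrapolant used here reduces, but does not fully remove, the attenuation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
beta_true, sigma_u = 1.0, 0.8       # sigma_u assumed known for this sketch

x = rng.normal(0.0, 1.0, n)                  # true covariate
w = x + rng.normal(0.0, sigma_u, n)          # error-prone measurement
y = beta_true * x + rng.normal(0.0, 0.5, n)

# Simulation step: inflate the measurement error variance by (1 + zeta),
# refit the naive slope, and average over replications at each zeta
zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for z in zetas:
    sims = [np.polyfit(w + rng.normal(0.0, np.sqrt(z) * sigma_u, n), y, 1)[0]
            for _ in range(50)]
    slopes.append(float(np.mean(sims)))

# Extrapolation step: quadratic in zeta, evaluated at zeta = -1
coef = np.polyfit(zetas, slopes, 2)
beta_simex = float(np.polyval(coef, -1.0))
naive = slopes[0]
print(round(naive, 2), round(beta_simex, 2))
```

    With these numbers the naive slope is attenuated to about 0.61, and the SIMEX extrapolation recovers most, though not all, of the true slope of 1.0.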

  8. An assessment of air pollutant exposure methods in Mexico City, Mexico.

    PubMed

    Rivera-González, Luis O; Zhang, Zhenzhen; Sánchez, Brisa N; Zhang, Kai; Brown, Daniel G; Rojas-Bracho, Leonora; Osornio-Vargas, Alvaro; Vadillo-Ortega, Felipe; O'Neill, Marie S

    2015-05-01

    Geostatistical interpolation methods to estimate individual exposure to outdoor air pollutants can be used in pregnancy cohorts where personal exposure data are not collected. Our objectives were to a) develop four assessment methods (citywide average (CWA); nearest monitor (NM); inverse distance weighting (IDW); and ordinary Kriging (OK)), and b) compare daily metrics and cross-validations of interpolation models. We obtained 2008 hourly data from Mexico City's outdoor air monitoring network for PM10, PM2.5, O3, CO, NO2, and SO2 and constructed daily exposure metrics for 1,000 simulated individual locations across five populated geographic zones. Descriptive statistics from all methods were calculated for dry and wet seasons, and by zone. We also evaluated the IDW and OK methods' ability to predict measured concentrations at monitors using cross-validation and a coefficient of variation (COV). All methods were performed using SAS 9.3, except ordinary Kriging, which was modeled using R's gstat package. Overall, mean concentrations and standard deviations were similar among the different methods for each pollutant. Correlations between methods were generally high (r=0.77 to 0.99). However, ranges of estimated concentrations determined by NM, IDW, and OK were wider than the ranges for CWA. Root mean square errors for OK were consistently equal to or lower than for the IDW method. OK standard errors varied considerably between pollutants and the computed COVs ranged from 0.46 (least error) for SO2 and PM10 to 3.91 (most error) for PM2.5. OK predicted concentrations measured at the monitors better than IDW and NM. Given the similarity in results for the exposure methods, OK is preferred because this method alone provides predicted standard errors, which can be incorporated in statistical models.
The daily estimated exposures calculated using these different exposure methods provide flexibility to evaluate multiple windows of exposure during pregnancy, not just trimester or pregnancy-long exposures. Many studies evaluating associations between outdoor air pollution and adverse pregnancy outcomes rely on outdoor air pollution monitoring data linked to information gathered from large birth registries, and often lack residence location information needed to estimate individual exposure. This study simulated 1,000 residential locations to evaluate four air pollution exposure assessment methods, and describes possible exposure misclassification from using spatial averaging versus geostatistical interpolation models. An implication of this work is that policies to reduce air pollution and exposure among pregnant women based on epidemiologic literature should take into account possible error in estimates of effect when spatial averages alone are evaluated.
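
    Three of the four assignment methods compared above (CWA, NM, and IDW) can be sketched in a few lines; the monitor coordinates and PM10 readings below are hypothetical, and the study's actual computations used SAS and R's gstat rather than this code:

```python
import numpy as np

def idw(monitor_xy, monitor_vals, point_xy, power=2):
    """Inverse-distance-weighted estimate at one residential location."""
    d = np.linalg.norm(monitor_xy - point_xy, axis=1)
    if np.any(d == 0):                     # location coincides with a monitor
        return float(monitor_vals[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * monitor_vals) / np.sum(w))

# Hypothetical monitor coordinates (km) and daily PM10 readings (ug/m3)
monitors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
readings = np.array([40.0, 60.0, 50.0, 70.0])
home = np.array([2.0, 2.0])                # simulated residential location

citywide_avg = float(readings.mean())                                   # CWA
nearest = float(readings[np.argmin(np.linalg.norm(monitors - home, axis=1))])  # NM
idw_est = idw(monitors, readings, home)                                 # IDW

print(citywide_avg, nearest, round(idw_est, 1))
```

    The IDW estimate lands between the nearest-monitor value and the citywide average, which is the qualitative pattern behind the wider concentration ranges reported for NM, IDW, and OK relative to CWA.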

  9. Volume error analysis for lung nodules attached to pulmonary vessels in an anthropomorphic thoracic phantom

    NASA Astrophysics Data System (ADS)

    Kinnard, Lisa M.; Gavrielides, Marios A.; Myers, Kyle J.; Zeng, Rongping; Peregoy, Jennifer; Pritchard, William; Karanian, John W.; Petrick, Nicholas

    2008-03-01

    With high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that it is impacted by characteristics of the patient, the software tool and the CT system. The overall goal of this research is to quantify the various sources of measurement error and, when possible, minimize their effects. In the current study, we estimated nodule volume from ten repeat scans of an anthropomorphic phantom containing two synthetic spherical lung nodules (diameters: 5 and 10 mm; density: -630 HU), using a 16-slice Philips CT with 20, 50, 100 and 200 mAs exposures and 0.8 and 3.0 mm slice thicknesses. True volume was estimated from an average of diameter measurements, made using digital calipers. We report variance and bias results for volume measurements as a function of slice thickness, nodule diameter, and X-ray exposure.

  10. Modeling individual exposures to ambient PM2.5 in the diabetes and the environment panel study (DEPS)

    EPA Science Inventory

    Air pollution epidemiology studies of ambient fine particulate matter (PM2.5) often use outdoor concentrations as exposure surrogates, which can induce exposure error. The goal of this study was to improve ambient PM2.5 exposure assessments for a repeated measurements study with ...

  11. Direct Measures of Character Mislocalizations with Masked/Unmasked Exposures.

    ERIC Educational Resources Information Center

    Chastain, Garvin; And Others

    Butler (1980) compared errors representing intrusions and mislocalizations on 3x3 letter displays under pattern-mask versus no-mask conditions and found that pattern masking increased character mislocalization errors (naming a character in the display but not in the target position as being the target) over intrusion errors (naming a character not…

  12. Berkson error adjustment and other exposure surrogates in occupational case-control studies, with application to the Canadian INTEROCC study.

    PubMed

    Oraby, Tamer; Sivaganesan, Siva; Bowman, Joseph D; Kincl, Laurel; Richardson, Lesley; McBride, Mary; Siemiatycki, Jack; Cardis, Elisabeth; Krewski, Daniel

    2018-05-01

    Many epidemiological studies assessing the relationship between exposure and disease are carried out without data on individual exposures. When this barrier is encountered in occupational studies, the subject exposures are often evaluated with a job-exposure matrix (JEM), which consists of mean exposures for occupational categories measured on a comparable group of workers. One of the objectives of the seven-country case-control study of occupational exposure and brain cancer risk, INTEROCC, was to investigate the relationship of occupational exposure to electromagnetic fields (EMF) in different frequency ranges and brain cancer risk. In this paper, we use the Canadian data from INTEROCC to estimate the odds of developing brain tumours due to occupational exposure to EMF. The first step was to find the best EMF exposure surrogate among the arithmetic mean, the geometric mean, and the mean of the log-normal exposure distribution for each occupation in the JEM, comparing each surrogate with Berkson error adjustments via numerical approximation of the likelihood function. Contrary to previous studies of Berkson errors in JEMs, we found that the geometric mean was the best exposure surrogate. This analysis provided no evidence that cumulative lifetime exposure to extremely low frequency magnetic fields increases brain cancer risk, a finding consistent with other recent epidemiological studies.

  13. Mediation analysis when a continuous mediator is measured with error and the outcome follows a generalized linear model

    PubMed Central

    Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J.

    2014-01-01

    Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured, the validity of mediation analysis can be severely undermined. In this paper, we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using the method of moments, regression calibration and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. PMID:25220625

  14. Robust best linear estimator for Cox regression with instrumental variables in whole cohort and surrogates with additive measurement error in calibration sample

    PubMed Central

    Wang, Ching-Yun; Song, Xiao

    2017-01-01

    SUMMARY Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women’s Health Initiative. PMID:27546625

  15. Microenvironment Tracker (MicroTrac)

    EPA Science Inventory

    Epidemiologic studies have shown associations between air pollution concentrations measured at central-site ambient monitors and adverse health outcomes. Using central-site concentrations as exposure surrogates, however, can lead to exposure errors due to time spent in various in...

  16. MEASUREMENT ERROR ESTIMATION AND CORRECTION METHODS TO MINIMIZE EXPOSURE MISCLASSIFICATION IN EPIDEMIOLOGICAL STUDIES: PROJECT SUMMARY

    EPA Science Inventory

    This project summary highlights recent findings from research undertaken to develop improved methods to assess potential human health risks related to drinking water disinfection byproduct (DBP) exposures.

  17. On the use of mobile phones and wearable microphones for noise exposure measurements: Calibration and measurement accuracy

    NASA Astrophysics Data System (ADS)

    Dumoulin, Romain

    Despite the fact that noise-induced hearing loss remains the number one occupational disease in developed countries, individual noise exposure levels are still rarely known and infrequently tracked. Indeed, efforts to standardize noise exposure levels present disadvantages such as costly instrumentation and difficulties associated with on-site implementation. Given their advanced technical capabilities and widespread daily usage, mobile phones could be used to measure noise levels and make noise monitoring more accessible. However, the use of mobile phones for measuring noise exposure is currently limited due to the lack of formal procedures for their calibration and challenges regarding the measurement procedure. Our research investigated the calibration of mobile phone-based solutions for measuring noise exposure using a mobile phone's built-in microphones and wearable external microphones. The proposed calibration approach integrated corrections that took into account microphone placement error. The corrections were of two types: frequency-dependent, using a digital filter, and noise level-dependent, based on the difference between the C-weighted and A-weighted noise levels measured by the phone. The electro-acoustical limitations and measurement calibration procedure of the mobile phone were investigated. The study also sought to quantify the effect of noise exposure characteristics on the accuracy of calibrated mobile phone measurements. Measurements were carried out in reverberant and semi-anechoic chambers with several mobile phone units of the same model, two types of external devices (an earpiece and a headset with an in-line microphone) and an acoustical test fixture (ATF). The proposed calibration approach significantly improved the accuracy of the noise level measurements in diffuse and free fields, with better results in the diffuse field and with ATF positions causing little or no acoustic shadowing.
Several sources of errors and uncertainties were identified including the errors associated with the inter-unit-variability, the presence of signal saturation and the microphone placement relative to the source and the wearer. The results of the investigations and validation measurements led to recommendations regarding the measurement procedure including the use of external microphones having lower sensitivity and provided the basis for a standardized and unique factory default calibration method intended for implementation in any mobile phone. A user-defined adjustment was proposed to minimize the errors associated with calibration and the acoustical field. Mobile phones implementing the proposed laboratory calibration and used with external microphones showed great potential as noise exposure instruments. Combined with their potential as training and prevention tools, the expansion of their use could significantly help reduce the risks of noise-induced hearing loss.

  18. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
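
    The classical/Berkson distinction at the heart of this analysis can be illustrated with a simple simulation (all numbers hypothetical): in a linear model, classical error in the covariate attenuates the slope toward zero, while pure Berkson error leaves it essentially unbiased:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
beta = 1.0

# Classical error: the measurement W = X + U scatters around the truth,
# as with a personal mobile monitoring device
x = rng.normal(10.0, 2.0, n)
w_classical = x + rng.normal(0.0, 2.0, n)
y = beta * x + rng.normal(0.0, 1.0, n)
slope_classical = np.polyfit(w_classical, y, 1)[0]   # attenuated toward 0

# Berkson error: the truth X = W + U scatters around the assigned value,
# as when everyone is assigned their fixed-site monitor reading
w_berkson = rng.normal(10.0, 2.0, n)
x_b = w_berkson + rng.normal(0.0, 2.0, n)
y_b = beta * x_b + rng.normal(0.0, 1.0, n)
slope_berkson = np.polyfit(w_berkson, y_b, 1)[0]     # unbiased in linear models

print(round(slope_classical, 2), round(slope_berkson, 2))
```

    With equal truth and error variances, the classical-error slope is halved while the Berkson-error slope stays near 1.0, which is why mixtures of the two types, plus autocorrelation, need the dedicated bias analysis the abstract describes.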

  19. The importance of the exposure metric in air pollution epidemiology studies: When does it matter, and why?

    EPA Science Inventory

    Exposure error in ambient air pollution epidemiologic studies may introduce bias and/or attenuation of the health risk estimate, reduce statistical significance, and lower statistical power. Alternative exposure metrics are increasingly being used in place of central-site measure...

  20. Estimating Personal Exposures from Ambient Air Pollution Measures - Using Meta-Analysis to Assess Measurement Error

    EPA Science Inventory

    Although ambient concentrations of particulate matter ≤ 10μm (PM10) are often used as proxies for total personal exposure, correlation (r) between ambient and personal PM10 concentrations varies. Factors underlying this variation and its effect on he...

  1. Phase measurement error in summation of electron holography series.

    PubMed

    McLeod, Robert A; Bergen, Michael; Malac, Marek

    2014-06-01

    Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random-walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs from the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  2. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy

    PubMed Central

    Pennington, Audrey Flak; Strickland, Matthew J.; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G.; Hansen, Craig; Darrow, Lyndsey A.

    2018-01-01

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially-resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a research line-source dispersion model (RLINE) at 250 meter resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (rS>0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from −2% to −10% bias). PMID:27966666
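
    The exposure comparison described above, a time-weighted average over the complete residential history versus the birth address alone, reduces to a few lines; the addresses, concentrations, and durations below are hypothetical:

```python
# Hypothetical residential history for one pregnancy: annual-average
# mobile-source PM2.5 (ug/m3) at each address and days lived there
addresses = [
    {"pm25": 3.2, "days": 120},   # address at conception
    {"pm25": 1.1, "days": 160},   # moved mid-pregnancy; address at birth
]

total_days = sum(a["days"] for a in addresses)
weighted = sum(a["pm25"] * a["days"] for a in addresses) / total_days

birth_address_only = addresses[-1]["pm25"]   # the commonly used proxy
print(round(weighted, 2), birth_address_only)
```

    For a mother who moves from a higher- to a lower-exposure address, the birth-address proxy (1.1 here) understates the pregnancy-long average (2.0 here); in the study the two estimates were nonetheless highly correlated, so the resulting bias in associations was modest.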

  3. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy.

    PubMed

    Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A

    2017-09-01

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (rS>0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias).

  4. Exposure assessment in investigations of waterborne illness: a quantitative estimate of measurement error

    PubMed Central

    Jones, Andria Q; Dewey, Catherine E; Doré, Kathryn; Majowicz, Shannon E; McEwen, Scott A; Waltner-Toews, David

    2006-01-01

    Background Exposure assessment is typically the greatest weakness of epidemiologic studies of disinfection by-products (DBPs) in drinking water, which largely stems from the difficulty in obtaining accurate data on individual-level water consumption patterns and activity. Thus, surrogate measures for such waterborne exposures are commonly used. Little attention, however, has been directed towards formal validation of these measures. Methods We conducted a study in the City of Hamilton, Ontario (Canada) in 2001–2002, to assess the accuracy of two surrogate measures of home water source: (a) urban/rural status as assigned using residential postal codes, and (b) mapping of residential postal codes to municipal water systems within a Geographic Information System (GIS). We then assessed the accuracy of a commonly-used surrogate measure of an individual's actual drinking water source, namely, their home water source. Results The surrogates for home water source provided good classification of residents served by municipal water systems (approximately 98% predictive value), but did not perform well in classifying those served by private water systems (average: 63.5% predictive value). More importantly, we found that home water source was a poor surrogate measure of the individuals' actual drinking water source(s), being associated with high misclassification errors. Conclusion This study demonstrated substantial misclassification errors associated with a surrogate measure commonly used in studies of drinking water disinfection byproducts. Further, the limited accuracy of two surrogate measures of an individual's home water source warrants caution in their use in exposure classification methodology. While these surrogates are inexpensive and convenient, they should not be substituted for direct collection of accurate data pertaining to the subjects' waterborne disease exposure.
In instances where such surrogates must be used, estimation of the misclassification and its subsequent effects are recommended for the interpretation and communication of results. Our results also lend support for further investigation into the quantification of the exposure misclassification associated with these surrogate measures, which would provide useful estimates for consideration in interpretation of waterborne disease studies. PMID:16729887
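
    For reference, a predictive value here is simply the proportion of subjects assigned to a category by the surrogate whose verified status agrees. The validation counts below are hypothetical, chosen only to reproduce the approximate predictive values reported above:

```python
# Hypothetical 2x2 validation counts: surrogate classification of home
# water source (municipal vs private) against the verified source
surrogate_municipal = {"true_municipal": 490, "true_private": 10}
surrogate_private = {"true_municipal": 73, "true_private": 127}

# Predictive value = agreement among those the surrogate puts in a category
pv_municipal = surrogate_municipal["true_municipal"] / sum(surrogate_municipal.values())
pv_private = surrogate_private["true_private"] / sum(surrogate_private.values())

print(round(pv_municipal, 3), round(pv_private, 3))
```

    With these counts the surrogate classifies municipal-system residents well (0.98) but private-system residents poorly (0.635), matching the asymmetry the study reports.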

  5. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Thyroid cancer following scalp irradiation: a reanalysis accounting for uncertainty in dosimetry.

    PubMed

    Schafer, D W; Lubin, J H; Ron, E; Stovall, M; Carroll, R J

    2001-09-01

    In the 1940s and 1950s, over 20,000 children in Israel were treated for tinea capitis (scalp ringworm) by irradiation to induce epilation. Follow-up studies showed that the radiation exposure was associated with the development of malignant thyroid neoplasms. Despite this clear evidence of an effect, the magnitude of the dose-response relationship is much less clear because of probable errors in individual estimates of dose to the thyroid gland. Such errors have the potential to bias dose-response estimation, a potential that was not widely appreciated at the time of the original analyses. We revisit this issue, describing in detail how errors in dosimetry might occur, and we develop a new dose-response model that takes the uncertainties of the dosimetry into account. Our model for the uncertainty in dosimetry is a complex and new variant of the classical multiplicative Berkson error model, having components of classical multiplicative measurement error as well as missing data. Analysis of the tinea capitis data suggests that measurement error in the dosimetry has only a negligible effect on dose-response estimation and inference as well as on the modifying effect of age at exposure.

  7. A simulation study to determine the attenuation and bias in health risk estimates due to exposure measurement error in bi-pollutant models

    EPA Science Inventory

    To understand the combined health effects of exposure to ambient air pollutant mixtures, it is becoming more common to include multiple pollutants in epidemiologic models. However, the complex spatial and temporal pattern of ambient pollutant concentrations and related exposures ...

  8. Accounting for independent nondifferential misclassification does not increase certainty that an observed association is in the correct direction.

    PubMed

    Greenland, Sander; Gustafson, Paul

    2006-07-01

    Researchers sometimes argue that their exposure-measurement errors are independent of other errors and are nondifferential with respect to disease, resulting in estimation bias toward the null. Among well-known problems with such arguments are that independence and nondifferentiality are harder to satisfy than ordinarily appreciated (e.g., because of correlation of errors in questionnaire items, and because of uncontrolled covariate effects on error rates); small violations of independence or nondifferentiality may lead to bias away from the null; and, if exposure is polytomous, the bias produced by independent nondifferential error is not always toward the null. The authors add to this list by showing that, in a 2 x 2 table (for which independent nondifferential error produces bias toward the null), accounting for independent nondifferential error does not reduce the p value even though it increases the point estimate. Thus, such accounting should not increase certainty that an association is present.
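The bias toward the null that this record takes as its starting point is easy to reproduce by simulation. The sketch below is purely illustrative: the sample size, sensitivity, specificity, and true odds ratio are arbitrary values chosen for the demonstration, not figures from the paper, and it shows only the point-estimate attenuation, not the authors' p-value argument.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# True binary exposure and a disease it raises (true OR = 3)
x = rng.binomial(1, 0.3, n)
logit = -2.0 + np.log(3.0) * x
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Independent nondifferential misclassification: the same error
# probabilities apply regardless of disease status y
sens, spec = 0.8, 0.9
flip_down = rng.binomial(1, 1 - sens, n)   # exposed recorded as unexposed
flip_up = rng.binomial(1, 1 - spec, n)     # unexposed recorded as exposed
x_obs = np.where(x == 1, 1 - flip_down, flip_up)

def odds_ratio(exposure, disease):
    a = np.sum((exposure == 1) & (disease == 1))
    b = np.sum((exposure == 1) & (disease == 0))
    c = np.sum((exposure == 0) & (disease == 1))
    d = np.sum((exposure == 0) & (disease == 0))
    return (a * d) / (b * c)

print(round(odds_ratio(x, y), 2))      # ≈ 3.0 with the true exposure
print(round(odds_ratio(x_obs, y), 2))  # attenuated toward 1 with the noisy one
```

In a 2 × 2 table this setup always pulls the observed odds ratio toward 1, which is exactly the premise the authors show does not translate into a smaller p value after correction.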

  9. Robust best linear estimator for Cox regression with instrumental variables in whole cohort and surrogates with additive measurement error in calibration sample.

    PubMed

    Wang, Ching-Yun; Song, Xiao

    2016-11-01

    Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women's Health Initiative. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Estimation of health effects of prenatal methylmercury exposure using structural equation models.

    PubMed

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe; Weihe, Pal

    2002-10-14

    Observational studies in epidemiology always involve concerns regarding validity, especially measurement error, confounding, missing data, and other problems that may affect the study outcomes. Widely used standard statistical techniques, such as multiple regression analysis, may to some extent adjust for these shortcomings. However, structural equations may incorporate most of these considerations, thereby providing overall adjusted estimations of associations. This approach was used in a large epidemiological data set from a prospective study of developmental methylmercury toxicity. Structural equation models were developed for assessment of the association between biomarkers of prenatal mercury exposure and neuropsychological test scores in 7-year-old children. Eleven neurobehavioral outcomes were grouped into motor function and verbally mediated function. Adjustment for local dependence and item bias was necessary for a satisfactory fit of the model, but had little impact on the estimated mercury effects. The mercury effect on the two latent neurobehavioral functions was similar to the strongest effects seen for individual test scores of motor function and verbal skills. Adjustment for contaminant exposure to polychlorinated biphenyls (PCBs) changed the estimates only marginally, but the mercury effect could be reduced to non-significance by assuming a large measurement error for the PCB biomarker. The structural equation analysis allows correction for measurement error in exposure variables, incorporation of multiple outcomes and incomplete cases. This approach therefore deserves to be applied more frequently in the analysis of complex epidemiological data sets.

  11. Assessment of ecologic regression in the study of lung cancer and indoor radon.

    PubMed

    Stidley, C A; Samet, J M

    1994-02-01

    Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.

  12. Kinematic markers dissociate error correction from sensorimotor realignment during prism adaptation.

    PubMed

    O'Shea, Jacinta; Gaveau, Valérie; Kandel, Matthieu; Koga, Kazuo; Susami, Kenji; Prablanc, Claude; Rossetti, Yves

    2014-03-01

    This study investigated the motor control mechanisms that enable healthy individuals to adapt their pointing movements during prism exposure to a rightward optical shift. In the prism adaptation literature, two processes are typically distinguished. Strategic motor adjustments are thought to drive the pattern of rapid endpoint error correction typically observed during the early stage of prism exposure. This is distinguished from so-called 'true sensorimotor realignment', normally measured with a different pointing task, at the end of prism exposure, which reveals a compensatory leftward 'prism after-effect'. Here, we tested whether each mode of motor compensation - strategic adjustments versus 'true sensorimotor realignment' - could be distinguished, by analyzing patterns of kinematic change during prism exposure. We hypothesized that fast feedforward versus slower feedback error corrective processes would map onto two distinct phases of the reach trajectory. Specifically, we predicted that feedforward adjustments would drive rapid compensation of the initial (acceleration) phase of the reach, resulting in the rapid reduction of endpoint errors typically observed early during prism exposure. By contrast, we expected visual-proprioceptive realignment to unfold more slowly and to reflect feedback influences during the terminal (deceleration) phase of the reach. The results confirmed these hypotheses. Rapid error reduction during the early stage of prism exposure was achieved by trial-by-trial adjustments of the motor plan, which were proportional to the endpoint error feedback from the previous trial. By contrast, compensation of the terminal reach phase unfolded slowly across the duration of prism exposure. Even after 100 trials of pointing through prisms, adaptation was incomplete, with participants continuing to exhibit a small rightward shift in both the reach endpoints and in the terminal phase of reach trajectories. 
Individual differences in the degree of adaptation of the terminal reach phase predicted the magnitude of prism after-effects. In summary, this study identifies distinct kinematic signatures of fast strategic versus slow sensorimotor realignment processes, which combine to adjust motor performance to compensate for a prismatic shift. © 2013 Elsevier Ltd. All rights reserved.

  13. Using Marginal Structural Measurement-Error Models to Estimate the Long-term Effect of Antiretroviral Therapy on Incident AIDS or Death

    PubMed Central

    Cole, Stephen R.; Jacobson, Lisa P.; Tien, Phyllis C.; Kingsley, Lawrence; Chmiel, Joan S.; Anastos, Kathryn

    2010-01-01

    To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus–positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding. PMID:19934191

  14. Regression calibration for models with two predictor variables measured with error and their interaction, using instrumental variables and longitudinal data.

    PubMed

    Strand, Matthew; Sillau, Stefan; Grunwald, Gary K; Rabinovitch, Nathan

    2014-02-10

    Regression calibration provides a way to obtain unbiased estimators of fixed effects in regression models when one or more predictors are measured with error. Recent development of measurement error methods has focused on models that include interaction terms between measured-with-error predictors, and separately, methods for estimation in models that account for correlated data. In this work, we derive explicit and novel forms of regression calibration estimators and associated asymptotic variances for longitudinal models that include interaction terms, when data from instrumental and unbiased surrogate variables are available but not the actual predictors of interest. The longitudinal data are fit using linear mixed models that contain random intercepts and account for serial correlation and unequally spaced observations. The motivating application involves a longitudinal study of exposure to two pollutants (predictors) - outdoor fine particulate matter and cigarette smoke - and their association in interactive form with levels of a biomarker of inflammation, leukotriene E4 (LTE4, outcome) in asthmatic children. Because the exposure concentrations could not be directly observed, we used measurements from a fixed outdoor monitor and urinary cotinine concentrations as instrumental variables, and we used concentrations of fine ambient particulate matter and cigarette smoke measured with error by personal monitors as unbiased surrogate variables. We applied the derived regression calibration methods to estimate coefficients of the unobserved predictors and their interaction, allowing for direct comparison of toxicity of the different pollutants. We used simulations to verify accuracy of inferential methods based on asymptotic theory. Copyright © 2013 John Wiley & Sons, Ltd.
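The core idea behind regression calibration is simpler than the longitudinal, interaction-term setting of this paper: replace the error-prone measurement by an estimate of the true exposure and rescale the naive slope by the attenuation factor. The sketch below shows only that basic classical-error case with two replicates; it uses made-up parameter values and is not the authors' instrumental-variable estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta = 2.0

x = rng.normal(0, 1, n)             # true exposure (never observed)
y = beta * x + rng.normal(0, 1, n)  # outcome model

# Two replicate measurements with classical additive error W = X + U
sigma_u = 1.0
w1 = x + rng.normal(0, sigma_u, n)
w2 = x + rng.normal(0, sigma_u, n)
w = (w1 + w2) / 2

# Naive slope: regress y on the replicate mean -> attenuated
beta_naive = np.cov(w, y)[0, 1] / np.var(w)

# Regression calibration: estimate the attenuation factor
# lambda = var(x) / var(w) from the replicate disagreement
var_u_hat = np.mean((w1 - w2) ** 2) / 2    # error variance per replicate
var_x_hat = np.var(w) - var_u_hat / 2      # mean of 2 replicates carries var_u/2
lam = var_x_hat / np.var(w)
beta_rc = beta_naive / lam

print(round(beta_naive, 2))  # ≈ 1.33, i.e. 2 * var(x)/var(w) = 2/1.5
print(round(beta_rc, 2))     # ≈ 2.0 after calibration
```

The paper's contribution is to extend this rescaling to mixed models with two error-prone predictors and their interaction, using instrumental variables in place of replicates.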

  15. Exploring the Relationship of Task Performance and Physical and Cognitive Fatigue During a Daylong Light Precision Task.

    PubMed

    Yung, Marcus; Manji, Rahim; Wells, Richard P

    2017-11-01

    Our aim was to explore the relationship between fatigue and operation system performance during a simulated light precision task over an 8-hr period using a battery of physical (central and peripheral) and cognitive measures. Fatigue may play an important role in the relationship between poor ergonomics and deficits in quality and productivity. However, well-controlled laboratory studies in this area have several limitations, including the lack of work relevance of fatigue exposures and lack of both physical and cognitive measures. There remains a need to understand the relationship between physical and cognitive fatigue and task performance at exposure levels relevant to realistic production or light precision work. Errors and fatigue measures were tracked over the course of a micropipetting task. Fatigue responses from 10 measures and errors in pipetting technique, precision, and targeting were submitted to principal component analysis to descriptively analyze features and patterns. Fatigue responses and error rates contributed to three principal components (PCs), accounting for 50.9% of total variance. Fatigue responses grouped within the three PCs reflected central and peripheral upper extremity fatigue, postural sway, and changes in oculomotor behavior. In an 8-hr light precision task, error rates shared similar patterns to both physical and cognitive fatigue responses, and/or increases in arousal level. The findings provide insight toward the relationship between fatigue and operation system performance (e.g., errors). This study contributes to a body of literature documenting task errors and fatigue, reflecting physical (both central and peripheral) and cognitive processes.

  16. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    PubMed

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.

  17. The efficacy of protoporphyrin as a predictive biomarker for lead exposure in canvasback ducks: effect of sample storage time

    USGS Publications Warehouse

    Franson, J.C.; Hohman, W.L.; Moore, J.L.; Smith, M.R.

    1996-01-01

    We used 363 blood samples collected from wild canvasback ducks (Aythya valisineria) at Catahoula Lake, Louisiana, U.S.A. to evaluate the effect of sample storage time on the efficacy of erythrocytic protoporphyrin as an indicator of lead exposure. The protoporphyrin concentration of each sample was determined by hematofluorometry within 5 min of blood collection and after refrigeration at 4 °C for 24 and 48 h. All samples were analyzed for lead by atomic absorption spectrophotometry. Based on a blood lead concentration of ≥0.2 ppm wet weight as positive evidence for lead exposure, the protoporphyrin technique resulted in overall error rates of 29%, 20%, and 19% and false negative error rates of 47%, 29% and 25% when hematofluorometric determinations were made on blood at 5 min, 24 h, and 48 h, respectively. False positive error rates were less than 10% for all three measurement times. The accuracy of the 24-h erythrocytic protoporphyrin classification of blood samples as positive or negative for lead exposure was significantly greater than the 5-min classification, but no improvement in accuracy was gained when samples were tested at 48 h. The false negative errors were probably due, at least in part, to the lag time between lead exposure and the increase of blood protoporphyrin concentrations. False negatives resulted in an underestimation of the true number of canvasbacks exposed to lead, indicating that hematofluorometry provides a conservative estimate of lead exposure.
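The three error rates this record reports are all simple functions of the 2 × 2 classification counts. The counts below are hypothetical, chosen only to match the record's sample size and illustrate the definitions; they are not the study's actual cross-tabulation.

```python
# Hypothetical confusion counts for a screening test against a gold standard
# (blood lead >= 0.2 ppm wet weight taken as the true exposure status)
tp, fn = 75, 25    # truly exposed birds: detected / missed by protoporphyrin
fp, tn = 20, 243   # truly unexposed birds: false alarms / correct negatives

n = tp + fn + fp + tn
overall_error = (fn + fp) / n             # all misclassified birds
false_negative_rate = fn / (tp + fn)      # share of exposed birds missed
false_positive_rate = fp / (fp + tn)      # share of unexposed birds flagged

print(f"overall error: {overall_error:.0%}")
print(f"false negative rate: {false_negative_rate:.0%}")
print(f"false positive rate: {false_positive_rate:.0%}")
```

Because false negatives dominate here, the screen undercounts exposed birds, which is the "conservative estimate" the authors describe.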

  18. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies.

    PubMed

    Brzozek, Christopher; Benke, Kurt K; Zeleke, Berihun M; Abramson, Michael J; Benke, Geza

    2018-03-26

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.

  19. Theoretical and experimental errors for in situ measurements of plant water potential.

    PubMed

    Shackel, K A

    1984-07-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (-0.6 to -1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design.

  20. Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1

    PubMed Central

    Shackel, Kenneth A.

    1984-01-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701

  1. Quantifying the potential impact of measurement error in an investigation of autism spectrum disorder (ASD).

    PubMed

    Heavner, Karyn; Newschaffer, Craig; Hertz-Picciotto, Irva; Bennett, Deborah; Burstyn, Igor

    2014-05-01

    The Early Autism Risk Longitudinal Investigation (EARLI), an ongoing study of a risk-enriched pregnancy cohort, examines genetic and environmental risk factors for autism spectrum disorders (ASDs). We simulated the potential effects of both measurement error (ME) in exposures and misclassification of ASD-related phenotype (assessed as Autism Observation Scale for Infants (AOSI) scores) on measures of association generated under this study design. We investigated the impact on the power to detect true associations with exposure and the false positive rate (FPR) for a non-causal correlate of exposure (X2, r=0.7) for continuous AOSI score (linear model) versus dichotomised AOSI (logistic regression) when the sample size (n), degree of ME in exposure, and strength of the expected (true) OR (eOR) between exposure and AOSI varied. Exposure was a continuous variable in all linear models and dichotomised at one SD above the mean in logistic models. Simulations reveal complex patterns and suggest that: (1) There was attenuation of associations that increased with eOR and ME; (2) The FPR was considerable under many scenarios; and (3) The FPR has a complex dependence on the eOR, ME and model choice, but was greater for logistic models. The findings will stimulate work examining cost-effective strategies to reduce the impact of ME in realistic sample sizes and affirm the importance for EARLI of investment in biological samples that help precisely quantify a wide range of environmental exposures.
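The two phenomena this simulation study quantifies, attenuation of the causal association and a high FPR for a non-causal correlate of exposure, can be seen in a stripped-down linear sketch. Everything below (sample size, effect size, error variance, the normal-approximation test) is an illustrative simplification of the design, not the EARLI simulation itself.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n, reps = 300, 400
slopes_w, sig_x2 = [], 0

for _ in range(reps):
    x = rng.normal(0, 1, n)                 # true causal exposure
    # non-causal correlate of exposure, correlation r = 0.7 with x
    x2 = 0.7 * x + math.sqrt(1 - 0.7**2) * rng.normal(0, 1, n)
    w = x + rng.normal(0, 1, n)             # x measured with classical error
    y = 0.5 * x + rng.normal(0, 1, n)       # outcome depends only on x

    # attenuated slope from regressing y on the mismeasured exposure
    slopes_w.append(np.cov(w, y)[0, 1] / np.var(w))

    # crude significance test of the x2-y correlation (Fisher z approx)
    r = np.corrcoef(x2, y)[0, 1]
    z = r * math.sqrt(n - 3)                # ~ N(0,1) under no correlation
    if abs(z) > 1.96:
        sig_x2 += 1

print(round(float(np.mean(slopes_w)), 2))  # ≈ 0.25: attenuated from true 0.5
print(round(sig_x2 / reps, 2))             # FPR for the non-causal x2
```

Because x2 is error-free while the causal exposure is noisy, x2 "wins" significance in nearly every replicate, mirroring the considerable FPR the authors report.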

  2. Fringe localization requirements for three-dimensional flow visualization of shock waves in diffuse-illumination double-pulse holographic interferometry

    NASA Technical Reports Server (NTRS)

    Decker, A. J.

    1982-01-01

    A theory of fringe localization in rapid-double-exposure, diffuse-illumination holographic interferometry was developed. The theory was then applied to compare holographic measurements with laser anemometer measurements of shock locations in a transonic axial-flow compressor rotor. The computed fringe localization error was found to agree well with the measured localization error. It is shown how the view orientation and the curvature and positional variation of the strength of a shock wave are used to determine the localization error and to minimize it. In particular, it is suggested that the view direction not deviate from tangency at the shock surface by more than 30 degrees.

  3. Multiple indicators, multiple causes measurement error models

    DOE PAGES

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; ...

    2014-06-25

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.

  4. Multiple Indicators, Multiple Causes Measurement Error Models

    PubMed Central

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.

    2014-01-01

    Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535

  5. Multiple indicators, multiple causes measurement error models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.
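The MIMIC ME records above, like the tinea capitis reanalysis earlier in this list, hinge on the distinction between Berkson and classical error. A minimal linear contrast makes the difference concrete; the parameter values below are arbitrary illustrations, and the sketch is not the MIMIC ME model itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 200_000, 1.5

# Classical error: observed W = X + U; regressing Y on W attenuates the slope
x = rng.normal(0, 1, n)
w_classical = x + rng.normal(0, 1, n)
y = beta * x + rng.normal(0, 1, n)
slope_classical = np.cov(w_classical, y)[0, 1] / np.var(w_classical)

# Berkson error: true X = W + U scatters around an assigned value W
# (e.g. a group-level dose estimate); the slope on W stays unbiased
w_berkson = rng.normal(0, 1, n)
x_b = w_berkson + rng.normal(0, 1, n)
y_b = beta * x_b + rng.normal(0, 1, n)
slope_berkson = np.cov(w_berkson, y_b)[0, 1] / np.var(w_berkson)

print(round(slope_classical, 2))  # ≈ 0.75: attenuated by var(x)/var(w) = 1/2
print(round(slope_berkson, 2))    # ≈ 1.5: unbiased, only the variance inflates
```

Dosimetry systems typically mix both error types, which is why models like the one above and the tinea capitis reanalysis treat them as separate components.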

  6. Some potential errors in the measurement of mercury gas exchange at the soil surface using a dynamic flux chamber.

    PubMed

    Gillis, A; Miller, D R

    2000-10-09

    A series of controlled environment experiments was conducted to examine the use of a dynamic flux chamber to measure soil emission and absorption of total gaseous mercury (TGM). Uncertainty about the appropriate airflow rates through the chamber and chamber exposure to ambient wind are shown to be major sources of potential error. Soil surface mercury flux measurements over a range of chamber airflow rates showed a positive linear relationship between flux rate and airflow rate through the chamber. Mercury flux measurements using the chamber in an environmental wind tunnel showed that exposure of the system to ambient winds decreased the measured flux rates by 40% at a wind speed of 1.0 m s(-1) and by 90% at a wind speed of 2.0 m s(-1). Wind tunnel measurements also showed that the chamber footprint was limited to the area of soil inside the chamber, and that there is little uncertainty about the footprint size in dry soil.

  7. A zero-augmented generalized gamma regression calibration to adjust for covariate measurement error: A case of an episodically consumed dietary intake

    PubMed Central

    Agogo, George O.

    2017-01-01

    Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for bias in the association, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as the 24-hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing a serious challenge in regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR- and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method. PMID:27704599
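The regression calibration logic described above can be sketched in a simplified linear setting. This is a hedged illustration only: all variable names and values below are hypothetical, and the paper's actual calibration model is a zero-augmented generalized gamma, not ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical true long-term intake and an error-prone FFQ-style report of it.
true_intake = rng.gamma(2.0, 1.0, n)
ffq = 0.3 + true_intake + rng.normal(0.0, 1.0, n)  # recall bias + extra noise

# Outcome depends on the true intake; the slope 2.0 is the target association.
y = 1.0 + 2.0 * true_intake + rng.normal(0.0, 0.5, n)

# Calibration substudy: a 24HR-like reference with classical error, on a subset.
sub = rng.choice(n, 500, replace=False)
ref = true_intake[sub] + rng.normal(0.0, 0.3, 500)

def ols_slope(x, y):
    # Ordinary least-squares slope of y on x.
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Step 1: calibration model -- regress the reference on the FFQ report.
b1 = ols_slope(ffq[sub], ref)
b0 = ref.mean() - b1 * ffq[sub].mean()

# Step 2: replace each FFQ value with its calibrated prediction E[T | FFQ].
intake_hat = b0 + b1 * ffq

# Step 3: fit the outcome model with the naive vs the calibrated exposure.
naive = ols_slope(ffq, y)              # attenuated toward the null
calibrated = ols_slope(intake_hat, y)  # approximately recovers the true 2.0
```

Because the FFQ's extra noise inflates var(ffq), the naive slope is attenuated toward the null; rescaling the exposure through the calibration fit approximately restores the true association.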

  8. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates, both on a per-measurement-unit basis and on a per-interquartile-range (IQR) basis, in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and the PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance of the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios ranging from 18% to 92% on a per-unit-of-measurement basis and from 18% to 86% on an IQR basis. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the modelled CO error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and the type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
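The contrast between classical and Berkson error described above can be reproduced in a minimal simulation. This is a linear analogue of the paper's log-scale Poisson time-series setting, with illustrative values only, not the authors' simulation code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
beta, sigma_u = 0.5, 0.8  # true effect and error standard deviation

# Classical error: the observation scatters around the true exposure.
x_true = rng.normal(0.0, 1.0, n)
x_obs_classical = x_true + rng.normal(0.0, sigma_u, n)
y1 = beta * x_true + rng.normal(0.0, 0.2, n)

# Berkson error: the truth scatters around the assigned (e.g. central-site) value.
x_assigned = rng.normal(0.0, 1.0, n)
x_true_b = x_assigned + rng.normal(0.0, sigma_u, n)
y2 = beta * x_true_b + rng.normal(0.0, 0.2, n)

def ols_slope(x, y):
    # Ordinary least-squares slope of y on x.
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

s_classical = ols_slope(x_obs_classical, y1)  # attenuated: ~beta / (1 + sigma_u**2)
s_berkson = ols_slope(x_assigned, y2)         # ~beta: unbiased, but noisier outcome
```

Classical error biases the slope toward the null by the familiar attenuation factor var(true)/(var(true) + sigma_u^2), whereas Berkson error leaves the slope approximately unbiased and instead inflates the residual variance.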

  9. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    NASA Astrophysics Data System (ADS)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high-volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former and reducing the latter is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high-resolution fingerprint from measured data requires extremely dense overlay sampling, which takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for throughput, as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper discusses a hybrid system, shown in Fig. 1, that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and in better on-product overlay performance.

  10. A method for sensitivity analysis to assess the effects of measurement error in multiple exposure variables using external validation data.

    PubMed

    Agogo, George O; van der Voet, Hilko; van 't Veer, Pieter; Ferrari, Pietro; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek C

    2016-10-13

    Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest, such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there are no internal validation data. We proposed a method to adjust for the bias in the diet-disease association (hereafter, association), due to measurement error in dietary intake and a mismeasured confounder, when there are no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruit and vegetable (FV) intake, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Using the proposed method resulted in an approximately four-fold increase in the strength of the association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.

  11. VizieR Online Data Catalog: HST VI Photometry of Six LMC Old Globular Clusters (Olsen+ 1998)

    NASA Astrophysics Data System (ADS)

    Olsen, K. A. G.; Hodge, P. W.; Mateo, M.; Olszewski, E. W.; Schommer, R. A.; Suntzeff, N. B.; Walker, A. R.

    1998-11-01

    The following tables contain the results of photometry performed on Hubble Space Telescope WFPC2 images of the Large Magellanic Cloud globular clusters NGC 1754, 1835, 1898, 1916, 2005, and 2019. The magnitudes reported here were measured from Planetary Camera F555W and F814W images using DoPHOT (Schechter, Mateo, & Saha 1993) and afterwards transformed to Johnson V/Kron-Cousins I using equation 9 of Holtzman et al. (1995PASP..107.1065H). We carried out photometry on both long (1500 sec combined in F555W, 1800 sec in F814W) and short (40 sec combined in F555W, 60 sec in F814W) exposures. Where the short exposure photometry produced smaller errors, we report those magnitudes in place of those measured from the long exposures. For each star, we give an integer identifier, its x and y pixel position as measured in the F555W PC image, its V and I magnitude, the photometric errors reported by DoPHOT, both the V and I DoPHOT object types (multiplied by 10 if the reported magnitude was measured in the short exposure frame), and a flag if the star was removed during our procedure for statistical field star subtraction. Summary of data reduction and assessment of photometric accuracy: Cosmic ray rejection, correction for the y-dependent CTE effect (Holtzman et al. 1995a), geometric distortion correction, and bad pixel flagging were applied to the images before performing photometry. For the photometry, we used version 2.5 of DoPHOT, modified by Eric Deutsch to handle floating-point images. We found that there were insufficient numbers of bright, isolated stars in the PC frames for producing aperture corrections. Aperture corrections as a function of position in the frame were instead derived using WFPC2 point spread functions kindly provided by Peter Stetson. As these artificially generated aperture corrections agree well with ones derived from isolated stars in the WF chips, we trust that they are reliable to better than 0.05 mag. 
In agreement with the report of Whitmore & Heyer (1997), we found an offset in mean magnitudes between the short- and long-exposure photometry. We corrected for this effect by adjusting the short-exposure magnitudes to match, on average, those of the long exposures. Finally, we merged the short- and long-exposure lists of photometry as described above and transformed the magnitudes from the WFPC2 system to Johnson V/Kron-Cousins I, applying the Holtzman et al. (1995PASP..107.1065H) zero points. Statistical field star subtraction was performed using color-magnitude diagrams of the field stars produced from the combined WF frames. Completeness and random and systematic errors in the photometry were extensively modelled through artificial star tests. Crowding causes the completeness to be a strong function of position in the frame, with detection being most difficult near the cluster centers. In addition, we found that crowding introduces systematic errors in the photometry, generally <0.05 mag, that depend on the V-I and V of the star. Fortunately, these errors are well-understood. However, unknown errors in the zero points may persist at the ~0.05 mag level. (5 data files).

  13. Impact of Exposure Uncertainty on the Association between Perfluorooctanoate and Preeclampsia in the C8 Health Project Population.

    PubMed

    Avanasi, Raghavendhran; Shin, Hyeong-Moo; Vieira, Verónica M; Savitz, David A; Bartell, Scott M

    2016-01-01

    Uncertainty in exposure estimates from models can result in exposure measurement error and can potentially affect the validity of epidemiological studies. We recently used a suite of environmental models and an integrated exposure and pharmacokinetic model to estimate individual perfluorooctanoate (PFOA) serum concentrations and assess the association with preeclampsia from 1990 through 2006 for the C8 Health Project participants. The aims of the current study are to evaluate the impact of uncertainty in estimated PFOA drinking-water concentrations on estimated serum concentrations and on their reported epidemiological association with preeclampsia. For each individual public water district, we used Monte Carlo simulations to vary the year-by-year PFOA drinking-water concentrations by randomly sampling from lognormal distributions for random error in the yearly public water district PFOA concentrations, systematic error specific to each water district, and global systematic error in the release assessment (using the estimated concentrations from the original fate and transport model as medians and a range of 2-, 5-, and 10-fold uncertainty). Uncertainty in PFOA water concentrations could cause major changes in estimated serum PFOA concentrations among participants. However, in our simulations there was relatively little impact on the resulting epidemiological association. The contribution of exposure uncertainty to the total uncertainty (including regression parameter variance) ranged from 5% to 31%, and bias was negligible. We found that correlated exposure uncertainty can substantially change estimated PFOA serum concentrations, but results in only minor impacts on the epidemiological association between PFOA and preeclampsia. Avanasi R, Shin HM, Vieira VM, Savitz DA, Bartell SM. 2016. Impact of exposure uncertainty on the association between perfluorooctanoate and preeclampsia in the C8 Health Project population.
Environ Health Perspect 124:126-132; http://dx.doi.org/10.1289/ehp.1409044.
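The Monte Carlo scheme described above can be sketched as follows. This is a hedged illustration: the concentration values are hypothetical, and interpreting "k-fold uncertainty" as a 95% lognormal interval around the modeled median is an assumption, not necessarily the authors' exact definition.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical modeled yearly PFOA water concentrations for one district (ug/L),
# treated as medians of lognormal uncertainty distributions.
modeled = np.array([0.5, 1.2, 3.4, 2.8, 1.9])

# Assumed reading of "k-fold uncertainty": 95% of draws fall in [median/k, median*k].
k = 5.0
sigma = np.log(k) / 1.96

n_sim = 10000
# Independent random error per year, plus one systematic district-level error
# shared across all years within each simulation.
yearly_err = rng.lognormal(0.0, sigma, (n_sim, modeled.size))
district_err = rng.lognormal(0.0, sigma, (n_sim, 1))
simulated = modeled * yearly_err * district_err  # perturbed concentration series
```

Each row of `simulated` is one plausible water-concentration history that could then be propagated through the pharmacokinetic model; the lognormal medians reproduce the original modeled values.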

  14. Flame exposure time on Langmuir probe degradation, ion density, and thermionic emission for flame temperature.

    PubMed

    Doyle, S J; Salvador, P R; Xu, K G

    2017-11-01

    The paper examines the effect of exposure time of Langmuir probes in an atmospheric premixed methane-air flame. The effects of probe size and material composition on current measurements were investigated, with molybdenum and tungsten probe tips ranging in diameter from 0.0508 to 0.1651 mm. Repeated prolonged exposures to the flame, with five runs of 60 s, resulted in gradual probe degradation (-6% to -62% area loss) which affected the measurements. Due to long flame exposures, two ion saturation currents were observed, resulting in significantly different ion densities ranging from 1.16 × 10(16) to 2.71 × 10(19) m(-3). The difference between the saturation currents is caused by thermionic emission from the probe tip. As thermionic emission is temperature dependent, the flame temperature could thus be estimated from the change in current. The flame temperatures calculated from the difference in saturation currents (1734-1887 K) were compared to those from a conventional thermocouple (1580-1908 K). Temperature measurements obtained from tungsten probes placed in rich flames yielded the highest percent error (9.66%-18.70%) due to smaller emission current densities at lower temperatures. The molybdenum probe yielded an accurate temperature value with only 1.29% error. Molybdenum also demonstrated very low probe degradation in comparison to the tungsten probe tips (area reductions of 6% vs. 58%, respectively). The results also show that very little exposure time (<5 s) is needed to obtain a valid ion density measurement, and that prolonged flame exposures can yield the flame temperature but also risk damage to the Langmuir probe tip.
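The temperature inference sketched above rests on the Richardson–Dushman law for thermionic emission. The following is a minimal numeric sketch, not the authors' code: the Richardson constant and the tungsten work function are textbook approximations, and the bracketing interval is an assumption. Given the emission current density inferred from the difference between the two saturation currents, the temperature is recovered by inverting the monotonic emission law:

```python
import math

# Richardson-Dushman thermionic emission: J = A_R * T^2 * exp(-W / (kB * T)).
A_R = 1.2e6        # A m^-2 K^-2, approximate theoretical Richardson constant
KB = 8.617e-5      # Boltzmann constant in eV/K
W = 4.5            # eV, approximate work function of tungsten

def emission_density(T):
    """Thermionic emission current density (A/m^2) at temperature T (K)."""
    return A_R * T**2 * math.exp(-W / (KB * T))

def temperature_from_emission(J, lo=1000.0, hi=3000.0):
    """Invert the emission law by bisection; J is monotonically increasing in T."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if emission_density(mid) < J:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In practice J would be estimated as the saturation-current difference divided by the (degrading) probe surface area, which is why the area-loss measurements above matter for the accuracy of the inferred temperature.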

  15. Functional Logistic Regression Approach to Detecting Gene by Longitudinal Environmental Exposure Interaction in a Case-Control Study

    PubMed Central

    Wei, Peng; Tang, Hongwei; Li, Donghui

    2014-01-01

    Most complex human diseases are likely the consequence of the joint actions of genetic and environmental factors. Identification of gene-environment (GxE) interactions not only contributes to a better understanding of the disease mechanisms, but also improves disease risk prediction and targeted intervention. In contrast to the large number of genetic susceptibility loci discovered by genome-wide association studies, there have been very few successes in identifying GxE interactions which may be partly due to limited statistical power and inaccurately measured exposures. While existing statistical methods only consider interactions between genes and static environmental exposures, many environmental/lifestyle factors, such as air pollution and diet, change over time, and cannot be accurately captured at one measurement time point or by simply categorizing into static exposure categories. There is a dearth of statistical methods for detecting gene by time-varying environmental exposure interactions. Here we propose a powerful functional logistic regression (FLR) approach to model the time-varying effect of longitudinal environmental exposure and its interaction with genetic factors on disease risk. Capitalizing on the powerful functional data analysis framework, our proposed FLR model is capable of accommodating longitudinal exposures measured at irregular time points and contaminated by measurement errors, commonly encountered in observational studies. We use extensive simulations to show that the proposed method can control the Type I error and is more powerful than alternative ad hoc methods. We demonstrate the utility of this new method using data from a case-control study of pancreatic cancer to identify the windows of vulnerability of lifetime body mass index on the risk of pancreatic cancer as well as genes which may modify this association. PMID:25219575

  16. Functional logistic regression approach to detecting gene by longitudinal environmental exposure interaction in a case-control study.

    PubMed

    Wei, Peng; Tang, Hongwei; Li, Donghui

    2014-11-01

    Most complex human diseases are likely the consequence of the joint actions of genetic and environmental factors. Identification of gene-environment (G × E) interactions not only contributes to a better understanding of the disease mechanisms, but also improves disease risk prediction and targeted intervention. In contrast to the large number of genetic susceptibility loci discovered by genome-wide association studies, there have been very few successes in identifying G × E interactions, which may be partly due to limited statistical power and inaccurately measured exposures. Although existing statistical methods only consider interactions between genes and static environmental exposures, many environmental/lifestyle factors, such as air pollution and diet, change over time, and cannot be accurately captured at one measurement time point or by simply categorizing into static exposure categories. There is a dearth of statistical methods for detecting gene by time-varying environmental exposure interactions. Here, we propose a powerful functional logistic regression (FLR) approach to model the time-varying effect of longitudinal environmental exposure and its interaction with genetic factors on disease risk. Capitalizing on the powerful functional data analysis framework, our proposed FLR model is capable of accommodating longitudinal exposures measured at irregular time points and contaminated by measurement errors, commonly encountered in observational studies. We use extensive simulations to show that the proposed method can control the Type I error and is more powerful than alternative ad hoc methods. We demonstrate the utility of this new method using data from a case-control study of pancreatic cancer to identify the windows of vulnerability of lifetime body mass index on the risk of pancreatic cancer as well as genes that may modify this association. © 2014 Wiley Periodicals, Inc.

  17. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies

    PubMed Central

    Zeleke, Berihun M.; Abramson, Michael J.; Benke, Geza

    2018-01-01

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship. PMID:29587425

  18. AQMEII3: the EU and NA regional scale program of the ...

    EPA Pesticide Factsheets

    The presentation builds on the work presented last year at the 14th CMAS meeting and is applied to the work performed in the context of the AQMEII-HTAP collaboration. The analysis is conducted within the framework of the third phase of AQMEII (Air Quality Model Evaluation International Initiative) and encompasses the gauging of model performance through measurement-to-model comparison, error decomposition and time-series analysis of the models' biases. Through the comparison of several regional-scale chemistry transport modelling systems applied to simulate meteorology and air quality over two continental areas, this study aims at (i) apportioning the error to the responsible processes through time-scale analysis, (ii) helping to detect the causes of model error, and (iii) identifying the processes and scales most urgently requiring dedicated investigation. The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while the apportioning of the error into its constituent parts (bias, variance and covariance) can help assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the previous phases of AQMEII. The National Exposure Research Laboratory (NERL) Computational Exposur
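The bias/variance/covariance apportionment mentioned above corresponds to the standard Theil-style decomposition of mean-square error. A minimal generic sketch (not the AQMEII codebase) of that identity:

```python
import numpy as np

def mse_decomposition(model, obs):
    """Split mean-square error into bias^2, variance, and covariance terms:
    MSE = (mean_m - mean_o)^2 + (sd_m - sd_o)^2 + 2*sd_m*sd_o*(1 - r)."""
    bias2 = (model.mean() - obs.mean()) ** 2
    sd_m, sd_o = model.std(), obs.std()
    r = np.corrcoef(model, obs)[0, 1]
    variance = (sd_m - sd_o) ** 2        # mismatch in amplitude of variability
    covariance = 2.0 * sd_m * sd_o * (1.0 - r)  # mismatch in timing/phase
    return bias2, variance, covariance
```

The three components sum exactly to the mean-square error; applying the decomposition separately to band-pass-filtered (long-term, synoptic, diurnal, intra-day) series is what allows the error to be apportioned by timescale.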

  19. Reducing random measurement error in assessing postural load on the back in epidemiologic surveys.

    PubMed

    Burdorf, A

    1995-02-01

    The goal of this study was to design strategies to assess postural load on the back in occupational epidemiology by taking into account the reliability of measurement methods and the variability of exposure among the workers under study. Intermethod reliability studies were evaluated to estimate the systematic bias (accuracy) and random measurement error (precision) of various methods to assess postural load on the back. Intramethod reliability studies were reviewed to estimate random variability of back load over time. Intermethod surveys have shown that questionnaires have a moderate reliability for gross activities such as sitting, whereas duration of trunk flexion and rotation should be assessed by observation methods or inclinometers. Intramethod surveys indicate that exposure variability can markedly affect the reliability of estimates of back load if the estimates are based upon a single measurement over a certain time period. Equations have been presented to evaluate various study designs according to the reliability of the measurement method, the optimum allocation of the number of repeated measurements per subject, and the number of subjects in the study. Prior to a large epidemiologic study, an exposure-oriented survey should be conducted to evaluate the performance of measurement instruments and to estimate sources of variability for back load. The strategy for assessing back load can be optimized by balancing the number of workers under study and the number of repeated measurements per worker.
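The trade-off described above, between the number of workers and the number of repeated measurements per worker, can be made concrete with the standard reliability formula for the mean of k repeats. This is a generic sketch with hypothetical variance components, not the specific equations presented in the paper:

```python
def reliability(var_between, var_within, k):
    """Reliability of a worker's mean exposure over k repeated measurements:
    the fraction of observed variance attributable to true between-worker
    differences rather than day-to-day (within-worker) variability."""
    return var_between / (var_between + var_within / k)

# Hypothetical variance components for, e.g., daily duration of trunk flexion.
vb, vw = 1.0, 3.0
for k in (1, 3, 6):
    print(k, round(reliability(vb, vw, k), 2))
```

With these values a single measurement per worker has reliability 0.25, rising to 0.50 with three repeats and 0.67 with six, which is the kind of calculation used to balance workers against repeats in a study design.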

  20. Microenvironment Tracker (MicroTrac) | Science Inventory ...

    EPA Pesticide Factsheets

    Epidemiologic studies have shown associations between air pollution concentrations measured at central-site ambient monitors and adverse health outcomes. Using central-site concentrations as exposure surrogates, however, can lead to exposure errors due to time spent in various indoor and outdoor microenvironments (ME) with pollutant concentrations that can be substantially different from central-site concentrations. These exposure errors can introduce bias and incorrect confidence intervals in health effect estimates, which diminish the power of such studies to establish correct conclusions about the exposure and health effects association. The significance of this issue was highlighted in the National Research Council (NRC) Report “Research Priorities for Airborne Particulate Matter”, which recommends that EPA address exposure error in health studies. To address this limitation, we developed MicroTrac, an automated classification model that estimates time of day and duration spent in eight ME (indoors and outdoors at home, work, school; inside vehicles; other locations) from personal global positioning system (GPS) data and geocoded boundaries of buildings (e.g., home, work, school). MicroTrac has several innovative design features: (1) using GPS signal quality to account for GPS signal loss inside certain buildings, (2) spatial buffering of building boundaries to account for the spatial inaccuracy of the GPS device, and (3) temporal buffering of GPS positi

  1. A study and simulation of the impact of high-order aberrations to overlay error distribution

    NASA Astrophysics Data System (ADS)

    Sun, G.; Wang, F.; Zhou, C.

    2011-03-01

    With the reduction of design rules, a number of corresponding new technologies, such as i-HOPC, HOWA and DBO, have been proposed and applied to eliminate overlay error. When these technologies are in use, any high-order error distribution needs to be clearly distinguished in order to remove the underlying causes. Lens aberrations are normally thought to mainly impact the Matching Machine Overlay (MMO). However, when using image-based overlay (IBO) measurement tools, aberrations become the dominant influence on single machine overlay (SMO) and even on stage repeatability performance. In this paper, several measurements of the error distributions of the lens of the SMEE SSB600/10 prototype exposure tool are presented. Models that characterize the primary influence from lens magnification, high-order distortion, coma aberration and telecentricity are shown. The contribution to stage repeatability (as measured with IBO tools) from the above errors was predicted with a simulator and compared to experiments. Finally, the drift of each lens distortion that impacts SMO was monitored over several days and matched with the measurement results.

  2. SU-E-J-243: Possibility of Exposure Dose Reduction of Cone-Beam Computed Tomography in An Image Guided Patient Positioning System by Using Various Noise Suppression Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamezawa, H; Fujimoto General Hospital, Miyakonojo, Miyazaki; Arimura, H

    Purpose: To investigate the possibility of exposure dose reduction of cone-beam computed tomography (CBCT) in an image guided patient positioning system by using 6 noise suppression filters. Methods: First, reference-dose (RD) and low-dose (LD) CBCT images (X-ray volume imaging system, Elekta Co.) were acquired with a reference dose of 86.2 mGy (weighted CT dose index: CTDIw) and various low doses of 1.4 to 43.1 mGy, respectively. Second, an automated rigid registration for three axes was performed to estimate setup errors between a planning CT image and the LD-CBCT images, which were processed by 6 noise suppression filters: averaging filter (AF), median filter (MF), Gaussian filter (GF), bilateral filter (BF), edge preserving smoothing filter (EPF) and adaptive partial median filter (AMF). Third, residual errors representing the patient positioning accuracy were calculated as the Euclidean distance between the setup error vectors estimated using the LD-CBCT image and the RD-CBCT image. Finally, the relationships between the residual error and CTDIw were obtained for the 6 noise suppression filters, and the CTDIw values for LD-CBCT images processed by the noise suppression filters were measured at the same residual error as obtained with the RD-CBCT. This approach was applied to an anthropomorphic pelvic phantom and two cancer patients. Results: For the phantom, the exposure dose could be reduced by 61% (GF) to 78% (AMF) by applying the noise suppression filters to the CBCT images. The exposure dose in a prostate cancer case could be reduced by 8% (AF) to 61% (AMF), and in a lung cancer case by 9% (AF) to 37% (AMF). Conclusion: Using noise suppression filters, particularly the adaptive partial median filter, could make it feasible to decrease the additional exposure dose to patients in image guided patient positioning systems.
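
    The residual-error metric in this record is just the Euclidean distance between two 3-axis setup-error vectors; a minimal sketch (the numeric values below are hypothetical, not from the study):

```python
import math

def residual_error(ld_setup, rd_setup):
    """Euclidean distance (mm) between two 3-axis setup-error vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ld_setup, rd_setup)))

# Hypothetical setup errors (mm) along the three translation axes.
rd = (1.2, -0.5, 0.8)   # estimated from the reference-dose CBCT
ld = (1.5, -0.1, 0.6)   # estimated from a filtered low-dose CBCT
err = residual_error(ld, rd)
```

    Plotting `err` against CTDIw for each filter would reproduce the kind of dose-accuracy trade-off the authors describe.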

  3. Characterizing Air Pollution Exposure Misclassification Errors Using Detailed Cell Phone Location Data

    NASA Astrophysics Data System (ADS)

    Yu, H.; Russell, A. G.; Mulholland, J. A.

    2017-12-01

    In air pollution epidemiologic studies with spatially resolved air pollution data, exposures are often estimated using the home locations of individual subjects. Due primarily to lack of data or logistic difficulties, the spatiotemporal mobility of subjects is mostly neglected, which is expected to result in exposure misclassification errors. In this study, we applied detailed cell phone location data to characterize potential exposure misclassification errors associated with home-based exposure estimation of air pollution. The cell phone data sample consists of 9,886 unique simcard IDs collected on one mid-week day in October 2013 from Shenzhen, China. The Community Multi-scale Air Quality model was used to simulate hourly ambient concentrations of six chosen pollutants at 3 km spatial resolution, which were then fused with observational data to correct for potential modeling biases and errors. Air pollution exposure for each simcard ID was estimated by matching hourly pollutant concentrations with detailed location data for corresponding IDs. Finally, the results were compared with exposure estimates obtained using the home location method to assess potential exposure misclassification errors. Our results show that the home-based method is likely to have substantial exposure misclassification errors, over-estimating exposures for subjects with higher exposure levels and under-estimating exposures for those with lower exposure levels. This has the potential to lead to a bias-to-the-null in the health effect estimates. Our findings suggest that the use of cell phone data has the potential for improving the characterization of exposure and exposure misclassification in air pollution epidemiology studies.
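
    The matching step described above amounts to averaging grid-cell concentrations over a subject's hourly locations; a toy sketch with hypothetical grid cells and concentrations (the study itself used bias-corrected CMAQ output):

```python
def avg_exposure(hourly_cells, conc):
    """Mean exposure from matching each hour's location (grid cell) to a concentration."""
    return sum(conc[cell] for cell in hourly_cells) / len(hourly_cells)

# Hypothetical modeled concentrations (ug/m3) per 3 km grid cell.
conc = {"home": 20.0, "transit": 35.0, "work": 50.0}

# One ID's hourly cells from phone data vs. the home-based surrogate.
track = ["home", "home", "transit", "work"]
home_only = ["home"] * len(track)

mobility_est = avg_exposure(track, conc)   # mobility-based estimate
home_est = avg_exposure(home_only, conc)   # home-based estimate
```

    The gap between `mobility_est` and `home_est` is exactly the per-subject misclassification error the record quantifies.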

  4. Skeletal and body composition evaluation

    NASA Technical Reports Server (NTRS)

    Mazess, R. B.

    1983-01-01

    Research on radiation detectors for absorptiometry; analysis of errors affecting single photon absorptiometry and development of instrumentation; analysis of errors affecting dual photon absorptiometry and development of instrumentation; comparison of skeletal measurements with other techniques; cooperation with NASA projects for skeletal evaluation in spaceflight (Experiment MO-78) and in laboratory studies with immobilized animals; studies of postmenopausal osteoporosis; organization of scientific meetings and workshops on absorptiometric measurement; and development of instrumentation for measurement of fluid shifts in the human body were performed. Instrumentation was developed that allows accurate and precise (2% error) measurements of mineral content in compact and trabecular bone and of the total skeleton. Instrumentation was also developed to measure fluid shifts in the extremities. Radiation exposure with these procedures is low (2-10 mrem). One hundred seventy-three technical reports and one hundred four published papers of studies from the University of Wisconsin Bone Mineral Lab are listed.

  5. A simulation study to quantify the impacts of exposure ...

    EPA Pesticide Factsheets

    A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.

  6. The Mine Safety and Health Administration's criterion threshold value policy increases miners' risk of pneumoconiosis.

    PubMed

    Weeks, James L

    2006-06-01

    The Mine Safety and Health Administration (MSHA) proposes to issue citations for non-compliance with the exposure limit for respirable coal mine dust when measured exposure exceeds the exposure limit with a "high degree of confidence." This criterion threshold value (CTV) is derived from the sampling and analytical error of the measurement method. The policy is based on a combination of statistical and legal reasoning: the one-tailed 95% confidence limit of the sampling method, the apparent principle of due process, and a standard of proof analogous to "beyond a reasonable doubt." This policy raises the effective exposure limit; it is contrary to the precautionary principle, it does not fairly share the burden of uncertainty, and it employs an inappropriate standard of proof. MSHA's own advisory committee and NIOSH have advised against this policy. For longwall mining sections, it results in a failure to issue citations for approximately 36% of the measured values that exceed the statutory exposure limit. Citations for non-compliance with the respirable dust standard should be issued for any measured exposure that exceeds the exposure limit.

  7. Magnetic field exposure stiffens regenerating plant protoplast cell walls.

    PubMed

    Haneda, Toshihiko; Fujimura, Yuu; Iino, Masaaki

    2006-02-01

    Single suspension-cultured plant cells (Catharanthus roseus) and their protoplasts were anchored to a glass plate and exposed to a magnetic field of 302 +/- 8 mT for several hours. Compression forces required to produce constant cell deformation were measured parallel to the magnetic field by means of a cantilever-type force sensor. Exposure of intact cells to the magnetic field did not result in any changes within experimental error, while exposure of regenerating protoplasts significantly increased the measured forces and stiffened regenerating protoplasts. The diameters of intact cells or regenerating protoplasts were not changed after exposure to the magnetic field. Measured forces for regenerating protoplasts with and without exposure to the magnetic field increased linearly with incubation time, with these forces being divided into components based on the elasticity of synthesized cell walls and cytoplasm. Cell wall synthesis was also measured using a cell wall-specific fluorescent dye, and no changes were noted after exposure to the magnetic field. Analysis suggested that exposure to the magnetic field roughly tripled the Young's modulus of the newly synthesized cell wall without any lag.

  8. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates

    EPA Science Inventory

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approx...

  9. Clinical Research Methodology 2: Observational Clinical Research.

    PubMed

    Sessler, Daniel I; Imrey, Peter B

    2015-10-01

    Case-control and cohort studies are invaluable research tools and provide the strongest feasible research designs for addressing some questions. Case-control studies usually involve retrospective data collection. Cohort studies can involve retrospective, ambidirectional, or prospective data collection. Observational studies are subject to errors attributable to selection bias, confounding, measurement bias, and reverse causation-in addition to errors of chance. Confounding can be statistically controlled to the extent that potential factors are known and accurately measured, but, in practice, bias and unknown confounders usually remain additional potential sources of error, often of unknown magnitude and clinical impact. Causality-the most clinically useful relation between exposure and outcome-can rarely be definitively determined from observational studies because intentional, controlled manipulations of exposures are not involved. In this article, we review several types of observational clinical research: case series, comparative case-control and cohort studies, and hybrid designs in which case-control analyses are performed on selected members of cohorts. We also discuss the analytic issues that arise when groups to be compared in an observational study, such as patients receiving different therapies, are not comparable in other respects.

  10. Detecting drift bias and exposure errors in solar and photosynthetically active radiation data

    USDA-ARS?s Scientific Manuscript database

    All-black thermopile pyranometers are commonly used to measure solar radiation. Ensuring that the sensors are stable and free of drift is critical to accurately measure small variations in global solar irradiance (K'), which is a potential driver of changes in surface temperature. We demonstrate tha...

  11. The role of the location of personal exposimeters on the human body in their use for assessing exposure to the electromagnetic field in the radiofrequency range 98-2450 MHz and compliance analysis: evaluation by virtual measurements.

    PubMed

    Gryz, Krzysztof; Zradziński, Patryk; Karpowicz, Jolanta

    2015-01-01

    The use of radiofrequency (98-2450 MHz range) personal exposimeters to measure the electric field (E-field) in far-field exposure conditions was modelled numerically using the human body model Gustav and finite integration technique software. Calculations with 256 models of exposure scenarios show that the human body has a significant influence on the results of measurements using a single body-worn exposimeter in various locations near the body (measurement errors from -96% to +133% with respect to the unperturbed E-field value). This matters when an exposure assessment involves exposure limits specified for the strength of the unperturbed E-field. To improve the application of exposimeters in compliance tests, such discrepancies in the results of measurements by a body-worn exposimeter may be compensated for by applying a correction factor to the measurement results or, alternatively, to the exposure limit values. Locating a single exposimeter on the back of the waist or on the front of the chest reduces the range of exposure-assessment uncertainty (covering various exposure conditions). However, the uncertainty of exposure assessments using a single exposimeter still remains significantly higher than that of assessing the unperturbed E-field using spot measurements.

  12. The Role of the Location of Personal Exposimeters on the Human Body in Their Use for Assessing Exposure to the Electromagnetic Field in the Radiofrequency Range 98–2450 MHz and Compliance Analysis: Evaluation by Virtual Measurements

    PubMed Central

    Zradziński, Patryk

    2015-01-01

    The use of radiofrequency (98–2450 MHz range) personal exposimeters to measure the electric field (E-field) in far-field exposure conditions was modelled numerically using the human body model Gustav and finite integration technique software. Calculations with 256 models of exposure scenarios show that the human body has a significant influence on the results of measurements using a single body-worn exposimeter in various locations near the body (measurement errors from −96% to +133% with respect to the unperturbed E-field value). This matters when an exposure assessment involves exposure limits specified for the strength of the unperturbed E-field. To improve the application of exposimeters in compliance tests, such discrepancies in the results of measurements by a body-worn exposimeter may be compensated for by applying a correction factor to the measurement results or, alternatively, to the exposure limit values. Locating a single exposimeter on the back of the waist or on the front of the chest reduces the range of exposure-assessment uncertainty (covering various exposure conditions). However, the uncertainty of exposure assessments using a single exposimeter still remains significantly higher than that of assessing the unperturbed E-field using spot measurements. PMID:25879021

  13. Validating novel air pollution sensors to improve exposure estimates for epidemiological analyses and citizen science.

    PubMed

    Jerrett, Michael; Donaire-Gonzalez, David; Popoola, Olalekan; Jones, Roderic; Cohen, Ronald C; Almanza, Estela; de Nazelle, Audrey; Mead, Iq; Carrasco-Turigas, Glòria; Cole-Hunter, Tom; Triguero-Mas, Margarita; Seto, Edmund; Nieuwenhuijsen, Mark

    2017-10-01

    Low cost, personal air pollution sensors may reduce exposure measurement errors in epidemiological investigations and contribute to citizen science initiatives. Here we assess the validity of a low cost personal air pollution sensor. Study participants were drawn from two ongoing epidemiological projects in Barcelona, Spain. Participants repeatedly wore the pollution sensor, which measured carbon monoxide (CO), nitric oxide (NO), and nitrogen dioxide (NO2). We also compared personal sensor measurements to those from more expensive instruments. Our personal sensors had moderate to high correlations with government monitors with averaging times of 1-h and 30-min epochs (r ~ 0.38-0.8) for NO and CO, but had low to moderate correlations with NO2 (~0.04-0.67). Correlations between the personal sensors and more expensive research instruments were higher than with the government monitors. The sensors were able to detect high and low air pollution levels in agreement with expectations (e.g., high levels on or near busy roadways and lower levels in background residential areas and parks). Our findings suggest that the low cost, personal sensors have potential to reduce exposure measurement error in epidemiological studies and provide valid data for citizen science studies. Copyright © 2017 Elsevier Inc. All rights reserved.
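
    The reported correlations are plain Pearson coefficients computed over averaging epochs; a self-contained sketch (the sensor and monitor series below are made up for illustration):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Made-up 30-min epoch averages (ppm CO) for a personal sensor and a reference monitor.
sensor = [0.8, 1.1, 0.9, 1.4, 1.0, 0.7]
monitor = [0.7, 1.2, 1.0, 1.3, 0.9, 0.8]
r = pearson_r(sensor, monitor)
```

    Recomputing `r` at different epoch lengths (1-h vs. 30-min averages) is how validation studies like this one probe the effect of averaging time.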

  14. Predictive models of poly(ethylene-terephthalate) film degradation under multi-factor accelerated weathering exposures

    PubMed Central

    Ngendahimana, David K.; Fagerholm, Cara L.; Sun, Jiayang; Bruckman, Laura S.

    2017-01-01

    Accelerated weathering exposures were performed on poly(ethylene-terephthalate) (PET) films. Longitudinal multi-level predictive models as a function of PET grades and exposure types were developed for the change in yellowness index (YI) and haze (%). Exposures with similar change in YI were modeled using a linear fixed-effects modeling approach. Due to the complex nature of haze formation, measurement uncertainty, and the differences in the samples’ responses, the change in haze (%) depended on individual samples’ responses and a linear mixed-effects modeling approach was used. When compared to fixed-effects models, the addition of random effects in the haze formation models significantly increased the variance explained. For both modeling approaches, diagnostic plots confirmed independence and homogeneity with normally distributed residual errors. Predictive R2 values for true prediction error and predictive power of the models demonstrated that the models were not subject to over-fitting. These models enable prediction under pre-defined exposure conditions for a given exposure time (or photo-dosage in case of UV light exposure). PET degradation under cyclic exposures combining UV light and condensing humidity is caused by photolytic and hydrolytic mechanisms causing yellowing and haze formation. Quantitative knowledge of these degradation pathways enable cross-correlation of these lab-based exposures with real-world conditions for service life prediction. PMID:28498875

  15. Validity of self-reported solar UVR exposure compared with objectively measured UVR exposure.

    PubMed

    Glanz, Karen; Gies, Peter; O'Riordan, David L; Elliott, Tom; Nehl, Eric; McCarty, Frances; Davis, Erica

    2010-12-01

    Reliance on verbal self-report of solar exposure in skin cancer prevention and epidemiologic studies may be problematic if self-report data are not valid due to systematic errors in recall, social desirability bias, or other reasons. This study examines the validity of self-reports of exposure to ultraviolet radiation (UVR) compared to objectively measured exposure among children and adults in outdoor recreation settings in 4 regions of the United States. Objective UVR exposures of 515 participants were measured using polysulfone film badge UVR dosimeters on 2 days. The same subjects provided self-reported UVR exposure data on surveys and 4-day sun exposure diaries, for comparison to their objectively measured exposure. Dosimeter data showed that lifeguards had the greatest UVR exposure (24.5% of weekday ambient UVR), children the next highest exposures (10.3% ambient weekday UVR), and parents had the lowest (6.6% ambient weekday UVR). Similar patterns were observed in self-report data. Correlations between diary reports and dosimeter findings were fair to good and were highest for lifeguards (r = 0.38-0.57), followed by parents (r = 0.28-0.29) and children (r = 0.18-0.34). Correlations between survey and diary measures were moderate to good for lifeguards (r = 0.20-0.54) and children (r = 0.35-0.53). This is the largest study of its kind to date, and supports the utility of self-report measures of solar UVR exposure. Overall, self-reports of sun exposure produce valid measures of UVR exposure among parents, children, and lifeguards who work outdoors. ©2010 AACR.

  16. Estimated radiation exposure of German commercial airline cabin crew in the years 1960-2003 modeled using dose registry data for 2004-2015.

    PubMed

    Wollschläger, Daniel; Hammer, Gaël Paul; Schafft, Thomas; Dreger, Steffen; Blettner, Maria; Zeeb, Hajo

    2018-05-01

    Exposure to ionizing radiation of cosmic origin is an occupational risk factor in commercial aircrew. In a historic cohort of 26,774 German aircrew, radiation exposure was previously estimated only for cockpit crew using a job-exposure matrix (JEM). Here, a new method for retrospectively estimating cabin crew dose is developed. The German Federal Radiation Registry (SSR) documents individual monthly effective doses for all aircrew. SSR-provided doses on 12,941 aircrew from 2004 to 2015 were used to model cabin crew dose as a function of age, sex, job category, solar activity, and male pilots' dose; the mean annual effective dose was 2.25 mSv (range 0.01-6.39 mSv). In addition to an inverse association with solar activity, exposure followed age- and sex-dependent patterns related to individual career development and life phases. JEM-derived annual cockpit crew doses agreed with SSR-provided doses for 2004 (correlation 0.90, 0.40 mSv root mean squared error), while the estimated average annual effective dose for cabin crew had a prediction error of 0.16 mSv, equaling 7.2% of average annual dose. Past average annual cabin crew dose can be modeled by exploiting systematic external influences as well as individual behavioral determinants of radiation exposure, thereby enabling future dose-response analyses of the full aircrew cohort including measurement error information.

  17. Temporal Variability of Daily Personal Magnetic Field Exposure Metrics in Pregnant Women

    PubMed Central

    Lewis, Ryan C.; Evenson, Kelly R.; Savitz, David A.; Meeker, John D.

    2015-01-01

    Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over seven consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single day metric to accurately classify participants into exposure categories based on a weeklong metric depended on the selected exposure threshold, with sensitivity decreasing with increasing exposure threshold. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than one day of measurement is needed over the window of disease susceptibility to minimize measurement error, but one day may be sufficient for central tendency metrics. PMID:24691007

  18. Performance of GPS-devices for environmental exposure assessment.

    PubMed

    Beekhuizen, Johan; Kromhout, Hans; Huss, Anke; Vermeulen, Roel

    2013-01-01

    Integration of individual time-location patterns with spatially resolved exposure maps enables a more accurate estimation of personal exposures to environmental pollutants than using estimates at fixed locations. Current global positioning system (GPS) devices can be used to track an individual's location. However, information on GPS-performance in environmental exposure assessment is largely missing. We therefore performed two studies. First, a commute study, in which the commute of 12 individuals was tracked twice, testing GPS-performance for five transport modes and two wearing modes. Second, an urban-tracking study, in which one individual was tracked repeatedly through different areas, focused on the effect of building obstruction on GPS-performance. The median error from the true path for walking was 3.7 m, biking 2.9 m, train 4.8 m, bus 4.9 m, and car 3.3 m. Errors were larger in a high-rise commercial area (median error=7.1 m) compared with a low-rise residential area (median error=2.2 m). Thus, GPS-performance largely depends on the transport mode and the urban built environment. Although ~85% of all errors were <10 m, almost 1% of the errors were >50 m. Modern GPS-devices are useful tools for environmental exposure assessment, but large GPS-errors might affect estimates of exposures with high spatial variability.
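
    Summaries like the median path error and the share of fixes within 10 m can be computed directly from per-fix distance errors; a small sketch with hypothetical errors in metres:

```python
def median_error(errors_m):
    """Median of per-fix distance errors (metres) from the true path."""
    s = sorted(errors_m)
    n, mid = len(s), len(s) // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def share_within(errors_m, threshold_m):
    """Fraction of GPS fixes with error below a threshold."""
    return sum(e < threshold_m for e in errors_m) / len(errors_m)

# Hypothetical per-fix errors (m) for one tracked commute, including one outlier.
errors = [2.1, 3.3, 4.8, 2.9, 7.1, 55.0, 3.0, 2.2]
```

    Comparing `median_error` across transport modes, and `share_within(errors, 10.0)` vs. `share_within(errors, 50.0)`, mirrors the summary statistics reported in the record.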

  19. Current nitrogen dioxide exposures among railroad workers.

    PubMed

    Woskie, S R; Hammond, S K; Smith, T J; Schenker, M B

    1989-07-01

    As part of a series of epidemiologic studies of the mortality patterns of railroad workers, various air contaminants were measured to characterize the workers' current exposures to diesel exhaust. Nitrogen dioxide (NO2), which is a constituent of diesel exhaust, was examined as one possible marker of diesel exposure. An adaptation of the Palmes personal passive sampler was used to measure the NO2 exposures of 477 U.S. railroad workers at four railroads. The range of NO2 exposures expressed as the arithmetic average +/- two standard errors for the five career job groups were as follows: signal maintainers, 16-24 parts per billion (ppb); clerks/dispatchers/station agents, 23-43 ppb; engineers/firers, 26-38 ppb; brakers/conductors, 50-74 ppb; and locomotive shop workers, 95-127 ppb. Variations among railroads and across seasons were not significant for most job groups.

  20. Five-year lung function observations and associations with a smoking ban among healthy miners at high altitude (4000 m).

    PubMed

    Vinnikov, Denis; Blanc, Paul D; Brimkulov, Nurlan; Redding-Jones, Rupert

    2013-12-01

    To assess the annual lung function decline associated with the reduction of secondhand smoke exposure in a high-altitude industrial workforce. We performed pulmonary function tests annually among 109 high-altitude gold-mine workers over 5 years of follow-up. The first 3 years included greater likelihood of exposure to secondhand smoke exposure before the initiation of extensive smoking restrictions that came into force in the last 2 years of observation. In repeated measures modeling, taking into account the time elapsed in relation to the smoking ban, there was a 115 ± 9 (standard error) mL per annum decline in lung function before the ban, but a 178 ± 20 (standard error) mL per annum increase afterward (P < 0.001, both slopes). Institution of a workplace smoking ban at high altitude may be beneficial in terms of lung function decline.

  1. The Impact of Gene-Environment Dependence and Misclassification in Genetic Association Studies Incorporating Gene-Environment Interactions

    PubMed Central

    Lindström, Sara; Yen, Yu-Chun; Spiegelman, Donna; Kraft, Peter

    2009-01-01

    The possibility of gene-environment interaction can be exploited to identify genetic variants associated with disease using a joint test of genetic main effect and gene-environment interaction. We consider how exposure misclassification and dependence between the true exposure E and the tested genetic variant G affect this joint test in absolute terms and relative to three other tests: the marginal test (G), the standard test for multiplicative gene-environment interaction (GE), and the case-only test for interaction (GE-CO). All tests can have inflated Type I error rate when E and G are correlated in the underlying population. For the GE and G-GE tests this inflation is only noticeable when the gene-environment dependence is unusually strong; the inflation can be large for the GE-CO test even for modest correlation. The joint G-GE test has greater power than the GE test generally, and greater power than the G test when there is no genetic main effect and the measurement error is small to moderate. The joint G-GE test is an attractive test for assessing genetic association when there is limited knowledge about causal mechanisms a priori, even in the presence of misclassification in environmental exposure measurement and correlation between exposure and genetic variants. PMID:19521099

  2. Systems Issues Pertaining to Holographic Optical Data Storage in Thick Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Oezcan, Meric; Smithey, Daniel T.; Crew, Marshall; Lau, Sonie (Technical Monitor)

    1998-01-01

    The optical data storage capacity and raw bit-error-rate achievable with thick photochromic bacteriorhodopsin (BR) films are investigated for sequential recording and read-out of angularly- and shift-multiplexed digital holograms inside a thick blue-membrane D85N BR film. We address the determination of an exposure schedule that produces equal diffraction efficiencies among each of the multiplexed holograms. This exposure schedule is determined by numerical simulations of the holographic recording process within the BR material, and maximizes the total grating strength. We also experimentally measure the shift selectivity and compare the results to theoretical predictions. Finally, we evaluate the bit-error-rate of a single hologram, and of multiple holograms stored within the film.

  3. Recall bias in melanoma risk factors and measurement error effects: a nested case-control study within the Norwegian Women and Cancer Study.

    PubMed

    Parr, Christine L; Hjartåker, Anette; Laake, Petter; Lund, Eiliv; Veierød, Marit B

    2009-02-01

    Case-control studies of melanoma have the potential for recall bias following extensive public information about the relation between melanoma and ultraviolet radiation. Recall bias has been investigated in few studies and only for some risk factors. A nested case-control study of recall bias was conducted in 2004 within the Norwegian Women and Cancer Study: 208 melanoma cases and 2,080 matched controls were invited. Data were analyzed for 162 cases (response, 78%) and 1,242 controls (response, 77%). Questionnaire responses on several host factors and ultraviolet exposures, collected at enrollment in 1991-1997 and again in 2004, were compared, stratified by case-control status. Shifts in responses were observed among both cases and controls, but a shift in cases was observed only for skin color after chronic sun exposure, and a larger shift in cases was observed for nevi. Weighted kappa was lower for cases than for controls for most age intervals of sunburn, sunbathing vacations, and solarium use. Differences in odds ratio estimates of melanoma based on prospective and retrospective measurements indicate measurement error that is difficult to characterize. The authors conclude that indications of recall bias were found in this sample of Norwegian women, but that the results were inconsistent for the different exposures.

  4. Defect printability of ArF alternative phase-shift mask: a critical comparison of simulation and experiment

    NASA Astrophysics Data System (ADS)

    Ozawa, Ken; Komizo, Tooru; Ohnuma, Hidetoshi

    2002-07-01

    An alternative phase shift mask (alt-PSM) is a promising device for extending optical lithography to finer design rules. There have been few reports, however, on the mask's ability to identify phase defects. We report here an alt-PSM of a single-trench type with undercut for ArF exposure, with programmed phase defects used to evaluate defect printability by measuring aerial images with a Zeiss MSM193 measuring system. The experimental results are simulated using the TEMPEST program. First, a critical comparison of the simulation and the experiment is conducted. The actual measured topographies of quartz defects are used in the simulation. Moreover, a general simulation study on defect printability using an alt-PSM for ArF exposure is conducted. The defect dimensions, which produce critical CD errors, are determined by simulation that takes into account the full 3-dimensional structure of phase defects as well as a simplified structure. The critical dimensions of an isolated bump defect identified by the alt-PSM of a single-trench type with undercut for ArF exposure are 300 nm in bottom dimension and 74 degrees in height (phase) for the real shape, where the depth of wet-etching is 100 nm and the CD error limit is +/- 5 percent.

  5. Defect printability of alternating phase-shift mask: a critical comparison of simulation and experiment

    NASA Astrophysics Data System (ADS)

    Ozawa, Ken; Komizo, Tooru; Kikuchi, Koji; Ohnuma, Hidetoshi; Kawahira, Hiroichi

    2002-07-01

    An alternating phase-shift mask (alt-PSM) is a promising approach to extending optical lithography to finer design rules. There have been few reports, however, on the printability of phase defects in such masks. We report here an alt-PSM of a dual-trench type for KrF exposure, with programmed quartz defects used to evaluate defect printability by measuring aerial images with a Zeiss MSM100 measuring system. The experimental results are simulated using the TEMPEST program. First, a critical comparison of the simulation and the experiment is conducted. The actual measured topographies of quartz defects are used in the simulation. Moreover, a general simulation study on defect printability using an alt-PSM for ArF exposure is conducted. The defect dimensions that produce critical CD errors are determined by simulation that takes into account the full 3-dimensional structure of phase defects as well as a simplified structure. The critical dimensions of an isolated defect identified by the alt-PSM of a single-trench type for ArF exposure are 240 nm in bottom diameter and 50 degrees in height (phase) for the cylindrical shape, and 240 nm in bottom diameter and 90 degrees in height (phase) for the rotating trapezoidal shape, where the CD error limit is ±5%.

  6. Influence of precision of emission characteristic parameters on model prediction error of VOCs/formaldehyde from dry building material.

    PubMed

    Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping

    2013-01-01

    Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. Measurement errors in the emission characteristic parameters of these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this gap by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated: the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations is less than 7% at 23±0.5°C and less than 30% at 23±2°C.
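
    As a rough illustration of why the C0 error dominates, consider a minimal well-mixed single-zone sketch (the room and source parameters below are hypothetical, and the exponential source is a simplified stand-in for the paper's full mass-transfer model):

```python
import math

def indoor_conc(C0, V=30.0, a=1.0, k=0.05, A=10.0, h=0.005, t=24.0):
    """Concentration at time t (h) in a well-mixed room of volume V (m3)
    with air-exchange rate a (1/h), from a source of area A (m2) whose
    emission rate decays exponentially: E(t) = A*h*k*C0*exp(-k*t).
    All parameter values are illustrative, not taken from the paper."""
    E0 = A * h * k * C0  # initial emission rate
    # Solving dC/dt = E(t)/V - a*C with C(0) = 0 gives:
    return E0 / (V * (a - k)) * (math.exp(-k * t) - math.exp(-a * t))

base = indoor_conc(C0=1.0e4)
high = indoor_conc(C0=1.2e4)   # C0 measured 20% too high
# Because the predicted concentration is linear in C0, a +20% error in
# C0 passes through one-to-one to the predicted concentration.
rel_err = high / base - 1.0
```

    In this simplified form the C0 error is never damped, which is consistent with the paper's finding that C0 has the largest influence on prediction error.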

  7. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and developmental exposure window.
    Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner. • The nicotine-induced secondary motoneuron axonal pathfinding errors can occur independent of any muscle fiber alterations. • Nicotine exposure primarily affects dorsal projecting secondary motoneuron axons. • Nicotine-induced primary motoneuron axon pathfinding errors can influence secondary motoneuron axon morphology.

  8. Concerning the Video Drift Method to Measure Double Stars

    NASA Astrophysics Data System (ADS)

    Nugent, Richard L.; Iverson, Ernest W.

    2015-05-01

    Classical methods to measure position angles and separations of double stars rely on just a few measurements, either from visual observations or photographic means. Visual and photographic CCD observations are subject to errors from the following sources: misalignments from the eyepiece/camera/Barlow lens/micrometer/focal reducers; systematic errors from uncorrected optical distortions; aberrations from the telescope system; camera tilt; and magnitude and color effects. Conventional video methods rely on calibration doubles and graphically calculating the east-west direction, plus careful choice of select video frames stacked for measurement. Atmospheric motion, on the order of 0.5-1.5 arcseconds, is one of the larger sources of error in any exposure/measurement method. Ideally, if a data set from a short video can be used to derive position angle and separation, with each data set self-calibrating independent of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.
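
    The self-calibration idea can be sketched as follows (function names are hypothetical and the position-angle convention is simplified for illustration). With tracking off, a star trails west at the sidereal rate of about 15.04 arcsec per second of time times cos(declination), so the trail's pixel length fixes the image scale and its direction fixes the east-west axis:

```python
import math

SIDEREAL_RATE = 15.0411  # arcsec of RA per second of time

def calibrate_from_drift(dx_pix, dy_pix, dt_sec, dec_deg):
    """Self-calibration sketch: over dt_sec with tracking off, a star
    drifts SIDEREAL_RATE*cos(dec) arcsec/s. The trail's pixel length
    gives the image scale; its direction gives the east-west axis."""
    drift_arcsec = SIDEREAL_RATE * math.cos(math.radians(dec_deg)) * dt_sec
    trail_pix = math.hypot(dx_pix, dy_pix)
    scale = drift_arcsec / trail_pix           # arcsec per pixel
    theta = math.atan2(dy_pix, dx_pix)         # camera angle vs. east-west
    return scale, theta

def separation_pa(x1, y1, x2, y2, scale, theta):
    """Separation (arcsec) and position angle (deg, simplified convention)
    of star 2 relative to star 1, using the drift-derived calibration."""
    dx, dy = x2 - x1, y2 - y1
    sep = scale * math.hypot(dx, dy)
    pa = math.degrees(math.atan2(dy, dx) - theta) % 360.0
    return sep, pa
```

    No calibration doubles enter: every clip carries its own scale and orientation, which is what makes the method self-calibrating.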

  9. Evaluation of measurement errors of temperature and relative humidity from HOBO data logger under different conditions of exposure to solar radiation.

    PubMed

    da Cunha, Antonio Ribeiro

    2015-05-01

    This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger under various conditions of exposure to solar radiation, comparing them with those obtained with a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for such measurements. Data were collected over a 6-day period (from 25 March to 1 April, 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (a prototype multi-plate plastic shelter); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with coefficient of determination; index of agreement; maximum absolute error; and mean absolute error. The prototype multi-plate (gill-type) shelter used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.
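
    The statistical indicators used for such device comparisons can be sketched in a few lines (a minimal stdlib implementation of mean absolute error, maximum absolute error, and Willmott's index of agreement; not the study's code):

```python
def comparison_stats(obs, pred):
    """Agreement statistics for a test sensor against a reference series:
    mean absolute error, maximum absolute error, and Willmott's index of
    agreement d (d = 1 means perfect agreement)."""
    n = len(obs)
    mean_obs = sum(obs) / n
    mae = sum(abs(p - o) for o, p in zip(obs, pred)) / n
    max_ae = max(abs(p - o) for o, p in zip(obs, pred))
    sse = sum((p - o) ** 2 for o, p in zip(obs, pred))
    # Potential error term of Willmott's index of agreement
    pot = sum((abs(p - mean_obs) + abs(o - mean_obs)) ** 2
              for o, p in zip(obs, pred))
    d = 1.0 - sse / pot if pot else 1.0
    return {"mae": mae, "max_ae": max_ae, "d": d}
```

    A shelter that reduces radiation interference shows up as a smaller MAE and maximum absolute error and a d closer to 1 against the reference psychrometer.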

  10. Consequences of kriging and land use regression for PM2.5 predictions in epidemiologic analyses: Insights into spatial variability using high-resolution satellite data

    PubMed Central

    Alexeeff, Stacey E.; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A.

    2016-01-01

    Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km × 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R² yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with greater than 0.9 out-of-sample R² yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the standard errors. Land use regression models performed better in chronic effects simulations. These results can help researchers when interpreting health effect estimates in these types of studies. PMID:24896768
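
    The attenuation mechanism behind such biases can be illustrated with a toy simulation (classical additive exposure error rather than a full kriging/LUR surface; all parameter values are illustrative):

```python
import random
import statistics

random.seed(1)
n, beta = 5000, 0.10

# True exposure and a health outcome generated from it
x_true = [random.gauss(10, 2) for _ in range(n)]
y = [beta * x + random.gauss(0, 1) for x in x_true]

# A predicted exposure with classical-type error, standing in for an
# imperfect spatial prediction model
x_pred = [x + random.gauss(0, 2) for x in x_true]

def ols_slope(xs, ys):
    """Ordinary least squares slope of ys on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

# Classical error attenuates the health-effect estimate toward zero by
# roughly var(x)/(var(x)+var(noise)) = 4/(4+4) = 0.5 in this setup.
b_hat = ols_slope(x_pred, y)
```

    Regressing on the error-laden exposure also makes the naive standard error misleading, which mirrors the standard-error underestimation the abstract reports.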

  11. Using cell phone location to assess misclassification errors in air pollution exposure estimation.

    PubMed

    Yu, Haofei; Russell, Armistead; Mulholland, James; Huang, Zhijiong

    2018-02-01

    Air pollution epidemiologic and health impact studies often rely on home addresses to estimate individual subjects' pollution exposure. In this study, we used detailed cell phone location data, the call detail record (CDR), to account for the impact of spatiotemporal subject mobility on estimates of ambient air pollutant exposure. This approach was applied to a sample of 9886 unique SIM card IDs in Shenzhen, China, on one mid-week day in October 2013. Hourly ambient concentrations of six chosen pollutants were simulated by the Community Multi-scale Air Quality model fused with observational data, and matched with detailed location data for these IDs. The results were compared with exposure estimates using home addresses to assess potential exposure misclassification errors. We found that misclassification errors are likely to be substantial when home location alone is applied. The CDR-based approach indicates that the home-based approach tends to over-estimate exposures for subjects with higher exposure levels and under-estimate exposures for those with lower exposure levels. Our results show that the cell phone location based approach can be used to assess exposure misclassification error and has the potential to improve exposure estimates in air pollution epidemiology studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
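
    A mobility-weighted exposure of the kind enabled by location data can be sketched as a time-weighted average (zone names and concentration values below are hypothetical):

```python
def time_weighted_exposure(visits, conc_by_zone):
    """Exposure as a time-weighted average of ambient concentration over
    the zones a person visits. `visits` is a list of (zone, hours); a
    home-based estimate is the special case of all hours in the home zone."""
    total_hours = sum(h for _, h in visits)
    return sum(conc_by_zone[z] * h for z, h in visits) / total_hours

# Hypothetical ambient concentrations (ug/m3) by zone
conc = {"home": 35.0, "work": 60.0, "commute": 80.0}

mobile = time_weighted_exposure(
    [("home", 14), ("work", 8), ("commute", 2)], conc)
home_only = time_weighted_exposure([("home", 24)], conc)

# Here the home-based estimate misses the higher work and commute
# exposure, so it understates this subject's exposure.
bias = home_only - mobile
```

    Aggregating this difference over many subjects is, in miniature, the misclassification assessment the study performs with CDR data.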

  12. An interlaboratory comparison of dosimetry for a multi-institutional radiobiological research project: Observations, problems, solutions and lessons learned.

    PubMed

    Seed, Thomas M; Xiao, Shiyun; Manley, Nancy; Nikolich-Zugich, Janko; Pugh, Jason; Van den Brink, Marcel; Hirabayashi, Yoko; Yasutomo, Koji; Iwama, Atsushi; Koyasu, Shigeo; Shterev, Ivo; Sempowski, Gregory; Macchiarini, Francesca; Nakachi, Kei; Kunugi, Keith C; Hammer, Clifford G; Dewerd, Lawrence A

    2016-01-01

    An interlaboratory comparison of radiation dosimetry was conducted to determine the accuracy of doses being used experimentally for animal exposures within a large multi-institutional research project. The background and approach to this effort are described and discussed in terms of basic findings, problems and solutions. Dosimetry tests were carried out utilizing optically stimulated luminescence (OSL) dosimeters embedded midline into mouse carcasses and thermoluminescence dosimeters (TLDs) embedded midline into acrylic phantoms. The effort demonstrated that the majority (4/7) of the laboratories were able to deliver sufficiently accurate exposures, with maximum dosing errors of ≤5%. Comparable rates of 'dosimetric compliance' were noted between OSL- and TLD-based tests. Data analysis showed a highly linear relationship between 'measured' and 'target' doses, with errors falling largely between 0 and 20%. Outliers were most notable for OSL-based tests, while multiple tests by 'non-compliant' laboratories using orthovoltage X-rays contributed heavily to the wide variation in dosing errors. For the dosimetrically non-compliant laboratories, the relatively high rates of dosing errors were problematic, potentially compromising the quality of ongoing radiobiological research. This dosimetry effort proved to be instructive in establishing rigorous reviews of basic dosimetry protocols, ensuring that dosing errors were minimized.
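
    The ≤5% compliance criterion reduces to a simple relative-error check (a minimal sketch, not the study's dosimetry protocol):

```python
def dosing_error_pct(measured, target):
    """Percent dosing error of the delivered dose relative to the target."""
    return 100.0 * (measured - target) / target

def compliant(measured, target, tol_pct=5.0):
    """Dosimetric compliance as described: delivered dose within
    +/- tol_pct (default 5%) of the intended dose."""
    return abs(dosing_error_pct(measured, target)) <= tol_pct
```

    Applying such a check per test, and per laboratory, is how a 4/7 compliance rate like the one reported would be tallied.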

  13. Advances in Understanding Air Pollution and Cardiovascular Diseases: The Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air)

    PubMed Central

    Kaufman, Joel D.; Spalt, Elizabeth W.; Curl, Cynthia L.; Hajat, Anjum; Jones, Miranda R.; Kim, Sun-Young; Vedal, Sverre; Szpiro, Adam A.; Gassett, Amanda; Sheppard, Lianne; Daviglus, Martha L.; Adar, Sara D.

    2016-01-01

    The Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) leveraged the platform of the MESA cohort into a prospective longitudinal study of relationships between air pollution and cardiovascular health. MESA Air researchers developed fine-scale, state-of-the-art air pollution exposure models for the MESA Air communities, creating individual exposure estimates for each participant. These models combine cohort-specific exposure monitoring, existing monitoring systems, and an extensive database of geographic and meteorological information. Together with extensive phenotyping in MESA, and by adding participants and health measurements to the cohort, MESA Air investigated the effects of environmental exposures on a wide range of outcomes. Advances by the MESA Air team included not only a new approach to exposure modeling but also biostatistical advances in addressing exposure measurement error and temporal confounding. The MESA Air study advanced our understanding of the impact of air pollutants on cardiovascular disease and provided a research platform for advances in environmental epidemiology. PMID:27741981

  14. Adaptive Responses in Eye-Head-Hand Coordination Following Exposures to a Virtual Environment as a Possible Space Flight Analog

    NASA Technical Reports Server (NTRS)

    Harm, Deborah L.; Taylor, L. C.; Bloomberg, J. J.

    2007-01-01

    Virtual environments (VE) offer unique training opportunities, particularly for training astronauts and preadapting them to the novel sensory conditions of microgravity. Sensorimotor aftereffects of VEs are often quite similar to adaptive sensorimotor responses observed in astronauts during and/or following space flight. The purpose of this research was to compare disturbances in sensorimotor coordination produced by a dome virtual environment display and to examine the effects of exposure duration and of repeated exposures to VR systems. The current study examined disturbances in eye-head-hand (EHH) and eye-head coordination. Preliminary results will be presented. Eleven subjects have participated in the study to date. One training session was completed in order to achieve stable performance on the EHH coordination and VE tasks. Three experimental sessions were performed, each separated by one day. Subjects performed a navigation and pick-and-place task in a dome immersive-display VE for 30 or 60 min. The subjects were asked to move objects from one set of 15 pedestals to the other set across a virtual square room through a random pathway as quickly and accurately as possible. EHH coordination was measured before, immediately after, and at 1 hr, 2 hr, 4 hr and 6 hr following exposure to VR. EHH coordination was measured as position errors and reaction time in a pointing task that included multiple horizontal and vertical LED targets. Repeated measures ANOVAs were used to analyze the data. In general, we observed significant increases in position errors for both horizontal and vertical targets. The largest decrements were observed immediately following exposure to VR and showed a fairly rapid recovery across test sessions, but not across days. Subjects generally showed faster RTs across days. Individuals recovered from the detrimental effects of exposure to the VE on position errors within 1-2 hours.
The fact that subjects did not significantly improve across days suggests that achieving dual adaptation of EHH coordination may require more than three training sessions. These findings provide some direction for developing training schedules for VE users that facilitate adaptation, support the idea that preflight training of astronauts may serve as a useful countermeasure for the sensorimotor effects of space flight, and support the idea that VEs may serve as an analog for the sensorimotor effects of spaceflight.

  15. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    PubMed

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification, particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
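
    The replicate-as-instrument idea behind such estimators can be sketched in a cross-sectional toy example (the paper's longitudinal latent-variable setting is considerably richer; all values here are simulated and illustrative):

```python
import random
import statistics

random.seed(2)
n, beta = 4000, 0.5

x  = [random.gauss(0, 1) for _ in range(n)]        # true (latent) exposure
w1 = [xi + random.gauss(0, 1) for xi in x]         # error-prone measurement
w2 = [xi + random.gauss(0, 1) for xi in x]         # independent replicate
y  = [beta * xi + random.gauss(0, 1) for xi in x]  # health outcome

def cov(u, v):
    """Sample covariance (population normalization)."""
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

# Naive regression on w1 is attenuated: ~beta * 1/(1+1) = 0.25 here.
naive = cov(w1, y) / cov(w1, w1)

# Instrumental variable estimator using the replicate w2 as the
# instrument for w1: cov(w2,y)/cov(w2,w1) ~ beta, since the two
# measurement errors are independent.
iv = cov(w2, y) / cov(w2, w1)
```

    The IV estimate recovers the true coefficient without modeling the measurement-error variance, which is the robustness property the abstract emphasizes.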

  16. English speech sound development in preschool-aged children from bilingual English-Spanish environments.

    PubMed

    Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D

    2008-07-01

    English speech acquisition by typically developing 3- to 4-year-old children from monolingual English backgrounds was compared to English speech acquisition by typically developing 3- to 4-year-old children from bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time in phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the ones that decreased were not always the same across language groups. Some group differences in error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points was similar for all 3 groups, suggesting that all will reach an adult-like system in English with exposure and practice.

  17. Semiparametric Bayesian analysis of gene-environment interactions with error in measurement of environmental covariates and missing genetic data.

    PubMed

    Lobach, Iryna; Mallick, Bani; Carroll, Raymond J

    2011-01-01

    Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of the data and leading to loss of power and spurious or masked associations. We develop a Bayesian methodology for the analysis of case-control studies in which measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function; therefore, conventional Bayesian techniques may not be technically correct. We propose an approach using Markov chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produces parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.

  18. Real cell overlay measurement through design based metrology

    NASA Astrophysics Data System (ADS)

    Yoo, Gyun; Kim, Jungchan; Park, Chanha; Lee, Taehyeong; Ji, Sunkeun; Jo, Gyoyeon; Yang, Hyunjo; Yim, Donggyu; Yamamoto, Masahiro; Maruyama, Kotaro; Park, Byungjun

    2014-04-01

    Until recent device nodes, lithography has struggled to keep improving its resolution limit. Even though next generation lithography technology is now facing various difficulties, several innovative resolution enhancement technologies, based on the 193nm wavelength, were introduced and implemented to keep the trend of device scaling. Scanner makers keep developing state-of-the-art exposure systems which guarantee higher productivity and meet ever more aggressive overlay specifications. "The scaling reduction of the overlay error has been a simple matter of the capability of exposure tools. However, it is clear that the scanner contributions may no longer be the majority component in total overlay performance. The ability to control correctable overlay components is paramount to achieve the desired performance.(2)" In a manufacturing fab, the overlay error determined by conventional overlay measurement, using an overlay mark based on IBO or DBO, often does not represent the physical placement error in the cell area of a memory device. The mismatch may arise from the size or pitch difference between the overlay mark and the cell pattern. Pattern distortion, caused by etching or CMP, can also be a source of the mismatch. Therefore, the requirement for direct overlay measurement on the cell pattern gradually increases in the manufacturing field, and also at the development level. In order to overcome the mismatch between conventional overlay measurement and the real placement error of layer to layer in the cell area of a memory device, we suggest an alternative overlay measurement method utilizing a design-based metrology tool. A basic concept of this method is shown in figure 1. A CD-SEM measurement of the overlay error between layers 1 and 2 could be the ideal method, but it takes too long to extract a large amount of data at wafer level. An E-beam based DBM tool provides the high speed needed to cover the whole wafer with high repeatability.
It is enabled by using the design as a reference for overlay measurement and a high-speed scan system. In this paper, we have demonstrated that direct overlay measurement in the cell area can detect the mismatch exactly, instead of using an overlay mark. This experiment was carried out for several critical layers in DRAM and Flash memory, using the DBM (Design Based Metrology) tool NGR2170™.

  19. Toward refined estimates of ambient PM2.5 exposure: Evaluation of a physical outdoor-to-indoor transport model

    NASA Astrophysics Data System (ADS)

    Hodas, Natasha; Meng, Qingyu; Lunden, Melissa M.; Turpin, Barbara J.

    2014-02-01

    Because people spend the majority of their time indoors, the variable efficiency with which ambient PM2.5 penetrates and persists indoors is a source of error in epidemiologic studies that use PM2.5 concentrations measured at central-site monitors as surrogates for ambient PM2.5 exposure. To reduce this error, practical methods to model indoor concentrations of ambient PM2.5 are needed. Toward this goal, we evaluated and refined an outdoor-to-indoor transport model using measured indoor and outdoor PM2.5 species concentrations and air exchange rates from the Relationships of Indoor, Outdoor, and Personal Air Study. Herein, we present model evaluation results, discuss what data are most critical to prediction of residential exposures at the individual-subject and population levels, and make recommendations for the application of the model in epidemiologic studies. This paper demonstrates that not accounting for certain human activities (air conditioning and heating use, opening windows) leads to bias in predicted residential PM2.5 exposures at the individual-subject level, but not the population level. The analyses presented also provide quantitative evidence that shifts in the gas-particle partitioning of ambient organics with outdoor-to-indoor transport contribute significantly to variability in indoor ambient organic carbon concentrations and suggest that methods to account for these shifts will further improve the accuracy of outdoor-to-indoor transport models.
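
    The outdoor-to-indoor transport of ambient PM2.5 can be sketched with the standard steady-state infiltration factor for a well-mixed single zone (parameter values below are illustrative; the model evaluated in the paper accounts for considerably more, including human activities and gas-particle partitioning):

```python
def infiltration_factor(P, a, k):
    """Steady-state fraction of ambient PM2.5 found indoors for a
    well-mixed single zone: penetration efficiency P (unitless),
    air-exchange rate a (1/h), and particle deposition rate k (1/h).
    F_inf = P*a/(a + k); indoor ambient PM2.5 = F_inf * outdoor PM2.5."""
    return P * a / (a + k)

# Closed-up home, low air exchange: less ambient PM2.5 gets indoors.
closed = infiltration_factor(P=0.8, a=0.5, k=0.2)
# Open windows raise both P and a, pushing F_inf toward 1.
opened = infiltration_factor(P=1.0, a=2.0, k=0.2)
```

    Variation in F_inf across homes and seasons is exactly the exposure-error source the paper targets when central-site concentrations are used as exposure surrogates.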

  20. Air pollution exposure prediction approaches used in air pollution epidemiology studies.

    PubMed

    Özkaynak, Halûk; Baxter, Lisa K; Dionisio, Kathie L; Burke, Janet

    2013-01-01

    Epidemiological studies of the health effects of outdoor air pollution have traditionally relied upon surrogates of personal exposures, most commonly ambient concentration measurements from central-site monitors. However, this approach may introduce exposure prediction errors and misclassification of exposures for pollutants that are spatially heterogeneous, such as those associated with traffic emissions (e.g., carbon monoxide, elemental carbon, nitrogen oxides, and particulate matter). We review alternative air quality and human exposure metrics applied in recent air pollution health effect studies discussed during the International Society of Exposure Science 2011 conference in Baltimore, MD. Symposium presenters considered various alternative exposure metrics, including: central site or interpolated monitoring data, regional pollution levels predicted using the national scale Community Multiscale Air Quality model or from measurements combined with local-scale (AERMOD) air quality models, hybrid models that include satellite data, statistically blended modeling and measurement data, concentrations adjusted by home infiltration rates, and population-based human exposure model (Stochastic Human Exposure and Dose Simulation, and Air Pollutants Exposure models) predictions. These alternative exposure metrics were applied in epidemiological applications to health outcomes, including daily mortality and respiratory hospital admissions, daily hospital emergency department visits, daily myocardial infarctions, and daily adverse birth outcomes. This paper summarizes the research projects presented during the symposium, with full details of the work presented in individual papers in this journal issue.

  1. Influence of Precision of Emission Characteristic Parameters on Model Prediction Error of VOCs/Formaldehyde from Dry Building Material

    PubMed Central

    Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping

    2013-01-01

    Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. Measurement errors in the emission characteristic parameters of these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this gap by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated: the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations is less than 7% at 23±0.5°C and less than 30% at 23±2°C. PMID:24312497

  2. ERROR IN ANNUAL AVERAGE DUE TO USE OF LESS THAN EVERYDAY MEASUREMENTS

    EPA Science Inventory

    Long-term averages of the concentration of PM mass and components are of interest for determining compliance with annual averages, for developing exposure surrogates for cross-sectional epidemiologic studies of the long-term effects of PM, and for determination of aerosol sources by chem...

  3. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    PubMed Central

    Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.

    2015-01-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMN). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 µM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors can occur independently of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, with the dorsal-projecting SMN axons primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal-projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early-born primary motoneurons (PMN), we performed dual-labeling studies in which PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the level of nicotine and the developmental window of exposure. PMID:25668718

  4. Evaluation of the Effect of Noise on the Rate of Errors and Speed of Work by the Ergonomic Test of Two-Hand Co-Ordination

    PubMed Central

    Habibi, Ehsanollah; Dehghan, Habibollah; Dehkordy, Sina Eshraghy; Maracy, Mohammad Reza

    2013-01-01

    Background: Among the most important factors affecting the efficiency of the human workforce are accuracy, promptness, and ability. In the context of promoting the level and quality of productivity, the aim of this study was to investigate the effects of exposure to noise on the rate of errors, speed of work, and capability in performing manual activities. Methods: This experimental study was conducted on 96 students (52 female and 44 male) of the Isfahan Medical Science University, with means (standard deviations) of age, height, and weight of 22.81 (3.04) years, 171.67 (8.51) cm, and 65.05 (13.13) kg, respectively. Sampling was conducted with a randomized block design. Along with controlling for intervening factors, a combination of A-weighted sound pressure levels [65 dB(A), 85 dB(A), and 95 dB(A)] and exposure times (0, 20, and 40 min) was used for evaluation of the precision and speed of action of the participants in the ergonomic test of two-hand coordination. Data were analyzed in SPSS 18 using descriptive statistics and repeated-measures analysis of covariance (ANCOVA). Results: Increasing the sound pressure level from 65 to 95 dB(A) increased the speed of work (P < 0.05). Exposure time (0 to 40 min) and gender showed no statistically significant differences in speed of work (P > 0.05). Male participants reported greater annoyance from the noise than female participants. Increasing the sound pressure level also increased the rate of errors (P < 0.05). Conclusions: According to the results of this research, increasing the sound pressure level decreased efficiency and increased errors; during exposure to sounds below 85 dB(A), efficiency decreased initially and then increased with a mild slope. PMID:23930164

  5. Biomarkers of exposure to new and emerging tobacco delivery products.

    PubMed

    Schick, Suzaynn F; Blount, Benjamin C; Jacob, Peyton; Saliba, Najat A; Bernert, John T; El Hellani, Ahmad; Jatlow, Peter; Pappas, R Steven; Wang, Lanqing; Foulds, Jonathan; Ghosh, Arunava; Hecht, Stephen S; Gomez, John C; Martin, Jessica R; Mesaros, Clementina; Srivastava, Sanjay; St Helen, Gideon; Tarran, Robert; Lorkiewicz, Pawel K; Blair, Ian A; Kimmel, Heather L; Doerschuk, Claire M; Benowitz, Neal L; Bhatnagar, Aruni

    2017-09-01

    Accurate and reliable measurements of exposure to tobacco products are essential for identifying and confirming patterns of tobacco product use and for assessing their potential biological effects in both human populations and experimental systems. Due to the introduction of new tobacco-derived products and the development of novel ways to modify and use conventional tobacco products, precise and specific assessments of exposure to tobacco are now more important than ever. Biomarkers that were developed and validated to measure exposure to cigarettes are being evaluated to assess their use for measuring exposure to these new products. Here, we review current methods for measuring exposure to new and emerging tobacco products, such as electronic cigarettes, little cigars, water pipes, and cigarillos. Rigorously validated biomarkers specific to these new products have not yet been identified. Here, we discuss the strengths and limitations of current approaches, including whether they provide reliable exposure estimates for new and emerging products. We provide specific guidance for choosing practical and economical biomarkers for different study designs and experimental conditions. Our goal is to help both new and experienced investigators measure exposure to tobacco products accurately and avoid common experimental errors. With the identification of the capacity gaps in biomarker research on new and emerging tobacco products, we hope to provide researchers, policymakers, and funding agencies with a clear action plan for conducting and promoting research on the patterns of use and health effects of these products.

  6. Statistical perturbations in personal exposure meters caused by the human body in dynamic outdoor environments.

    PubMed

    Rodríguez, Begoña; Blas, Juan; Lorenzo, Rubén M; Fernández, Patricia; Abril, Evaristo J

    2011-04-01

    Personal exposure meters (PEM) are routinely used to assess exposure to radio-frequency electric and magnetic fields. However, their readings are subject to errors associated with perturbations of the fields caused by the presence of the human body. This paper presents a novel analysis method for the characterization of this effect. Using ray-tracing techniques, PEM measurements were emulated with and without an approximation of this shadowing effect. The Global System for Mobile Communications mobile phone frequency band was chosen for its ubiquity, and we considered the case of a subject walking outdoors in a relatively open area. These simulations were compared with real PEM measurements taken during a 35-min walk. Results show good agreement in terms of root mean square error and the E-field cumulative distribution function (CDF), with a significant improvement when the shadowing effect is taken into account. In particular, the Kolmogorov-Smirnov (KS) test gives a P-value of 0.05 when the shadowing effect is considered, versus a P-value of 10⁻¹⁴ when it is ignored. In addition, although the E-field levels in the absence of a human body follow a Nakagami distribution, a lognormal distribution fits the statistics of the PEM values better. In conclusion, although the mean could be adjusted using correction factors, the shadowing effect also produces other changes in the CDF that require particular attention because they might lead to a systematic error. Copyright © 2010 Wiley-Liss, Inc.
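The distribution comparison reported here (lognormal versus Nakagami, judged with a KS test) can be reproduced in outline with scipy; the simulated E-field readings below are hypothetical stand-ins for real PEM data, generated from a lognormal so that the expected ordering of fits is known:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical personal-exposure-meter readings (V/m).  The abstract reports
# that measured PEM values are better described by a lognormal than by a
# Nakagami distribution, so we simulate lognormal data as a stand-in.
efield = rng.lognormal(mean=-3.0, sigma=0.8, size=500)

# Fit both candidate distributions (location pinned at zero) and compare
# goodness of fit with one-sample KS tests: a larger p-value means the
# fitted distribution is more consistent with the data.
ln_params = stats.lognorm.fit(efield, floc=0)
nak_params = stats.nakagami.fit(efield, floc=0)

p_lognorm = stats.kstest(efield, "lognorm", args=ln_params).pvalue
p_nakagami = stats.kstest(efield, "nakagami", args=nak_params).pvalue
print(f"KS p-value: lognormal {p_lognorm:.3g}, Nakagami {p_nakagami:.3g}")
```

Note that using a KS test after fitting parameters to the same data makes the p-values optimistic; the paper's use of it as a relative comparison between candidate distributions is the safer reading.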

  7. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment.

    PubMed

    O'Brien, Katie M; Upson, Kristen; Cook, Nancy R; Weinberg, Clarice R

    2016-02-01

    Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. We compared adjustment methods, including novel approaches, using simulated case-control data. Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and achieved 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid adjustment generally performed well. To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend these improved methods for standardizing exposure concentrations across individuals.
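The hybrid idea for urine (covariate-adjusted standardization plus creatinine as a regression covariate) can be sketched on simulated data. The data-generating model below is a loose, assumed version of one confounding scenario, not the authors' exact simulation; all coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Assumed data-generating process: urine dilution scales both the measured
# biomarker and creatinine, and age influences creatinine and the outcome
# (a confounding structure in the spirit of the paper's DAG scenarios).
age = rng.normal(50.0, 10.0, n)
log_dilution = rng.normal(0.0, 0.4, n)
log_true = rng.normal(0.0, 0.5, n)              # true (log) exposure
log_creat = log_dilution + 0.01 * (age - 50.0) + rng.normal(0.0, 0.1, n)
log_measured = log_true + log_dilution          # dilution-dependent measurement
y = 0.5 * log_true + 0.02 * age + rng.normal(0.0, 1.0, n)  # true effect 0.5

def ols(y, *cols):
    """Least-squares fit; returns [intercept, slope_1, slope_2, ...]."""
    X = np.column_stack((np.ones(len(y)),) + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive analysis: regress the outcome on the unstandardized measurement.
b_naive = ols(y, log_measured, age)[1]

# Covariate-adjusted standardization: remove the covariate-predicted part of
# creatinine, divide the biomarker by the remaining (dilution) part, and ALSO
# keep creatinine as a covariate in the outcome model (the hybrid approach).
a0, a1 = ols(log_creat, age)
log_creat_hat = a0 + a1 * age
log_std = log_measured - (log_creat - log_creat_hat)
b_std = ols(y, log_std, log_creat, age)[1]

print(f"true 0.50 | naive {b_naive:.2f} | standardized {b_std:.2f}")
```

In this setup the dilution noise attenuates the naive coefficient toward zero, while the standardized analysis recovers the true effect much more closely.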

  8. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    PubMed

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  9. Instrumentation for measuring dynamic spinal load moment exposures in the workplace.

    PubMed

    Marras, William S; Lavender, Steven A; Ferguson, Sue A; Splittstoesser, Riley E; Yang, Gang; Schabo, Pete

    2010-02-01

    Prior research has shown load moment exposure to be one of the strongest predictors of low back disorder risk in manufacturing jobs. However, extending these findings to the manual lifting and handling of materials in distribution centers, where the layout of the lifting task changes from one lift to the next and the lifts are highly dynamic, would be very challenging without an automated means of quantifying reach distances and item weights. The purpose of this paper is to describe the development and validation of automated instrumentation, the Moment Exposure Tracking System (METS), designed to capture the dynamic load moment exposures and spine postures involved in distribution center jobs. This multiphase process started by obtaining baseline data describing the accuracy of existing manual methods for obtaining moment arms during the observation of dynamic lifting, for the purpose of benchmarking the automated system. The process continued with the development and calibration of an ultrasonic system to track hand location and of load-sensing handles that could be used to assess item weights. The final version of the system yielded an average absolute error in the load's moment arm of 4.1 cm under conditions of trunk flexion and load asymmetry. This compares well with the average absolute error of 10.9 cm obtained using manual methods of measuring moment arms. With item mass estimates accurate to within half a kilogram, the instrumentation provides a reliable and valid means of assessing dynamic load moment exposures in distribution center lifting tasks.

  10. Examining the Error of Mis-Specifying Nonlinear Confounding Effect With Application on Accelerometer-Measured Physical Activity.

    PubMed

    Lee, Paul H

    2017-06-01

    Some confounders are nonlinearly associated with dependent variables, but they are often adjusted for using a linear term. The purpose of this study was to examine the error of mis-specifying a nonlinear confounding effect. We carried out a simulation study to investigate the effect of adjusting for a nonlinear confounder in the estimation of a causal relationship between the exposure and outcome in 3 ways: using a linear term, binning into 5 equal-size categories, or using a restricted cubic spline of the confounder. Continuous, binary, and survival outcomes were simulated. We examined confounders across a range of measurement error. In addition, we performed a real data analysis examining the 3 strategies for handling the nonlinear effects of accelerometer-measured physical activity in the National Health and Nutrition Examination Survey 2003-2006 data. The mis-specification of a nonlinear confounder had little impact on causal effect estimation for continuous outcomes. For binary and survival outcomes, this mis-specification introduced bias, which could be eliminated using spline adjustment only when the measurement error of the confounder was small. The real data analysis showed that the associations of high blood pressure, high cholesterol, and diabetes with mortality adjusted for physical activity with a restricted cubic spline were about 3% to 11% larger than their counterparts adjusted with a linear term. For continuous outcomes, confounders with nonlinear effects can be adjusted for with a linear term. Spline adjustment should be used for binary and survival outcomes when confounders have small measurement error.
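The three adjustment strategies compared in the study (linear term, five equal-size bins, spline) can be illustrated on a deliberately adversarial simulated example. The spline here is a plain truncated-power cubic basis rather than the restricted cubic spline used in the paper, and the data-generating model is an assumption chosen so that the linear term must fail:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Adversarial toy setup (all assumptions): the confounder z has a purely
# quadratic effect on the outcome, and the exposure x is correlated with
# that nonlinear part, so a linear-in-z adjustment leaves residual confounding.
z = rng.uniform(-2.0, 2.0, n)
x = z**2 + rng.normal(0.0, 1.0, n)
y = 1.0 * x + 2.0 * z**2 + rng.normal(0.0, 1.0, n)   # true effect of x is 1.0

def fit_x_coef(*adjusters):
    """OLS of y on x plus confounder-adjustment columns; returns x's slope."""
    X = np.column_stack((np.ones(n), x) + adjusters)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# 1) linear term for the confounder
b_linear = fit_x_coef(z)

# 2) five equal-size (quintile) bins, entered as indicator columns
edges = np.quantile(z, [0.2, 0.4, 0.6, 0.8])
bins = np.digitize(z, edges)
dummies = np.column_stack([(bins == k).astype(float) for k in range(1, 5)])
b_binned = fit_x_coef(dummies)

# 3) a cubic spline in the confounder (truncated-power basis; the paper
#    uses restricted cubic splines)
knots = np.quantile(z, [0.25, 0.5, 0.75])
spline = np.column_stack(
    [z, z**2, z**3] + [np.clip(z - t, 0.0, None) ** 3 for t in knots])
b_spline = fit_x_coef(spline)

print(f"true 1.00 | linear {b_linear:.2f} | "
      f"binned {b_binned:.2f} | spline {b_spline:.2f}")
```

The linear adjustment is badly biased here because z and z² are uncorrelated for a symmetric confounder; binning captures the quadratic shape only coarsely, while the spline basis removes essentially all of the bias.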

  11. An automated microphysiological assay for toxicity evaluation.

    PubMed

    Eggert, S; Alexander, F A; Wiest, J

    2015-08-01

    Screening a newly developed drug, food additive or cosmetic ingredient for toxicity is a critical preliminary step before it can move forward in the development pipeline. Due to the sometimes dire consequences when a harmful agent is overlooked, toxicologists work under strict guidelines to effectively catalogue and classify new chemical agents. Conventional assays involve long experimental hours and many manual steps that increase the probability of user error; errors that can potentially manifest as inaccurate toxicology results. Automated assays can overcome many potential mistakes that arise due to human error. In the presented work, we created and validated a novel, automated platform for a microphysiological assay that can examine cellular attributes with sensors measuring changes in cellular metabolic rate, oxygen consumption, and vitality mediated by exposure to a potentially toxic agent. The system was validated with low buffer culture medium with varied conductivities that caused changes in the measured impedance on integrated impedance electrodes.

  12. Spatial and temporal variability of fine particle composition and source types in five cities of Connecticut and Massachusetts

    PubMed Central

    Lee, Hyung Joo; Gent, Janneane F.; Leaderer, Brian P.; Koutrakis, Petros

    2011-01-01

    To protect public health from PM2.5 air pollution, it is critical to identify the source types of PM2.5 mass and chemical components associated with higher risks of adverse health outcomes. Source apportionment modeling using Positive Matrix Factorization (PMF) was used to identify PM2.5 source types and quantify the source contributions to PM2.5 in five cities of Connecticut and Massachusetts. Spatial and temporal variability of PM2.5 mass, components, and source contributions were investigated. PMF analysis identified five source types: regional pollution traced by sulfur, motor vehicles, road dust, oil combustion, and sea salt. The sulfur-related regional pollution and traffic source types were the major contributors to PM2.5. Because ground-level PM2.5 monitoring sites are sparse, current epidemiological studies are susceptible to exposure measurement errors. Higher correlations in concentrations and source contributions between different locations suggest less spatial variability and therefore smaller exposure measurement errors. When concentrations and/or contributions were compared to regional averages, correlations were generally higher than between-site correlations. This suggests that for assigning exposures in health effects studies, using regional average concentrations or contributions from several PM2.5 monitors is more reliable than using data from the nearest central monitor. PMID:21429560

  13. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
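The mechanism behind such chromatic errors, a throughput perturbation that shifts the photometric zeropoint by different amounts for sources of different color, can be sketched with toy synthetic photometry. The band, the source spectra, and the 10% linear tilt below are illustrative assumptions, not DES measurements:

```python
import numpy as np

# Toy synthetic photometry on a uniform wavelength grid (nm).
wl = np.linspace(400.0, 550.0, 301)

def mag(flux, throughput):
    # Synthetic magnitude up to a constant zeropoint; the uniform grid
    # spacing is absorbed into that zeropoint, so a plain sum suffices.
    return -2.5 * np.log10(np.sum(flux * throughput * wl))

nominal = np.exp(-0.5 * ((wl - 475.0) / 40.0) ** 2)    # nominal throughput
tilted = nominal * (1.0 + 0.10 * (wl - 475.0) / 75.0)  # perturbed throughput

blue = (wl / 475.0) ** -2.0   # toy blue source spectrum (assumption)
red = (wl / 475.0) ** 2.0     # toy red source spectrum (assumption)

# Zeropoint shift each source sees under the perturbation; the difference
# between sources of different color is the systematic chromatic error.
d_blue = mag(blue, tilted) - mag(blue, nominal)
d_red = mag(red, tilted) - mag(red, nominal)
print(f"chromatic error (blue - red): {(d_blue - d_red) * 1000:.1f} mmag")
```

A single gray zeropoint correction would remove the average shift but not the color-dependent residual, which is why the paper corrects with measured atmospheric transmission and instrumental throughput instead.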

  14. Long-term academic stress increases the late component of error processing: an ERP study.

    PubMed

    Wu, Jianhui; Yuan, Yiran; Duan, Hongxia; Qin, Shaozheng; Buchanan, Tony W; Zhang, Kan; Zhang, Liang

    2014-05-01

    Exposure to long-term stress has a variety of consequences on the brain and cognition. Few studies have examined the influence of long-term stress on event related potential (ERP) indices of error processing. The current study investigated how long-term academic stress modulates the error related negativity (Ne or ERN) and the error positivity (Pe) components of error processing. Forty-one male participants undergoing preparation for a major academic examination and 20 non-exam participants completed a Go-NoGo task while ERP measures were collected. The exam group reported higher perceived stress levels and showed increased Pe amplitude compared with the non-exam group. Participants' rating of the importance of the exam was positively associated with the amplitude of Pe, but these effects were not found for the Ne/ERN. These results suggest that long-term academic stress leads to greater motivational assessment of and higher emotional response to errors. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. SU-E-I-19: CTDI Values for All Protocols: Using the Ratio of the DLP Measured in CTDI Phantoms to the Measured Air Exposure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raterman, G; Gauntt, D

    2014-06-01

    Purpose: To propose a method other than CTDI phantom measurements for routine CT dosimetry QA, consisting of taking a series of air exposure measurements and calculating a factor that converts this exposure measurement to the protocol's associated head or body CTDI value using the DLP. The data presented are the ratios of phantom DLP to air exposure for different scanners, as well as the error in the displayed CTDI. Methods: For each scanner, the CTDI is measured at all available tube voltages using both the head and body phantoms. Then, the exposure is measured using a pencil chamber in air at isocenter. A ratio of phantom DLP to exposure in air for a given protocol may be calculated and used for converting a simple air dose measurement to a head or body CTDI value. For our routine QA, the exposure in air for different collimations, mAs, and kVp is measured, and the displayed CTDI is recorded. The calculated ratio may therefore convert these exposures to CTDI values that can then be compared to the displayed CTDI for a large range of acquisition parameter combinations. Results: All scanners were found to have a ratio factor that slightly increases with kVp. Philips scanners appear to have less dependence on kVp, whereas GE scanners have a lower ratio at lower kVp. The use of air exposure times the DLP conversion yielded CTDI values that were less than 10% different from the displayed CTDI on several scanners. Conclusion: This method may be used as a primary method for CT dosimetry QA. Given the ease of measurement, a dosimetry metric specific to the scanner may be calculated for a wide variety of CT protocols, which could also be used to monitor the accuracy of the displayed CTDI value.
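The proposed conversion is simple arithmetic; a sketch with hypothetical numbers (not measurements from any particular scanner):

```python
# Step 1 (baseline): measure CTDI in the phantom and exposure free-in-air
# at isocenter for a reference technique, and form the DLP-to-air ratio.
# All numeric values below are hypothetical.
ctdi_phantom = 12.0                          # mGy, body phantom measurement
scan_length_cm = 10.0                        # cm
dlp_phantom = ctdi_phantom * scan_length_cm  # mGy*cm
air_exposure = 30.0                          # mGy, pencil chamber in air
ratio = dlp_phantom / air_exposure           # conversion factor, cm

# Step 2 (routine QA): a quick air measurement for any protocol converts
# to an estimated CTDI, to compare against the scanner's displayed value.
air_qa = 28.5                                # mGy, routine air measurement
ctdi_estimated = air_qa * ratio / scan_length_cm
displayed = 11.0                             # mGy, scanner-displayed CTDI
pct_diff = 100.0 * abs(ctdi_estimated - displayed) / displayed
print(f"estimated CTDI {ctdi_estimated:.1f} mGy, "
      f"{pct_diff:.1f}% from displayed")
```

With these hypothetical numbers the ratio is 4.0 cm, the estimated CTDI is 11.4 mGy, and the displayed value agrees within about 4%, comfortably inside the abstract's reported 10% band.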

  16. Digital Intraoral Imaging Re-Exposure Rates of Dental Students.

    PubMed

    Senior, Anthea; Winand, Curtis; Ganatra, Seema; Lai, Hollis; Alsulfyani, Noura; Pachêco-Pereira, Camila

    2018-01-01

    A guiding principle of radiation safety is ensuring that radiation dosage is as low as possible while yielding the necessary diagnostic information. Intraoral images taken with conventional dental film have a higher re-exposure rate when taken by dental students compared to experienced staff. The aim of this study was to examine the prevalence of and reasons for re-exposure of digital intraoral images taken by third- and fourth-year dental students in a dental school clinic. At one dental school in Canada, the total number of intraoral images taken by third- and fourth-year dental students, re-exposures, and error descriptions were extracted from patient clinical records for an eight-month period (September 2015 to April 2016). The data were categorized to distinguish between digital images taken with solid-state sensors or photostimulable phosphor plates (PSP). The results showed that 9,397 intraoral images were made, and 1,064 required re-exposure. The most common error requiring re-exposure for bitewing images was an error in placement of the receptor too far mesially or distally (29% for sensors and 18% for PSP). The most common error requiring re-exposure for periapical images was inadequate capture of the periapical area (37% for sensors and 6% for PSP). A retake rate of 11% was calculated, and the common technique errors causing image deficiencies were identified. Educational intervention can now be specifically designed to reduce the retake rate and radiation dose for future patients.

  17. Adult myeloid leukaemia and radon exposure: a Bayesian model for a case-control study with error in covariates.

    PubMed

    Toti, Simona; Biggeri, Annibale; Forastiere, Francesco

    2005-06-30

    The possible association between radon exposure in dwellings and adult myeloid leukaemia had been explored in an Italian province by a case-control study. A total of 44 cases and 211 controls were selected from the death certificate files. No association was found in the original study (OR = 0.58 for >185 vs ≤80 Bq/m³). Here we reanalyse the data, taking into account the measurement error of the radon concentration and the presence of missing data. A Bayesian hierarchical model with error in covariates is proposed which allows appropriate imputation of missing values. The general conclusion of no evidence of an association with radon does not change, but a negative association is no longer observed (OR = 0.99 for >185 vs ≤80 Bq/m³). After adjusting for residential radon and gamma radiation, and for the multilevel data structure, geological features of the soil are associated with adult myeloid leukaemia risk (OR = 2.14, 95 per cent Cr.I. 1.0-5.5). Copyright 2005 John Wiley & Sons, Ltd.

  18. Comparing biomarker measurements to a normal range: when ...

    EPA Pesticide Factsheets

    This commentary is the second of a series outlining one specific concept in interpreting biomarker data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step: the choice between the standard error of the mean and the calculated standard deviation to compare or predict measurement results. The National Exposure Research Laboratory's (NERL's) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA's strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
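The SD-versus-SEM distinction the commentary develops reduces to a one-line difference in the calculation; a minimal sketch with hypothetical biomarker values:

```python
import math

# Hypothetical biomarker measurements from one group of subjects.
# The SD describes the spread of individual measurements (use it to judge
# whether one person's value falls inside a "normal range"); the SEM
# describes the uncertainty of the estimated group mean (use it to compare
# group means between studies).
values = [2.1, 2.9, 3.4, 2.6, 3.0, 2.4, 3.2, 2.8]
n = len(values)
mean = sum(values) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
sem = sd / math.sqrt(n)   # shrinks with sample size; the SD does not

print(f"mean={mean:.2f}, SD={sd:.2f}, SEM={sem:.2f}")
```

Because the SEM shrinks as 1/sqrt(n), quoting it in place of the SD makes a population look far less variable than it is, which is exactly the misuse the commentary warns against.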

  19. Toward refined estimates of ambient PM2.5 exposure: Evaluation of a physical outdoor-to-indoor transport model

    PubMed Central

    Hodas, Natasha; Meng, Qingyu; Lunden, Melissa M.; Turpin, Barbara J.

    2014-01-01

    Because people spend the majority of their time indoors, the variable efficiency with which ambient PM2.5 penetrates and persists indoors is a source of error in epidemiologic studies that use PM2.5 concentrations measured at central-site monitors as surrogates for ambient PM2.5 exposure. To reduce this error, practical methods to model indoor concentrations of ambient PM2.5 are needed. Toward this goal, we evaluated and refined an outdoor-to-indoor transport model using measured indoor and outdoor PM2.5 species concentrations and air exchange rates from the Relationships of Indoor, Outdoor, and Personal Air Study. Herein, we present model evaluation results, discuss which data are most critical to predicting residential exposures at the individual-subject and population levels, and make recommendations for the application of the model in epidemiologic studies. This paper demonstrates that not accounting for certain human activities (air conditioning and heating use, opening windows) leads to bias in predicted residential PM2.5 exposures at the individual-subject level, but not at the population level. The analyses presented also provide quantitative evidence that shifts in the gas-particle partitioning of ambient organics with outdoor-to-indoor transport contribute significantly to variability in indoor ambient organic carbon concentrations, and suggest that methods to account for these shifts will further improve the accuracy of outdoor-to-indoor transport models. PMID:25798047
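The outdoor-to-indoor transport the paper models can be sketched at its simplest with the standard single-compartment infiltration factor, F_inf = P·a/(a + k); the parameter values below are illustrative assumptions, not the paper's estimates:

```python
# Steady-state infiltration factor for ambient particles (standard
# single-compartment model, not the paper's full implementation):
#   P = penetration efficiency through the building envelope
#   a = air exchange rate (1/h)
#   k = indoor particle loss rate, e.g. deposition (1/h)
def infiltration_factor(P, a, k):
    return P * a / (a + k)

P, k = 0.8, 0.5          # assumed penetration and loss-rate values
outdoor_pm25 = 12.0      # ambient concentration, ug/m3 (hypothetical)

# A closed-up, air-conditioned home vs. one with open windows: the air
# exchange rate drives large differences in indoor ambient PM2.5, which is
# why ignoring AC use and window opening biases individual-level exposure.
for label, a in [("closed / AC", 0.3), ("windows open", 2.0)]:
    f_inf = infiltration_factor(P, a, k)
    indoor = outdoor_pm25 * f_inf
    print(f"{label}: F_inf={f_inf:.2f}, indoor ambient PM2.5 ~ {indoor:.1f}")
```

In this sketch the same outdoor concentration produces roughly twice the indoor ambient PM2.5 in the open-window case, illustrating why human-activity data matter at the individual-subject level even when population averages are unaffected.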

  20. Measuring Electrostatic Discharge

    NASA Technical Reports Server (NTRS)

    Smith, William C.

    1987-01-01

    Apparatus measures electrostatic-discharge properties of several materials at once. Allows samples to be charged either by friction or by exposure to corona. By testing several samples simultaneously, apparatus eliminates errors introduced by variations among test conditions. Samples spaced so they pass at intervals under either of two retractable arms. Samples are 2 inches wide along a circular path. Arm tips and voltmeter probe are 6 inches from turntable center. Servocontrolled turntable speed is constant within 0.1 percent.

  1. The associations between birth outcomes and satellite-estimated maternal PM2.5 exposure in Shanghai, China

    NASA Astrophysics Data System (ADS)

    Xiao, Q.; Liu, Y.; Strickland, M. J.; Chang, H. H.; Kan, H.

    2017-12-01

    Background: Satellite remote sensing data have been employed for air pollution exposure assessment, with the intent of better characterizing spatio-temporal variations in exposure. However, non-random missingness in satellite data may lead to exposure error. Objectives: We explored the differences in health effect estimates arising from different exposure metrics, with and without satellite data, when analyzing the associations between maternal PM2.5 exposure and birth outcomes. Methods: We obtained birth registration records of 132,783 singleton live births during 2011-2014 in Shanghai. Trimester-specific and total-pregnancy exposures were estimated from satellite PM2.5 predictions with missingness, gap-filled satellite PM2.5 predictions with complete coverage, and regional average PM2.5 measurements from monitoring stations. Linear regressions estimated associations between birth weight and maternal PM2.5 exposure. Logistic regressions estimated associations between preterm birth and first and second trimester exposure. Discrete-time models estimated third trimester and total-pregnancy associations with preterm birth. Effect modification by maternal age and parental education levels was investigated. Results: We observed statistically significant associations between maternal PM2.5 exposure during all exposure windows and adverse birth outcomes. A 10 µg/m3 increase in pregnancy PM2.5 exposure was associated with a 12.85 g (95% CI: 18.44, 7.27) decrease in birth weight for term births, and a 27% (95% CI: 20%, 36%) increase in the risk of preterm birth. Greater effects were observed for first and third trimester exposure on birth weight, as well as for first trimester exposure on preterm birth. Mothers older than 35 years and without college education tended to have higher associations with preterm birth. Conclusions: PM2.5 exposure estimates derived from gap-filled satellite data resulted in reduced exposure error and more precise health effect estimates.

  2. Recall bias in the assessment of exposure to mobile phones.

    PubMed

    Vrijheid, Martine; Armstrong, Bruce K; Bédard, Daniel; Brown, Julianne; Deltour, Isabelle; Iavarone, Ivano; Krewski, Daniel; Lagorio, Susanna; Moore, Stephen; Richardson, Lesley; Giles, Graham G; McBride, Mary; Parent, Marie-Elise; Siemiatycki, Jack; Cardis, Elisabeth

    2009-05-01

    Most studies of mobile phone use are case-control studies that rely on participants' reports of past phone use for their exposure assessment. Differential errors in recalled phone use are a major concern in such studies. INTERPHONE, a multinational case-control study of brain tumour risk and mobile phone use, included validation studies to quantify such errors and evaluate the potential for recall bias. Mobile phone records of 212 cases and 296 controls were collected from network operators in three INTERPHONE countries over an average of 2 years, and compared with mobile phone use reported at interview. The ratio of reported to recorded phone use was analysed as a measure of agreement. Mean ratios were virtually the same for cases and controls: both underestimated the number of calls by a factor of 0.81 and overestimated call duration by a factor of 1.4. For cases, but not controls, ratios increased with increasing time before the interview; however, these trends were based on few subjects with long-term data. Ratios increased with level of use. Random recall errors were large. In conclusion, there was little evidence of differential recall errors overall or in recent time periods. However, apparent overestimation by cases in more distant time periods could cause positive bias in estimates of disease risk associated with mobile phone use.
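    The agreement metric described above, a ratio of reported to recorded use, is conventionally summarized on the log scale (i.e., as a geometric mean of per-subject ratios). A minimal sketch with hypothetical counts:

```python
import math

def geometric_mean_ratio(reported, recorded):
    """Geometric mean of per-subject reported/recorded ratios."""
    logs = [math.log(rep / rec) for rep, rec in zip(reported, recorded)]
    return math.exp(sum(logs) / len(logs))

# Hypothetical numbers of calls: self-reported vs. operator-recorded
reported_calls = [80, 50, 120, 30]
recorded_calls = [100, 60, 150, 40]

ratio = geometric_mean_ratio(reported_calls, recorded_calls)
print(round(ratio, 2))  # a ratio below 1 indicates underestimation
```

    Computing separate ratios for cases and controls, as the validation study did, is then just a matter of applying this summary to each group.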

  3. Pollution, Health, and Avoidance Behavior: Evidence from the Ports of Los Angeles

    ERIC Educational Resources Information Center

    Moretti, Enrico; Neidell, Matthew

    2011-01-01

    A pervasive problem in estimating the costs of pollution is that optimizing individuals may compensate for increases in pollution by reducing their exposure, resulting in estimates that understate the full welfare costs. To account for this issue, measurement error, and environmental confounding, we estimate the health effects of ozone using daily…

  4. Effects of low-level sarin and cyclosarin exposure and Gulf War Illness on brain structure and function: a study at 4T.

    PubMed

    Chao, Linda L; Abadjian, Linda; Hlavin, Jennifer; Meyerhoff, Deiter J; Weiner, Michael W

    2011-12-01

    More than 100,000 US troops were potentially exposed to the chemical warfare agents sarin (GB) and cyclosarin (GF) when an ammunition dump at Khamisiyah, Iraq was destroyed during the 1991 Persian Gulf War (GW). We previously found reduced total gray matter (GM) volume in 40 GW veterans with suspected GB/GF exposure relative to 40 matched, unexposed GW veterans on a 1.5T MR scanner. In this study, we reexamine the relationship between GB/GF exposure and volumetric measurements of gross neuroanatomical structures in a different cohort of GW veterans on a 4T MR scanner. Neuropsychological and magnetic resonance imaging (MRI) data from a cross-sectional study on Gulf War Illness performed between 2005 and 2010 were used in this study. 4T MRI data were analyzed using automated image processing techniques that produced volumetric measurements of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). Binary comparisons of 64 GB/GF exposed veterans and 64 'matched', unexposed veterans revealed reduced GM (p=0.03) and WM (p=0.03) volumes in the exposed veterans. Behaviorally, exposed veterans committed more errors of omission (p=0.02) and tended to have slower responses (p=0.05) than unexposed veterans on the Continuous Performance Test (CPT), a measure of sustained and selective attention. Regression analyses confirmed that GB/GF exposure status predicted GM (β=-0.11, p=0.02) and WM (β=-0.14, p=0.03) volumes, and number of CPT omission errors (β=0.22, p=0.02) over and above potentially confounding demographic, clinical, and psychosocial variables. There was no dose-response relationship between estimated levels of GB/GF exposure and brain volume. However, we did find an effect of Gulf War Illness/Chronic Multisymptom Illness on both GM and WM volume in the GB/GF exposed veterans.
These findings confirm previous reports by our group and others of central nervous system pathology in GW veterans with suspected exposure to low levels of GB/GF two decades after the exposure. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Overlay improvement by exposure map based mask registration optimization

    NASA Astrophysics Data System (ADS)

    Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric

    2015-03-01

    Along with the increasing miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices are shrinking dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, as DPT (Double Patterning Technology) has been adopted for advanced technology nodes such as 28 nm and 14 nm, the corresponding overlay budget becomes even tighter. [2][3] After in-die mask registration (pattern placement) measurement was introduced, model analysis with a KLA SOV (sources of variation) tool showed that the registration difference between masks is a significant error source of wafer layer-to-layer overlay in the 28 nm process. [4][5] Optimizing mask registration would accordingly improve wafer overlay performance considerably. It has been reported that a laser-based registration control (RegC) process, applied after pattern generation or after pellicle mounting, allows fine tuning of mask registration. [6] In this paper we propose a novel method of mask registration correction that can be applied before mask writing, based on the mask exposure map and considering mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if pattern density on the mask is kept low, the in-die mask registration residual error (3σ) stays under 5 nm regardless of the blank type and writer POSCOR (position correction) file applied; this indicates that the random error induced by material or equipment occupies a relatively fixed portion of the mask registration error budget. In real production, comparing mask registration differences across critical production layers reveals that the registration residual error of line/space layers with higher pattern density is always much larger than that of contact-hole layers with lower pattern density. Additionally, the mask registration difference between layers with similar pattern density also stays under 5 nm. We assume that the non-random part of the mask registration error is mostly induced by charge accumulation during mask writing, which may be calculated from the surrounding exposed pattern density. Multi-loading test results show that with an x-direction writing sequence, mask registration behavior in the x direction is mainly related to the sequence direction, while mask registration in the y direction is strongly affected by the pattern density distribution map. This indicates that part of the mask registration error is due to charging from the nearby environment. If the exposure sequence is chip by chip, as in a normal multi-chip layout, mask registration in both the x and y directions is affected analogously, which has also been confirmed by production data. Therefore, we set up a simple model to predict mask registration error based on the mask exposure map, and correct it with the given POSCOR (position correction) file for advanced mask writing if needed.

  6. Propagation of resist heating mask error to wafer level

    NASA Astrophysics Data System (ADS)

    Babin, S. V.; Karklin, Linard

    2006-10-01

    As technology approaches 45 nm and below, the IC industry is experiencing a severe product yield hit due to rapidly shrinking process windows and unavoidable manufacturing process variations. Current EDA tools are by their nature unable to deliver optimized, process-centered designs, which calls for 'post-design' localized layout optimization DFM tools. To evaluate the impact of different manufacturing process variations on the final product, it is important to trace and evaluate all errors through the design-to-manufacturing flow. The photomask is one of the critical parts of this flow, and special attention should be paid to the photomask manufacturing process, especially to tight mask CD control. Electron beam lithography (EBL) is a major technique used for fabrication of high-end photomasks. During the writing process, resist heating is one of the sources of mask CD variation. Electron energy is released in the mask body mainly as heat, leading to significant temperature fluctuations in local areas. The temperature fluctuations cause changes in resist sensitivity, which in turn lead to CD variations. These CD variations depend on mask writing speed, order of exposure, pattern density, and its distribution. Recent measurements revealed up to 45 nm of CD variation on the mask when using ZEP resist. The resist heating problem with CAR resists is significantly smaller than with other types of resists, partially due to their higher sensitivity and the lower exposure dose required. However, there are as yet no data showing CD errors on the wafer induced by CAR resist heating on the mask. This effect can be amplified by high MEEF values and should be carefully evaluated at 45 nm and below technology nodes, where tight CD control is required. In this paper, we simulated CD variation on the mask due to resist heating; then a mask pattern with the heating error was transferred onto the wafer. In this way, the CD error on the wafer was evaluated subject to only one term of the mask error budget: the resist heating CD error. In the simulation of exposure using a stepper, variable MEEF was considered.

  7. CCD image sensor induced error in PIV applications

    NASA Astrophysics Data System (ADS)

    Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.

    2014-06-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widespread in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels). This is the order of magnitude that other typical PIV errors, such as peak locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model for the magnitude of the CCD readout bias error. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can then be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.

  8. Neighborhood Psychosocial Stressors, Air Pollution, and Cognitive Function among Older U.S. Adults

    PubMed Central

    Ailshire, Jennifer; Karraker, Amelia; Clarke, Philippa

    2016-01-01

    A growing number of studies have found a link between outdoor air pollution and cognitive function among older adults. Psychosocial stress is considered an important factor determining differential susceptibility to environmental hazards and older adults living in stressful neighborhoods may be particularly vulnerable to the adverse health effects of exposure to hazards such as air pollution. The objective of this study is to determine if neighborhood social stress amplifies the association between fine particulate matter air pollution (PM2.5) and poor cognitive function in older, community-dwelling adults. We use data on 779 U.S. adults ages 55 and older from the 2001/2002 wave of the Americans’ Changing Lives study. We determined annual average PM2.5 concentration in 2001 in the area of residence by linking respondents with EPA air monitoring data using census tract identifiers. Cognitive function was measured using the number of errors on the Short Portable Mental Status Questionnaire (SPMSQ). Exposure to neighborhood social stressors was measured using perceptions of disorder and decay and included subjective evaluations of neighborhood upkeep and the presence of deteriorating/abandoned buildings, trash, and empty lots. We used negative binomial regression to examine the interaction of neighborhood perceived stress and PM2.5 on the count of errors on the cognitive function assessment. We found that the association between PM2.5 and cognitive errors was stronger among older adults living in high stress neighborhoods. These findings support recent theoretical developments in environmental health and health disparities research emphasizing the synergistic effects of neighborhood social stressors and environmental hazards on residents’ health. 
Those living in socioeconomically disadvantaged neighborhoods, where social stressors and environmental hazards are more common, may be particularly susceptible to adverse health effects of social and physical environmental exposures. PMID:27886528

  9. Light Exposure and Eye Growth in Childhood.

    PubMed

    Read, Scott A; Collins, Michael J; Vincent, Stephen J

    2015-10-01

    The purpose of this study was to examine the relationship between objectively measured ambient light exposure and longitudinal changes in axial eye growth in childhood. A total of 101 children (41 myopes and 60 nonmyopes), 10 to 15 years of age participated in this prospective longitudinal observational study. Axial eye growth was determined from measurements of ocular optical biometry collected at four study visits over an 18-month period. Each child's mean daily light exposure was derived from two periods (each 14 days long) of objective light exposure measurements from a wrist-worn light sensor. Over the 18-month study period, a modest but statistically significant association between greater average daily light exposure and slower axial eye growth was observed (P = 0.047). Other significant predictors of axial eye growth in this population included children's refractive error group (P < 0.001), sex (P < 0.01), and age (P < 0.001). Categorized according to their objectively measured average daily light exposure and adjusting for potential confounders (age, sex, baseline axial length, parental myopia, nearwork, and physical activity), children experiencing low average daily light exposure (mean daily light exposure: 459 ± 117 lux, annual eye growth: 0.13 mm/y) exhibited significantly greater eye growth than children experiencing moderate (842 ± 109 lux, 0.060 mm/y), and high (1455 ± 317 lux, 0.065 mm/y) average daily light exposure levels (P = 0.01). In this population of children, greater daily light exposure was associated with less axial eye growth over an 18-month period. These findings support the role of light exposure in the documented association between time spent outdoors and childhood myopia.

  10. Non-health care facility anticonvulsant medication errors in the United States.

    PubMed

    DeDonato, Emily A; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A

    2018-06-01

    This study provides an epidemiological description of non-health care facility medication errors involving anticonvulsant drugs. A retrospective analysis of National Poison Data System data was conducted on non-health care facility medication errors involving anticonvulsant drugs reported to US Poison Control Centers from 2000 through 2012. During the study period, 108,446 non-health care facility medication errors involving anticonvulsant pharmaceuticals were reported to US Poison Control Centers, averaging 8342 exposures annually. The annual frequency and rate of errors increased significantly over the study period, by 96.6 and 76.7%, respectively. The rate of exposures resulting in health care facility use increased by 83.3% and the rate of exposures resulting in serious medical outcomes increased by 62.3%. In 2012, newer anticonvulsants, including felbamate, gabapentin, lamotrigine, levetiracetam, other anticonvulsants (excluding barbiturates), other types of gamma aminobutyric acid, oxcarbazepine, topiramate, and zonisamide, accounted for 67.1% of all exposures. The rate of non-health care facility anticonvulsant medication errors reported to Poison Control Centers increased during 2000-2012, resulting in more frequent health care facility use and serious medical outcomes. Newer anticonvulsants, although often considered safer and more easily tolerated, were responsible for much of this trend and should still be administered with caution.
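    The trend figures above are simple percent changes in annual rates. A small sketch of the arithmetic; the endpoint rate values below are hypothetical, and only the formula reflects the abstract:

```python
def percent_change(start, end):
    """Percent change from a start value to an end value."""
    return (end - start) / start * 100.0

# Hypothetical annual error rates (exposures per 100,000 population)
rate_2000 = 2.22
rate_2012 = 3.92

print(round(percent_change(rate_2000, rate_2012), 1))
```

    The abstract's 96.6% and 76.7% figures are exactly this quantity computed for the annual frequency and rate of errors over 2000-2012.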

  11. The measurement of radiation exposure of astronauts by radiochemical techniques

    NASA Technical Reports Server (NTRS)

    Brodzinski, R. L.

    1971-01-01

    The concentrations of 23 major, minor, and trace elements in the fecal samples from the Apollo 12 and 13 astronauts are reported. Most elemental excretion rates are comparable to rates reported for earlier missions. Exceptions are noted for calcium, iron, and tin. Body calcium and iron losses appear to be reduced during the Apollo 12 and 13 missions such that losses now seem to be insignificant. Refined measurements of tin excretion rates agree with normal dietary intakes. Earlier reported tin values are in error. A new passive dosimetry canister was designed which contains foils of tantalum, copper, titanium, iron, cobalt, aluminum, and scandium. By measuring the concentrations of the various products of nuclear reactions in these metals after space exposure, the characteristics of the incident cosmic particles can be determined.

  12. Comparing children's GPS tracks with geospatial proxies for exposure to junk food.

    PubMed

    Sadler, Richard C; Gilliland, Jason A

    2015-01-01

    Various geospatial techniques have been employed to estimate children's exposure to environmental cardiometabolic risk factors, including junk food. But many studies uncritically rely on exposure proxies which differ greatly from actual exposure. Misrepresentation of exposure by researchers could lead to poor decisions and ineffective policymaking. This study conducts a GIS-based analysis of GPS tracks--'activity spaces'--and 21 proxies for activity spaces (e.g. buffers, container approaches) for a sample of 526 children (ages 9-14) in London, Ontario, Canada. These measures are combined with a validated food environment database (including fast food and convenience stores) to create a series of junk food exposure estimates and quantify the errors resulting from use of different proxy methods. Results indicate that exposure proxies consistently underestimate exposure to junk foods by as much as 68%. This underestimation is important to policy development because children are exposed to more junk food than estimated using typical methods. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Inferring ultraviolet anatomical exposure patterns while distinguishing the relative contribution of radiation components

    NASA Astrophysics Data System (ADS)

    Vuilleumier, Laurent; Milon, Antoine; Bulliard, Jean-Luc; Moccozet, Laurent; Vernez, David

    2013-05-01

    Exposure to solar ultraviolet (UV) radiation is the main causative factor for skin cancer. UV exposure depends on environmental and individual factors, but individual exposure data remain scarce. While ground UV irradiance is monitored via different techniques, it is difficult to translate such observations into human UV exposure or dose because of confounding factors. A multi-disciplinary collaboration developed a model predicting the dose and distribution of UV exposure on the basis of ground irradiation and morphological data. Standard 3D computer graphics techniques were adapted to develop a simulation tool that estimates solar exposure of a virtual manikin depicted as a triangle mesh surface. The amount of solar energy received by various body locations is computed for direct, diffuse and reflected radiation separately. Dosimetric measurements obtained in field conditions were used to assess the model performance. The model predicted exposure to solar UV adequately with a symmetric mean absolute percentage error of 13% and half of the predictions within 17% range of the measurements. Using this tool, solar UV exposure patterns were investigated with respect to the relative contribution of the direct, diffuse and reflected radiation. Exposure doses for various body parts and exposure scenarios of a standing individual were assessed using erythemally-weighted UV ground irradiance data measured in 2009 at Payerne, Switzerland as input. For most anatomical sites, mean daily doses were high (typically 6.2-14.6 Standard Erythemal Dose, SED) and exceeded recommended exposure values. Direct exposure was important during specific periods (e.g. midday during summer), but contributed moderately to the annual dose, ranging from 15 to 24% for vertical and horizontal body parts, respectively. Diffuse irradiation explained about 80% of the cumulative annual exposure dose.
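    The component-wise bookkeeping described above (direct, diffuse, and reflected energy accumulated separately per body location) can be sketched for a single flat surface. The irradiance values, albedo, and the simplified geometry below are assumptions for illustration, not the study's triangle-mesh manikin model:

```python
import math

def surface_dose(direct_irr, diffuse_irr, albedo, sun_zenith_deg,
                 surface_tilt_deg, hours):
    """Erythemal dose (J/m²) on a tilted surface over a given duration."""
    zen = math.radians(sun_zenith_deg)
    tilt = math.radians(surface_tilt_deg)
    # Direct beam scaled by incidence angle (simplified: sun in the tilt plane)
    direct = direct_irr * max(0.0, math.cos(zen - tilt))
    # Isotropic sky-diffuse and ground-reflected view factors
    diffuse = diffuse_irr * (1 + math.cos(tilt)) / 2
    reflected = (direct_irr + diffuse_irr) * albedo * (1 - math.cos(tilt)) / 2
    return (direct + diffuse + reflected) * hours * 3600

# Hypothetical erythemal irradiances (W/m²) on a horizontal body site
dose = surface_dose(direct_irr=0.05, diffuse_irr=0.10, albedo=0.05,
                    sun_zenith_deg=30, surface_tilt_deg=0, hours=4)
sed = dose / 100.0  # 1 SED = 100 J/m² of erythemally weighted exposure
print(round(sed, 1))
```

    Summing the three terms separately, as here, is what lets the model report the relative contribution of each radiation component to the total dose.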

  14. Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States

    NASA Astrophysics Data System (ADS)

    Sousan, Sinan Dhia Jameel

    This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 × 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at IMPROVE and STN monitoring sites, respectively. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method.
The observation error covariance matrix included site representation that scaled the observation error by land use (i.e. urban or rural locations). In theory, urban locations should have less effect on surrounding areas than rural sites, which can be controlled using site representation error. The annual evaluations showed substantial improvements in model performance with increases in the correlation coefficient from 0.36 (prior) to 0.76 (posterior), and decreases in the fractional error from 0.43 (prior) to 0.15 (posterior). In addition, the normalized mean error decreased from 0.36 (prior) to 0.13 (posterior), and the RMSE decreased from 5.39 µg m-3 (prior) to 2.32 µg m-3 (posterior). OI decreased model bias for both large spatial areas and point locations, and could be extended to more advanced data assimilation methods. The current work will be applied to a five year (2000-2004) CMAQ simulation aimed at improving aerosol model estimates. The posterior model concentrations will be used to inform exposure studies over the U.S. that relate aerosol exposure to mortality and morbidity rates. Future improvements for the OI techniques used in the current study will include combining both surface and satellite data to improve posterior model estimates. Satellite data have high spatial and temporal resolutions in comparison to surface measurements, which are scarce but more accurate than model estimates. The satellite data are subject to noise affected by location and season of retrieval. The implementation of OI to combine satellite and surface data sets has the potential to improve posterior model estimates for locations that have no direct measurements.
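    The OI update at the heart of both assimilation experiments can be illustrated at a single grid point, where the analysis is a precision-weighted blend of the model prior and an observation. The PM2.5 values and error variances below are hypothetical:

```python
def oi_update(prior, prior_var, obs, obs_var):
    """Single-point optimal interpolation: return analysis and its variance."""
    gain = prior_var / (prior_var + obs_var)   # Kalman-type gain
    analysis = prior + gain * (obs - prior)    # prior nudged by the innovation
    analysis_var = (1 - gain) * prior_var      # posterior variance shrinks
    return analysis, analysis_var

# Hypothetical PM2.5 (µg/m³): CMAQ prior vs. a collocated surface monitor
analysis, var = oi_update(prior=18.0, prior_var=9.0, obs=12.0, obs_var=3.0)
print(analysis, var)  # → 13.5 2.25
```

    In the full system the scalar variances become background and observation error covariance matrices, which is where the site-representation scaling described above enters.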

  15. Education and myopia: assessing the direction of causality by mendelian randomisation

    PubMed Central

    Mountjoy, Edward; Davies, Neil M; Plotnikov, Denis; Smith, George Davey; Rodriguez, Santiago; Williams, Cathy E; Guggenheim, Jeremy A

    2018-01-01

    Abstract Objectives To determine whether more years spent in education is a causal risk factor for myopia, or whether myopia is a causal risk factor for more years in education. Design Bidirectional, two sample mendelian randomisation study. Setting Publicly available genetic data from two consortia applied to a large, independent population cohort. Genetic variants used as proxies for myopia and years of education were derived from two large genome-wide association studies: 23andMe and Social Science Genetic Association Consortium (SSGAC), respectively. Participants 67 798 men and women from England, Scotland, and Wales in the UK Biobank cohort with available information for years of completed education and refractive error. Main outcome measures Mendelian randomisation analyses were performed in two directions: the first exposure was the genetic predisposition to myopia, measured with 44 genetic variants strongly associated with myopia in 23andMe, and the outcome was years in education; and the second exposure was the genetic predisposition to higher levels of education, measured with 69 genetic variants from SSGAC, and the outcome was refractive error. Results Conventional regression analyses of the observational data suggested that every additional year of education was associated with a more myopic refractive error of −0.18 dioptres/y (95% confidence interval −0.19 to −0.17; P<2e-16). Mendelian randomisation analyses suggested the true causal effect was even stronger: −0.27 dioptres/y (−0.37 to −0.17; P=4e-8). By contrast, there was little evidence to suggest myopia affected education (years in education per dioptre of refractive error −0.008 y/dioptre, 95% confidence interval −0.041 to 0.025, P=0.6).
Thus, the cumulative effect of more years in education on refractive error means that a university graduate from the United Kingdom with 17 years of education would, on average, be at least −1 dioptre more myopic than someone who left school at age 16 (with 12 years of education). Myopia of this magnitude would be sufficient to necessitate the use of glasses for driving. Sensitivity analyses showed minimal evidence for genetic confounding that could have biased the causal effect estimates. Conclusions This study shows that exposure to more years in education contributes to the rising prevalence of myopia. Increasing the length of time spent in education may inadvertently increase the prevalence of myopia and potential future visual disability. PMID:29875094
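    The single-variant logic behind a mendelian randomisation estimate is a Wald ratio: the variant-outcome association divided by the variant-exposure association. A hedged sketch with hypothetical summary statistics, not the study's data:

```python
def wald_ratio(beta_gx, beta_gy):
    """Causal effect of exposure X on outcome Y via genetic instrument G."""
    return beta_gy / beta_gx

# Hypothetical: a variant adds 0.1 y of education and -0.027 D of refraction
effect = wald_ratio(beta_gx=0.10, beta_gy=-0.027)
print(effect)  # dioptres per additional year of education

# Cumulative effect of 5 extra years in education (17 vs. 12 years)
cumulative = (17 - 12) * effect
print(round(cumulative, 2))
```

    Multi-variant methods such as inverse-variance weighting combine many such ratios, but each ratio has this form.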

  16. Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2017-09-01

    Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from that population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate of its corresponding, but unknown, population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due not only to innate human variability but also to pure chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable (confounding factor or confounder) is a factor associated with both the exposure of interest and the outcome of interest; it correlates (positively or negatively) with both the exposure and the outcome. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and unobserved. However, confounding can be a major problem with any observational (nonrandomized) study. 
Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the association or treatment effect. Interaction among variables, also known as effect modification, exists when the effect of 1 explanatory variable on the outcome depends on the particular level or value of another explanatory variable. Bias and confounding are common potential explanations for statistically significant associations between exposure and outcome when the true relationship is noncausal. Understanding interactions is vital to proper interpretation of treatment effects. These complex concepts should be consistently and appropriately considered whenever one is not only designing but also analyzing and interpreting data from a randomized trial or observational study.
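    The warning above about ignoring confounding can be made concrete with a small synthetic simulation: a confounder drives both exposure and outcome, so the crude exposure-outcome slope is distorted, while adjusting for the confounder (here via residualization, the Frisch-Waugh idea) recovers the true null effect. All data are simulated:

```python
import random

def slope(x, y):
    """Simple least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum(a * b for a, b in zip(x, y)) - n * mx * my) / \
           (sum(a * a for a in x) - n * mx * mx)

rng = random.Random(42)
n = 2000
C = [rng.gauss(0, 1) for _ in range(n)]       # confounder
X = [c + rng.gauss(0, 1) for c in C]          # exposure, caused by C
Y = [2 * c + rng.gauss(0, 1) for c in C]      # outcome, caused by C only

crude = slope(X, Y)                           # distorted: X has no effect on Y
bx, by = slope(C, X), slope(C, Y)
rX = [x - bx * c for x, c in zip(X, C)]       # X with C partialled out
rY = [y - by * c for y, c in zip(Y, C)]       # Y with C partialled out
adjusted = slope(rX, rY)                      # ≈ 0, the true causal effect
print(round(crude, 2), round(adjusted, 2))
```

    The crude slope is near 1 purely because C moves X and Y together; the adjusted slope is near zero, which is the point of the paragraph above.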

  17. Spatial and temporal variability of fine particle composition and source types in five cities of Connecticut and Massachusetts.

    PubMed

    Lee, Hyung Joo; Gent, Janneane F; Leaderer, Brian P; Koutrakis, Petros

    2011-05-01

    To protect public health from PM(2.5) air pollution, it is critical to identify the source types of PM(2.5) mass and the chemical components associated with higher risks of adverse health outcomes. Source apportionment modeling using Positive Matrix Factorization (PMF) was used to identify PM(2.5) source types and quantify source contributions to PM(2.5) in five cities of Connecticut and Massachusetts. Spatial and temporal variability of PM(2.5) mass, components, and source contributions was investigated. PMF analysis identified five source types: regional pollution traced by sulfur, motor vehicle, road dust, oil combustion, and sea salt. The sulfur-related regional pollution and traffic source types were major contributors to PM(2.5). Because ground-level PM(2.5) monitoring sites are sparse, current epidemiological studies are susceptible to exposure measurement errors. Higher correlations in concentrations and source contributions between different locations suggest less spatial variability, resulting in smaller exposure measurement errors. When concentrations and/or contributions were compared to regional averages, correlations were generally higher than between-site correlations. This suggests that for assigning exposures in health effects studies, using regional average concentrations or contributions from several PM(2.5) monitors is more reliable than using data from the nearest central monitor. Copyright © 2011 Elsevier B.V. All rights reserved.
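    PMF belongs to the family of nonnegative matrix factorizations: a samples-by-species data matrix X is approximated as G·F, where G holds source contributions and F holds source profiles. Real PMF weights each residual by its measurement uncertainty; the sketch below instead uses plain (unweighted) Lee-Seung multiplicative updates on synthetic data, so it illustrates only the factorization idea:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def frobenius_err(X, G, F):
    R = matmul(G, F)
    return sum((X[i][j] - R[i][j]) ** 2
               for i in range(len(X)) for j in range(len(X[0])))

def nmf(X, k, iters=200, seed=0):
    """Unweighted multiplicative-update NMF: X ≈ G·F with G, F ≥ 0."""
    rng = random.Random(seed)
    n, m = len(X), len(X[0])
    G = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    F = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        GT = transpose(G)
        num, den = matmul(GT, X), matmul(matmul(GT, G), F)
        F = [[F[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        FT = transpose(F)
        num, den = matmul(X, FT), matmul(G, matmul(F, FT))
        G = [[G[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return G, F

# Synthetic 4-sample × 3-species dataset generated from two "sources"
X = [[2.0, 1.0, 0.1], [4.0, 2.0, 0.2], [0.2, 1.0, 2.0], [0.4, 2.0, 4.0]]
G, F = nmf(X, k=2)
print(round(frobenius_err(X, G, F), 4))
```

    Because X above is exactly rank 2, two factors reconstruct it almost perfectly; the nonnegativity constraint is what makes the recovered rows of F interpretable as source profiles.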

  18. Geocoding rural addresses in a community contaminated by PFOA: a comparison of methods.

    PubMed

    Vieira, Verónica M; Howard, Gregory J; Gallagher, Lisa G; Fletcher, Tony

    2010-04-21

    Location is often an important component of exposure assessment, and positional errors in geocoding may result in exposure misclassification. In rural areas, successful geocoding to a street address is limited by rural route boxes. Communities have assigned physical street addresses to rural route boxes as part of E911 readdressing projects for improved emergency response. Our study compared automated and E911 methods for recovering and geocoding valid street addresses and assessed the impact of positional errors on exposure classification. The current study is a secondary analysis of existing data that included 135 addresses self-reported by participants of a rural community study who were exposed via public drinking water to perfluorooctanoate (PFOA) released from a DuPont facility in Parkersburg, West Virginia. We converted pre-E911 to post-E911 addresses using two methods: automated ZP4 address-correction software with the U.S. Postal Service LACS database and E911 data provided by Wood County, West Virginia. Addresses were geocoded using TeleAtlas, an online commercial service, and ArcView with StreetMap Premium North America NAVTEQ 2008 enhanced street dataset. We calculated positional errors using GPS measurements collected at each address and assessed exposure based on geocoded location in relation to public water pipes. The county E911 data converted 89% of the eligible addresses compared to 35% by ZP4 LACS. ArcView/NAVTEQ geocoded more addresses (n = 130) and with smaller median distance between geocodes and GPS coordinates (39 meters) than TeleAtlas (n = 85, 188 meters). Without E911 address conversion, 25% of the geocodes would have been more than 1000 meters from the true location. Positional errors in TeleAtlas geocoding resulted in exposure misclassification of seven addresses whereas ArcView/NAVTEQ methods did not misclassify any addresses. 
Although the study was limited by small numbers, our results suggest that the use of county E911 data in rural areas increases the rate of successful geocoding. Furthermore, positional accuracy of rural addresses in the study area appears to vary by geocoding method. In a large epidemiological study investigating the health effects of PFOA-contaminated public drinking water, this could potentially result in exposure misclassification if addresses are incorrectly geocoded to a street segment not serviced by public water.
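    Positional error of a geocode against a GPS-measured truth point, as assessed above, reduces to a great-circle distance. A minimal sketch (the coordinates below are hypothetical points near Parkersburg, WV, not study data):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical GPS-measured truth vs. geocoded location for one address.
gps = (39.266, -81.561)
geocode = (39.268, -81.559)
err_m = haversine_m(*gps, *geocode)
print(f"positional error: {err_m:.0f} m")
```

    Comparing such distances against a threshold (the study flags geocodes more than 1000 m from the true location) is how positional error is turned into a misclassification check.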

  19. Home dim light melatonin onsets with measures of compliance in delayed sleep phase disorder.

    PubMed

    Burgess, Helen J; Park, Margaret; Wyatt, James K; Fogg, Louis F

    2016-06-01

The dim light melatonin onset (DLMO) assists with the diagnosis and treatment of circadian rhythm sleep disorders. Home DLMOs are attractive for cost savings and convenience, but can be confounded by home lighting and sample timing errors. We developed a home saliva collection kit with objective measures of light exposure and sample timing. We report on our first test of the kit in a clinical population. Thirty-two participants with delayed sleep phase disorder (DSPD; 17 women, aged 18-52 years) participated in two back-to-back home and laboratory phase assessments. Most participants (66%) received at least one 30-s epoch of light >50 lux during the home phase assessments, but for only 1.5% of the time. Most participants (56%) collected every saliva sample within 5 min of the scheduled time. Eighty-three per cent of home DLMOs were not affected by light or sampling errors. The home DLMOs occurred, on average, 10.2 min before the laboratory DLMOs, and were highly correlated with the laboratory DLMOs (r = 0.93, P < 0.001). These results indicate that home saliva sampling with objective measures of light exposure and sample timing can assist in identifying accurate home DLMOs. © 2016 European Sleep Research Society.

  20. Prevalence and Risk Factors for Refractive Errors: Korean National Health and Nutrition Examination Survey 2008-2011

    PubMed Central

    Kim, Eun Chul; Morgan, Ian G.; Kakizaki, Hirohiko; Kang, Seungbum; Jee, Donghyun

    2013-01-01

Purpose To examine the prevalence and risk factors of refractive errors in a representative Korean population aged 20 years old or older. Methods A total of 23,392 people aged 20+ years were selected for the Korean National Health and Nutrition Survey 2008–2011, using stratified, multistage, clustered sampling. Refractive error was measured by autorefraction without cycloplegia, and interviews were performed regarding associated risk factors including gender, age, height, education level, parent's education level, economic status, light exposure time, and current smoking history. Results Of 23,392 participants, refractive errors were examined in 22,562 persons, including 21,356 subjects with phakic eyes. The overall prevalences of myopia (< -0.5 D), high myopia (< -6.0 D), and hyperopia (> 0.5 D) were 48.1% (95% confidence interval [CI], 47.4–48.8), 4.0% (CI, 3.7–4.3), and 24.2% (CI, 23.6–24.8), respectively. The prevalence of myopia sharply decreased from 78.9% (CI, 77.4–80.4) in 20–29 year olds to 16.1% (CI, 14.9–17.3) in 60–69 year olds. In multivariable logistic regression analyses restricted to subjects aged 40+ years, myopia was associated with younger age (odds ratio [OR], 0.94; 95% CI, 0.93–0.94, p < 0.001), education level of university or higher (OR, 2.31; CI, 1.97–2.71, p < 0.001), and shorter sunlight exposure time (OR, 0.84; CI, 0.76–0.93, p = 0.002). Conclusions This study provides the first representative population-based data on refractive error for Korean adults. The prevalence of myopia in Korean adults aged 40+ years (34.7%) was comparable to that in other Asian countries. These results show that the younger generations in Korea are much more myopic than previous generations, and that important factors associated with this increase are increased education levels and reduced sunlight exposure. PMID:24224049

  1. Prevalence and risk factors for refractive errors: Korean National Health and Nutrition Examination Survey 2008-2011.

    PubMed

    Kim, Eun Chul; Morgan, Ian G; Kakizaki, Hirohiko; Kang, Seungbum; Jee, Donghyun

    2013-01-01

To examine the prevalence and risk factors of refractive errors in a representative Korean population aged 20 years old or older. A total of 23,392 people aged 20+ years were selected for the Korean National Health and Nutrition Survey 2008-2011, using stratified, multistage, clustered sampling. Refractive error was measured by autorefraction without cycloplegia, and interviews were performed regarding associated risk factors including gender, age, height, education level, parent's education level, economic status, light exposure time, and current smoking history. Of 23,392 participants, refractive errors were examined in 22,562 persons, including 21,356 subjects with phakic eyes. The overall prevalences of myopia (< -0.5 D), high myopia (< -6.0 D), and hyperopia (> 0.5 D) were 48.1% (95% confidence interval [CI], 47.4-48.8), 4.0% (CI, 3.7-4.3), and 24.2% (CI, 23.6-24.8), respectively. The prevalence of myopia sharply decreased from 78.9% (CI, 77.4-80.4) in 20-29 year olds to 16.1% (CI, 14.9-17.3) in 60-69 year olds. In multivariable logistic regression analyses restricted to subjects aged 40+ years, myopia was associated with younger age (odds ratio [OR], 0.94; 95% CI, 0.93-0.94, p < 0.001), education level of university or higher (OR, 2.31; CI, 1.97-2.71, p < 0.001), and shorter sunlight exposure time (OR, 0.84; CI, 0.76-0.93, p = 0.002). This study provides the first representative population-based data on refractive error for Korean adults. The prevalence of myopia in Korean adults aged 40+ years (34.7%) was comparable to that in other Asian countries. These results show that the younger generations in Korea are much more myopic than previous generations, and that important factors associated with this increase are increased education levels and reduced sunlight exposure.

  2. Exposure Time Optimization for Highly Dynamic Star Trackers

    PubMed Central

    Wei, Xinguo; Tan, Wei; Li, Jian; Zhang, Guangjun

    2014-01-01

Under highly dynamic conditions, the star-spots on the image sensor of a star tracker move across many pixels during the exposure time, which will reduce star detection sensitivity and increase star location errors. However, this effect can be well compensated by setting an appropriate exposure time. This paper focuses on how exposure time affects the star tracker under highly dynamic conditions and how to determine the most appropriate exposure time for this case. Firstly, the effect of exposure time on star detection sensitivity is analyzed by establishing the dynamic star-spot imaging model. Then the star location error is deduced based on the error analysis of the sub-pixel centroiding algorithm. Combining these analyses, the effect of exposure time on attitude accuracy is finally determined. Some simulations are carried out to validate these effects, and the results show that there are different optimal exposure times for different angular velocities of a star tracker with a given configuration. In addition, the results of night sky experiments using a real star tracker agree with the simulation results. The regularities summarized in this paper should prove helpful in the system design and dynamic performance evaluation of highly dynamic star trackers. PMID:24618776
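    The sub-pixel centroiding the authors analyze degrades under motion because a moving star smears across pixels during the exposure. The toy model below (invented parameters; a simplified stand-in for their dynamic star-spot imaging model) renders a Gaussian spot drifting during the exposure and locates it with an intensity-weighted centroid:

```python
import numpy as np

def centroid(img):
    """Sub-pixel star-spot location via intensity-weighted center of mass."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def blurred_spot(size, x0, y0, sigma, vx, t_exp, steps=50):
    """Gaussian spot smeared along x as it drifts vx px/s over exposure t_exp."""
    ys, xs = np.indices((size, size))
    img = np.zeros((size, size))
    for s in np.linspace(0, t_exp, steps):
        cx = x0 + vx * s
        img += np.exp(-((xs - cx) ** 2 + (ys - y0) ** 2) / (2 * sigma**2))
    return img / steps

# Longer exposure -> longer streak; the centroid sits mid-streak,
# while per-pixel signal (and hence detection sensitivity) drops.
img = blurred_spot(size=31, x0=10.0, y0=15.0, sigma=1.5, vx=20.0, t_exp=0.5)
cx, cy = centroid(img)
print(f"centroid: ({cx:.2f}, {cy:.2f})")
```

    With noise added, the streak's lower peak signal is what drives the trade-off between detection sensitivity and centroiding error that motivates an optimal exposure time.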

  3. Correlation of Head Impacts to Change in Balance Error Scoring System Scores in Division I Men's Lacrosse Players.

    PubMed

    Miyashita, Theresa L; Diakogeorgiou, Eleni; Marrie, Kaitlyn

    Investigation into the effect of cumulative subconcussive head impacts has yielded various results in the literature, with many supporting a link to neurological deficits. Little research has been conducted on men's lacrosse and associated balance deficits from head impacts. (1) Athletes will commit more errors on the postseason Balance Error Scoring System (BESS) test. (2) There will be a positive correlation to change in BESS scores and head impact exposure data. Prospective longitudinal study. Level 3. Thirty-four Division I men's lacrosse players (age, 19.59 ± 1.42 years) wore helmets instrumented with a sensor to collect head impact exposure data over the course of a competitive season. Players completed a BESS test at the start and end of the competitive season. The number of errors from pre- to postseason increased during the double-leg stance on foam ( P < 0.001), tandem stance on foam ( P = 0.009), total number of errors on a firm surface ( P = 0.042), and total number of errors on a foam surface ( P = 0.007). There were significant correlations only between the total errors on a foam surface and linear acceleration ( P = 0.038, r = 0.36), head injury criteria ( P = 0.024, r = 0.39), and Gadd Severity Index scores ( P = 0.031, r = 0.37). Changes in the total number of errors on a foam surface may be considered a sensitive measure to detect balance deficits associated with cumulative subconcussive head impacts sustained over the course of 1 lacrosse season, as measured by average linear acceleration, head injury criteria, and Gadd Severity Index scores. If there is microtrauma to the vestibular system due to repetitive subconcussive impacts, only an assessment that highly stresses the vestibular system may be able to detect these changes. Cumulative subconcussive impacts may result in neurocognitive dysfunction, including balance deficits, which are associated with an increased risk for injury. 
The development of a strategy to reduce the total number of head impacts may curb the associated sequelae. Incorporating a modified BESS test (firm surface only) may not be advisable, as it may not detect changes due to repetitive impacts over the course of a competitive season.

  4. Brain signaling and behavioral responses induced by exposure to (56)Fe-particle radiation

    NASA Technical Reports Server (NTRS)

    Denisova, N. A.; Shukitt-Hale, B.; Rabin, B. M.; Joseph, J. A.

    2002-01-01

Previous experiments have demonstrated that exposure to 56Fe-particle irradiation (1.5 Gy, 1 GeV) produced aging-like accelerations in neuronal and behavioral deficits. Astronauts on long-term space flights will be exposed to similar heavy-particle radiations that might have similar deleterious effects on neuronal signaling and cognitive behavior. Therefore, the present study evaluated whether radiation-induced spatial learning and memory behavioral deficits are associated with region-specific brain signaling deficits by measuring signaling molecules previously found to be essential for behavior [pre-synaptic vesicle proteins, synaptobrevin and synaptophysin, and protein kinases, calcium-dependent PRKCs (also known as PKCs) and PRKA (PRKA RIIbeta)]. The results demonstrated a significant radiation-induced increase in reference memory errors. The increases in reference memory errors were significantly negatively correlated with striatal synaptobrevin and frontal cortical synaptophysin expression. Both synaptophysin and synaptobrevin are synaptic vesicle proteins that are important in cognition. Striatal PRKA, a memory signaling molecule, was also significantly negatively correlated with reference memory errors. Overall, our findings suggest that radiation-induced pre-synaptic facilitation may contribute to the previously reported radiation-induced decrease in striatal dopamine release and to the disruption of central dopaminergic system integrity and dopamine-mediated behavior.

  5. Brain signaling and behavioral responses induced by exposure to (56)Fe-particle radiation.

    PubMed

    Denisova, N A; Shukitt-Hale, B; Rabin, B M; Joseph, J A

    2002-12-01

Previous experiments have demonstrated that exposure to 56Fe-particle irradiation (1.5 Gy, 1 GeV) produced aging-like accelerations in neuronal and behavioral deficits. Astronauts on long-term space flights will be exposed to similar heavy-particle radiations that might have similar deleterious effects on neuronal signaling and cognitive behavior. Therefore, the present study evaluated whether radiation-induced spatial learning and memory behavioral deficits are associated with region-specific brain signaling deficits by measuring signaling molecules previously found to be essential for behavior [pre-synaptic vesicle proteins, synaptobrevin and synaptophysin, and protein kinases, calcium-dependent PRKCs (also known as PKCs) and PRKA (PRKA RIIbeta)]. The results demonstrated a significant radiation-induced increase in reference memory errors. The increases in reference memory errors were significantly negatively correlated with striatal synaptobrevin and frontal cortical synaptophysin expression. Both synaptophysin and synaptobrevin are synaptic vesicle proteins that are important in cognition. Striatal PRKA, a memory signaling molecule, was also significantly negatively correlated with reference memory errors. Overall, our findings suggest that radiation-induced pre-synaptic facilitation may contribute to the previously reported radiation-induced decrease in striatal dopamine release and to the disruption of central dopaminergic system integrity and dopamine-mediated behavior.

  6. Volumetric breast density measurement: sensitivity analysis of a relative physics approach

    PubMed Central

    Lau, Susie; Abdul Aziz, Yang Faridah

    2016-01-01

Objective: To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. Methods: 3317 raw digital mammograms were processed with Volpara® (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Results: Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects on VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Conclusion: Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Advances in knowledge: Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, although for more advanced applications, such as tracking density change over time, it remains to be seen how accurate the measures need to be. PMID:27452264

  7. Volumetric breast density measurement: sensitivity analysis of a relative physics approach.

    PubMed

    Lau, Susie; Ng, Kwan Hoong; Abdul Aziz, Yang Faridah

    2016-10-01

To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. 3317 raw digital mammograms were processed with Volpara(®) (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects on VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, although for more advanced applications, such as tracking density change over time, it remains to be seen how accurate the measures need to be.
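    The sensitivity analysis described above, varying one acquisition parameter by ±10% and recording the shift in computed density, can be sketched generically. The model below is a toy stand-in, not Volpara's proprietary algorithm; all coefficients and the log-signal formula are invented for illustration.

```python
import numpy as np

def toy_vbd(cbt_mm, kvp, pixel_mean):
    """Toy stand-in for a relative-physics density model (NOT Volpara's
    actual algorithm): fibroglandular fraction from a log-signal ratio
    scaled by beam hardness, expressed as percent density."""
    # Invented effective attenuation coefficients (1/cm), softened at higher kVp.
    mu_fat, mu_fg = 0.45 * (30 / kvp), 0.80 * (30 / kvp)
    frac = np.clip(np.log(1.0 / pixel_mean) / ((mu_fg - mu_fat) * cbt_mm / 10), 0, 1)
    return 100 * frac

baseline = toy_vbd(cbt_mm=55, kvp=30, pixel_mean=0.6)
for delta in (-0.10, 0.10):                    # simulate ±10% CBT error
    shifted = toy_vbd(cbt_mm=55 * (1 + delta), kvp=30, pixel_mean=0.6)
    print(f"CBT {delta:+.0%}: VBD shift {shifted - baseline:+.2f} points")
```

    Because the thickness term sits in the denominator, even this toy model shows why CBT errors propagate more strongly into VBD than, say, mAs errors, which mostly rescale the signal.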

  8. Merging Psychophysical and Psychometric Theory to Estimate Global Visual State Measures from Forced-Choices

    NASA Astrophysics Data System (ADS)

    Massof, Robert W.; Schmidt, Karen M.; Laby, Daniel M.; Kirschen, David; Meadows, David

    2013-09-01

    Visual acuity, a forced-choice psychophysical measure of visual spatial resolution, is the sine qua non of clinical visual impairment testing in ophthalmology and optometry patients with visual system disorders ranging from refractive error to retinal, optic nerve, or central visual system pathology. Visual acuity measures are standardized against a norm, but it is well known that visual acuity depends on a variety of stimulus parameters, including contrast and exposure duration. This paper asks if it is possible to estimate a single global visual state measure from visual acuity measures as a function of stimulus parameters that can represent the patient's overall visual health state with a single variable. Psychophysical theory (at the sensory level) and psychometric theory (at the decision level) are merged to identify the conditions that must be satisfied to derive a global visual state measure from parameterised visual acuity measures. A global visual state measurement model is developed and tested with forced-choice visual acuity measures from 116 subjects with no visual impairments and 560 subjects with uncorrected refractive error. The results are in agreement with the expectations of the model.

  9. Adverse childhood experiences and later life adult obesity and smoking in the United States.

    PubMed

    Rehkopf, David H; Headen, Irene; Hubbard, Alan; Deardorff, Julianna; Kesavan, Yamini; Cohen, Alison K; Patil, Divya; Ritchie, Lorrene D; Abrams, Barbara

    2016-07-01

    Prior work demonstrates associations between physical abuse, household alcohol abuse, and household mental illness early in life with obesity and smoking. Studies, however, have not generally been in nationally representative samples and have not conducted analyses to account for bias in the exposure. We used data from the 1979 U.S. National Longitudinal Survey of Youth to test associations between measures of adverse childhood experiences with obesity and smoking and used an instrumental variables approach to address potential measurement error of the exposure. Models demonstrated associations between childhood physical abuse and obesity at age 40 years (odds ratio [OR] 1.23; 95% confidence interval [CI], 1.00-1.52) and ever smoking (OR 1.83; 95% CI, 1.56-2.16), as well as associations between household alcohol abuse (OR 1.53; 95% CI, 1.31-1.79) and household mental illness (OR 1.29; 95% CI, 1.04-1.60) with ever smoking. We find no evidence of association modification by gender, socioeconomic position, or race and/or ethnicity. Instrumental variables analysis using a sibling's report of adverse childhood experiences demonstrated a relationship between household alcohol abuse and smoking, with a population attributable fraction of 17% (95% CI, 2.0%-37%) for ever smoking and 6.7% (95% CI, 1.6%-12%) for currently smoking. Findings suggest long-term impacts of childhood exposure to physical abuse, household alcohol abuse, and parental mental illness on obesity and smoking and that the association between household alcohol abuse and smoking is not solely due to measurement error. Copyright © 2016 Elsevier Inc. All rights reserved.
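    The instrumental-variables idea above, using a sibling's report as an instrument for an error-prone self-report, can be illustrated with a linear two-stage least squares toy (simulated data with invented effect sizes; the study's actual outcome models are logistic):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
truth = rng.normal(size=n)                 # true (unobserved) childhood exposure
x = truth + rng.normal(size=n)             # self-report with measurement error
z = truth + rng.normal(size=n)             # sibling's report: independent error
y = 0.4 * truth + rng.normal(size=n)       # outcome; true effect is 0.4

# Naive regression of Y on the error-prone X is attenuated toward zero.
naive = np.cov(x, y)[0, 1] / np.var(x)

# Two-stage least squares: instrument X with the sibling's report Z.
x_hat = np.polyval(np.polyfit(z, x, 1), z)     # stage 1: predict X from Z
iv = np.cov(x_hat, y)[0, 1] / np.var(x_hat)    # stage 2: regress Y on fitted X

print(f"naive: {naive:.2f}, IV: {iv:.2f}")
```

    The instrument works because the sibling's reporting error is (by assumption) independent of the self-report's error, so only the shared true-exposure signal survives stage 1.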

  10. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    PubMed

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
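    The comparison of promptly processed versus held replicate aliquots comes down to the change in mean log10 counts. A minimal sketch with hypothetical replicate counts (not study data):

```python
import numpy as np

def log10_change(prompt_counts, held_counts):
    """Change in mean log10 count between promptly processed replicates
    and replicates processed after a holding/shipping delay."""
    return np.log10(np.mean(held_counts)) - np.log10(np.mean(prompt_counts))

# Hypothetical replicate counts (CFU/mL) for one split sample: 8 aliquots
# processed promptly, 8 processed the following day.
prompt = [820, 760, 910, 680, 750, 800, 880, 700]
held = [640, 700, 590, 720, 660, 610, 690, 650]
delta = log10_change(prompt, held)
print(f"change after holding: {delta:+.2f} log10 units")
```

    Averaging replicates before taking logs damps the within-sample measurement error; a |delta| of 1 or more would correspond to the one-log changes the earlier study attributed to shipping.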

  11. Electroinduction disk sensor of electric field strength

    NASA Astrophysics Data System (ADS)

    Biryukov, S. V.; Korolyova, M. A.

    2018-01-01

Measuring the level of electric field exposure of technical and biological objects has long been an urgent task. Solving this problem requires electric field sensors with specified metrological characteristics. The aim of this study is to establish the theoretical basis for the calculation of flat electric field sensors. It is shown that the error of the sensor does not exceed 3% in the spatial range 0

  12. Apparent polyploidization after gamma irradiation: pitfalls in the use of quantitative polymerase chain reaction (qPCR) for the estimation of mitochondrial and nuclear DNA gene copy numbers.

    PubMed

    Kam, Winnie W Y; Lake, Vanessa; Banos, Connie; Davies, Justin; Banati, Richard

    2013-05-30

Quantitative polymerase chain reaction (qPCR) has been widely used to quantify changes in gene copy numbers after radiation exposure. Here, we show that gamma irradiation of cells and of cell-free DNA samples at doses from 10 to 100 Gy significantly affects the measured qPCR yield, due to radiation-induced fragmentation of the DNA template, and therefore introduces errors into the estimation of gene copy numbers. The radiation-induced DNA fragmentation, and thus the measured qPCR yield, varies with temperature not only in living cells but also in isolated DNA irradiated under cell-free conditions. In summary, the variability in measured qPCR yield from irradiated samples introduces a significant error into the estimation of both mitochondrial and nuclear gene copy numbers and may give spurious evidence for polyploidization.

  13. Astrometric Calibration and Performance of the Dark Energy Camera

    DOE PAGES

    Bernstein, G. M.; Armstrong, R.; Plazas, A. A.; ...

    2017-05-30

We characterize the ability of the Dark Energy Camera (DECam) to perform relative astrometry across its 500 Mpix, 3 deg² science field of view, and across 4 years of operation. This is done using internal comparisons of ~4 × 10^7 measurements of high-S/N stellar images obtained in repeat visits to fields of moderate stellar density, with the telescope dithered to move the sources around the array. An empirical astrometric model includes terms for: optical distortions; stray electric fields in the CCD detectors; chromatic terms in the instrumental and atmospheric optics; shifts in CCD relative positions of up to ≈10 μm when the DECam temperature cycles; and low-order distortions to each exposure from changes in atmospheric refraction and telescope alignment. Errors in this astrometric model are dominated by stochastic variations with typical amplitudes of 10-30 mas (in a 30 s exposure) and 5-10 arcmin coherence length, plausibly attributed to Kolmogorov-spectrum atmospheric turbulence. The size of these atmospheric distortions is not closely related to the seeing. Given an astrometric reference catalog at density ≈0.7 arcmin⁻², e.g. from Gaia, the typical atmospheric distortions can be interpolated to ≈7 mas RMS accuracy (for 30 s exposures) with 1 arcmin coherence length for residual errors. Remaining detectable error contributors are 2-4 mas RMS from unmodelled stray electric fields in the devices, and another 2-4 mas RMS from focal plane shifts between camera thermal cycles. Thus the astrometric solution for a single DECam exposure is accurate to 3-6 mas (≈0.02 pixels, or ≈300 nm) on the focal plane, plus the stochastic atmospheric distortion.

  14. Cross-shift changes in FEV1 in relation to wood dust exposure: the implications of different exposure assessment methods

    PubMed Central

    Schlunssen, V; Sigsgaard, T; Schaumburg, I; Kromhout, H

    2004-01-01

Background: Exposure-response analyses in occupational studies rely on the ability to distinguish workers with regard to exposures of interest. Aims: To evaluate different estimates of current average exposure in an exposure-response analysis on dust exposure and cross-shift decline in FEV1 among woodworkers. Methods: Personal dust samples (n = 2181) as well as data on lung function parameters were available for 1560 woodworkers from 54 furniture industries. The exposure to wood dust for each worker was calculated in eight different ways using individual measurements, group based exposure estimates, a weighted estimate of individual and group based exposure estimates, and predicted values from mixed models. Exposure-response relations between cross-shift changes in FEV1 and the exposure estimates were explored. Results: A positive exposure-response relation between average dust exposure and cross-shift FEV1 was shown for non-smokers only and appeared to be most pronounced among pine workers. In general, the highest slope and standard error (SE) were found for grouping by a combination of task and factory size, and the lowest slope and SE for estimates based on individual measurements, with the weighted estimate and the predicted values in between. Grouping by quintiles of average exposure for task and factory combinations revealed low slopes and high SE, despite a high contrast. Conclusion: For non-smokers, average dust exposure and cross-shift FEV1 were associated in an exposure dependent manner, especially among pine workers. This study confirms the consequences of using different exposure assessment strategies when studying exposure-response relations. It is possible to optimise exposure assessment by combining information from individual and group based exposure estimates, for instance by applying predicted values from mixed effects models. PMID:15377768

  15. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities.

    PubMed

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-03-01

Sound is among the significant environmental factors for people's health; it plays an important role in both physical and psychological injuries and also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person served as his or her own control, assessing the effect of noise on performance at sound levels of 70, 90, and 110 dB under different physical configurations of the sound source, using the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measurements were used to compare the length of performance as well as the errors measured in the test. Based on the results, we found a direct and significant association between the level of sound and the length of performance. Moreover, the participants' performance differed significantly between sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). This study found that a sound level of 110 dB had an important effect on the individuals' performance, i.e., performance decreased.

  16. Comparison of Grouping Schemes for Exposure to Total Dust in Cement Factories in Korea.

    PubMed

    Koh, Dong-Hee; Kim, Tae-Woo; Jang, Seung Hee; Ryu, Hyang-Woo; Park, Donguk

    2015-08-01

    The purpose of this study was to evaluate grouping schemes for exposure to total dust in cement industry workers using non-repeated measurement data. In total, 2370 total dust measurements taken from nine Portland cement factories in 1995-2009 were analyzed. Various grouping schemes were generated based on work process, job, factory, or average exposure. To characterize the variance components of each grouping scheme, we developed mixed-effects models with a B-spline time trend incorporated as fixed effects and a grouping variable incorporated as a random effect. Using the estimated variance components, elasticity was calculated. To compare the prediction performances of different grouping schemes, 10-fold cross-validation tests were conducted, and root mean squared errors and pooled correlation coefficients were calculated for each grouping scheme. The five exposure groups created a posteriori by ranking job and factory combinations according to average dust exposure showed the best prediction performance and highest elasticity among the various grouping schemes. Our findings suggest a grouping method based on ranking of job and factory combinations would be the optimal choice in this population. Our grouping method may aid exposure assessment efforts in similar occupational settings, minimizing the misclassification of exposures. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
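The elasticity computed from the variance components above is often defined as the share of total exposure variance lying between groups; the sketch below uses that reading, which may differ in detail from the paper's exact definition:

```python
def elasticity(var_between_groups, var_within_groups):
    """One common reading of 'elasticity' for a grouping scheme: the fraction
    of total exposure variance explained by group membership. Values near 1
    mean the grouping separates exposures sharply; near 0, hardly at all."""
    return var_between_groups / (var_between_groups + var_within_groups)

# A scheme whose groups capture 0.4 of 1.0 total (log-)variance units:
e = elasticity(0.4, 0.6)  # 0.4
```

Under this reading, the a posteriori ranking scheme wins because it maximizes the between-group share of variance relative to the within-group residual.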

  17. Laser speckle technique for burner liner strain measurements

    NASA Technical Reports Server (NTRS)

    Stetson, K. A.

    1982-01-01

    Thermal and mechanical strains were measured on samples of a common material used in jet engine burner liners, which were heated from room temperature to 870 C and cooled back to 220 C in a laboratory furnace. The physical geometry of the sample surface was recorded at selected temperatures by a set of 12 single-exposure specklegrams. Sequential pairs of specklegrams were compared in a heterodyne interferometer, which gives high-precision measurement of differential displacements. Good speckle correlation between the first and last specklegrams was noted, which allows a check on accumulated errors.

  18. Influence of Exposure Error and Effect Modification by Socioeconomic Status on the Association of Acute Cardiovascular Mortality with Particulate Matter in Phoenix

    EPA Science Inventory

    Using ZIP code-level mortality data, the association of cardiovascular mortality with PM2.5 and PM10-2.5,measured at a central monitoring site, was determined for three populations at different distances from the monitoring site but with similar numbers of d...

  19. Cycling exercise classes may be bad for your (hearing) health.

    PubMed

    Sinha, Sumi; Kozin, Elliott D; Naunheim, Matthew R; Barber, Samuel R; Wong, Kevin; Katz, Leanna W; Otero, Tiffany M N; Stefanov-Wagner, Ishmael J M; Remenschneider, Aaron K

    2017-08-01

    1) Determine the feasibility of smartphone-based mobile technology to measure noise exposure; and 2) measure noise exposure in exercise spin classes. Observational study. The SoundMeter Pro app (Faber Acoustical, Salt Lake City, UT) was installed and calibrated on iPhone and iPod devices in an audiology chamber using an external sound level meter to within 2 dBA of accuracy. Recording devices were placed in the bike cupholders of participants attending spin classes in Boston, Massachusetts (n = 17) and used to measure sound level (A-weighted) and noise dosimetry during exercise according to National Institute for Occupational Safety and Health (NIOSH) guidelines. The average length of exposure was 48.9 ± 1.2 (standard error of the mean) minutes per class. The maximum sound level recorded among the 17 random classes was 116.7 dBA, which was below the NIOSH instantaneous exposure guideline of 140 dBA. An average of 31.6 ± 3.8 minutes was spent at >100 dBA. This exceeds NIOSH recommendations of 15 minutes of exposure or less at 100 dBA per day. Average noise exposure for one 45-minute class was 8.95 ± 1.2 times the recommended noise exposure dose for an 8-hour workday. Preliminary data show that randomly sampled cycling classes may have high noise levels with a potential for noise-induced hearing loss. Mobile dosimetry technology may enable users to self-monitor risk to their hearing and actively engage in noise protection measures. Laryngoscope, 127:1873-1877, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
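The NIOSH dose figures quoted above follow directly from the NIOSH recommended exposure limit: 8 hours at 85 dBA, with permissible time halved for every 3-dB increase. A minimal sketch of that calculation (the study's exact dosimetry software is not reproduced here):

```python
def niosh_permissible_minutes(level_dba, criterion=85.0, exchange_rate=3.0):
    """Permissible daily exposure duration at a given A-weighted level under
    the NIOSH REL: 8 hours at 85 dBA, halved for every 3-dB increase."""
    return 8 * 60 / 2 ** ((level_dba - criterion) / exchange_rate)

def noise_dose_percent(exposures):
    """Daily noise dose as a percent of the NIOSH recommended limit;
    `exposures` is a list of (minutes, level_dBA) pairs, and 100 means the
    full daily allowance has been used."""
    return 100 * sum(t / niosh_permissible_minutes(level) for t, level in exposures)

# The abstract's 31.6 minutes above 100 dBA, counted at exactly 100 dBA,
# already uses roughly twice the daily allowance (15 min permitted at 100 dBA):
dose = noise_dose_percent([(31.6, 100.0)])  # ≈ 211
```

This is why a single 45-minute class can accumulate several times the dose recommended for a full 8-hour workday.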

  20. Modeling longitudinal data, I: principles of multivariate analysis.

    PubMed

    Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick

    2009-01-01

    Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful to obtain unbiased estimates of true effects or to predict future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error component of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).

  1. Very short-term reactive forecasting of the solar ultraviolet index using an extreme learning machine integrated with the solar zenith angle.

    PubMed

    Deo, Ravinesh C; Downs, Nathan; Parisi, Alfio V; Adamowski, Jan F; Quilty, John M

    2017-05-01

    Exposure to erythemally-effective solar ultraviolet radiation (UVR) that contributes to malignant keratinocyte cancers and associated health risk is best mitigated through innovative decision-support systems, with global solar UV index (UVI) forecasts necessary to inform real-time sun-protection behaviour recommendations. It follows that UVI forecasting models are useful tools for such decision-making. In this study, a model for computationally-efficient data-driven forecasting of diffuse and global very short-term reactive (VSTR) (10-min lead-time) UVI, enhanced by drawing on solar zenith angle (θs) data, was developed using an extreme learning machine (ELM) algorithm. An ELM algorithm typically serves to address complex and ill-defined forecasting problems. A UV spectroradiometer situated in Toowoomba, Australia, measured daily cycles (0500-1700 h) of UVI over the austral summer period. After trialling activation functions based on sine, hard-limit, logarithmic and tangent sigmoid, and triangular and radial basis networks for best results, an optimal ELM architecture utilising a logarithmic sigmoid function in the hidden layer, with lagged combinations of θs as the predictor data, was developed. The ELM's performance was evaluated using statistical metrics: correlation coefficient (r), Willmott's Index (WI), Nash-Sutcliffe efficiency coefficient (ENS), root mean square error (RMSE), and mean absolute error (MAE) between observed and forecasted UVI. Using these metrics, the ELM model's performance was compared to that of existing methods: multivariate adaptive regression splines (MARS), M5 Model Tree, and a semi-empirical (Pro6UV) clear-sky model. Based on RMSE and MAE values, the ELM model (0.255, 0.346, respectively) outperformed the MARS (0.310, 0.438) and M5 Model Tree (0.346, 0.466) models. Concurring with these metrics, the Willmott's Index values for the ELM, MARS and M5 Model Tree models were 0.966, 0.942 and 0.934, respectively.
About 57% of the ELM model's absolute errors were small in magnitude (±0.25), whereas the MARS and M5 Model Tree models generated 53% and 48% of such errors, respectively, indicating that the latter models' errors were distributed over a larger magnitude range. In terms of peak global UVI forecasting, the ELM model outperformed MARS and M5 Model Tree with half the level of error. A comparison of the magnitude of hourly-cumulated errors of 10-min lead-time forecasts for diffuse and global UVI highlighted the ELM model's greater accuracy compared to the MARS, M5 Model Tree and Pro6UV models. This confirms the versatility of an ELM model drawing on θs data for VSTR forecasting of UVI at a near real-time horizon. When applied to the goal of enhancing expert systems, ELM-based accurate forecasts capable of reacting quickly to measured conditions can enhance real-time exposure advice for the public, mitigating the potential for solar UV-exposure-related disease. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
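The skill metrics quoted in this record have standard definitions; a self-contained sketch of three of them (RMSE, MAE, and Willmott's index of agreement), independent of any particular forecasting model:

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and forecast series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error between observed and forecast series."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def willmott_index(obs, pred):
    """Willmott's index of agreement d in [0, 1]; 1 is a perfect forecast.
    Undefined (division by zero) only if all values equal the observed mean."""
    m = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - m) + abs(o - m)) ** 2 for o, p in zip(obs, pred))
    return 1 - num / den
```

Lower RMSE/MAE and a Willmott's index closer to 1 are the criteria by which the ELM is judged superior to MARS and M5 Model Tree above.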

  2. Effective formation method for an aspherical microlens array based on an aperiodic moving mask during exposure.

    PubMed

    Shi, Lifang; Du, Chunlei; Dong, Xiaochun; Deng, Qiling; Luo, Xiangang

    2007-12-01

    An aperiodic mask design method for fabricating a microlens array with an aspherical profile is proposed. The nonlinear relationship between exposure dose and lens profile is considered, and the selection criteria for the quantization interval and the fabrication range of the method are given. The mask function of a quadrangle microlens array with a hyperboloid profile for use in the infrared was constructed using this method. The microlens array can be effectively fabricated in a one-time exposure process using the mask. Reactive ion etching was carried out to transfer the structure into a germanium substrate. The measurement results indicate that the roughness is less than 10 nm (pv) and the profile error is less than 40 nm (rms).

  3. Performance evaluation of the active-flow personal DataRAM PM 2.5 mass monitor (Thermo Anderson pDR-1200) designed for continuous personal exposure measurements

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Bhabesh; Fine, Philip M.; Delfino, Ralph; Sioutas, Constantinos

    The need for continuous personal monitoring of exposure to particulate matter has been demonstrated by recent health studies showing effects of PM exposure on time scales of less than a few hours. Filter-based methods cannot measure this short-term variation of PM levels, which can be quite significant considering human activity patterns. The goal of this study was to evaluate the active-flow personal DataRAM for PM2.5 (MIE pDR-1200; Thermo Electron Corp., Franklin, MA), designed as a wearable monitor to continuously measure particle exposure. The instrument precision was found to be good (2.1%) and significantly better than that of the passive pDR configuration tested previously. A comparison to other proven continuous monitors showed good agreement at low relative humidities. Results at higher humidity followed predictable trends and provided a correction scheme that improved the accuracy of pDR readings. The pDR response to particle size also corresponded to previously observed and theoretical errors. The active-flow feature of the pDR allows collection of the sampled particles on a back-up filter. The 24-h mass measured on this filter was found to compare very well with a Federal Reference Method for PM2.5 mass.

  4. Exposure assessment models for elemental components of particulate matter in an urban environment: A comparison of regression and random forest approaches

    NASA Astrophysics Data System (ADS)

    Brokamp, Cole; Jandarov, Roman; Rao, M. B.; LeMasters, Grace; Ryan, Patrick

    2017-02-01

    Exposure assessment for elemental components of particulate matter (PM) using land use modeling is a complex problem due to the high spatial and temporal variations in pollutant concentrations at the local scale. Land use regression (LUR) models may fail to capture complex interactions and non-linear relationships between pollutant concentrations and land use variables. The increasing availability of big spatial data and machine learning methods presents an opportunity for improvement in PM exposure assessment models. In this manuscript, our objective was to develop a novel land use random forest (LURF) model and compare its accuracy and precision to a LUR model for elemental components of PM in the city of Cincinnati, Ohio. PM smaller than 2.5 μm (PM2.5) and eleven elemental components were measured at 24 sampling stations from the Cincinnati Childhood Allergy and Air Pollution Study (CCAAPS). Over 50 different predictors associated with transportation, physical features, community socioeconomic characteristics, greenspace, land cover, and emission point sources were used to construct LUR and LURF models. Cross validation was used to quantify and compare model performance. LURF and LUR models were created for aluminum (Al), copper (Cu), iron (Fe), potassium (K), manganese (Mn), nickel (Ni), lead (Pb), sulfur (S), silicon (Si), vanadium (V), zinc (Zn), and total PM2.5 in the CCAAPS study area. LURF utilized a more diverse and greater number of predictors than LUR, and LURF models for Al, K, Mn, Pb, Si, Zn, TRAP, and PM2.5 all showed a decrease in fractional predictive error of at least 5% compared to their LUR models. LURF models for Al, Cu, Fe, K, Mn, Pb, Si, Zn, TRAP, and PM2.5 all had a cross-validated fractional predictive error of less than 30%. Furthermore, LUR models showed a differential exposure assessment bias and had a higher prediction error variance. Random forest and other machine learning methods may provide more accurate exposure assessment.
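"Fractional predictive error" can be read as cross-validated RMSE scaled by the mean observed concentration; a sketch under that assumption, with model fitting abstracted behind `fit`/`predict` callables so any learner (LUR-style regression, random forest, ...) can be plugged in:

```python
import math

def cv_fractional_error(xs, ys, fit, predict, k=10):
    """k-fold cross-validated fractional predictive error: out-of-fold RMSE
    divided by the mean observed value. `fit(train_xs, train_ys)` returns a
    model; `predict(model, x)` returns one prediction."""
    n = len(ys)
    preds = [None] * n
    for fold in range(k):
        train_x = [xs[i] for i in range(n) if i % k != fold]
        train_y = [ys[i] for i in range(n) if i % k != fold]
        model = fit(train_x, train_y)
        for i in range(n):
            if i % k == fold:
                preds[i] = predict(model, xs[i])
    rmse = math.sqrt(sum((y - p) ** 2 for y, p in zip(ys, preds)) / n)
    return rmse / (sum(ys) / n)

# Trivial baseline learner that always predicts the training mean:
fit_mean = lambda X, Y: sum(Y) / len(Y)
predict_mean = lambda model, x: model
```

With only 24 stations the folds are small; setting `k = len(ys)` gives leave-one-out cross-validation, a common choice at that sample size.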

  5. Exposure assessment models for elemental components of particulate matter in an urban environment: A comparison of regression and random forest approaches.

    PubMed

    Brokamp, Cole; Jandarov, Roman; Rao, M B; LeMasters, Grace; Ryan, Patrick

    2017-02-01

    Exposure assessment for elemental components of particulate matter (PM) using land use modeling is a complex problem due to the high spatial and temporal variations in pollutant concentrations at the local scale. Land use regression (LUR) models may fail to capture complex interactions and non-linear relationships between pollutant concentrations and land use variables. The increasing availability of big spatial data and machine learning methods presents an opportunity for improvement in PM exposure assessment models. In this manuscript, our objective was to develop a novel land use random forest (LURF) model and compare its accuracy and precision to a LUR model for elemental components of PM in the city of Cincinnati, Ohio. PM smaller than 2.5 μm (PM2.5) and eleven elemental components were measured at 24 sampling stations from the Cincinnati Childhood Allergy and Air Pollution Study (CCAAPS). Over 50 different predictors associated with transportation, physical features, community socioeconomic characteristics, greenspace, land cover, and emission point sources were used to construct LUR and LURF models. Cross validation was used to quantify and compare model performance. LURF and LUR models were created for aluminum (Al), copper (Cu), iron (Fe), potassium (K), manganese (Mn), nickel (Ni), lead (Pb), sulfur (S), silicon (Si), vanadium (V), zinc (Zn), and total PM2.5 in the CCAAPS study area. LURF utilized a more diverse and greater number of predictors than LUR, and LURF models for Al, K, Mn, Pb, Si, Zn, TRAP, and PM2.5 all showed a decrease in fractional predictive error of at least 5% compared to their LUR models. LURF models for Al, Cu, Fe, K, Mn, Pb, Si, Zn, TRAP, and PM2.5 all had a cross-validated fractional predictive error of less than 30%. Furthermore, LUR models showed a differential exposure assessment bias and had a higher prediction error variance. Random forest and other machine learning methods may provide more accurate exposure assessment.

  6. Exposure assessment models for elemental components of particulate matter in an urban environment: A comparison of regression and random forest approaches

    PubMed Central

    Brokamp, Cole; Jandarov, Roman; Rao, M.B.; LeMasters, Grace; Ryan, Patrick

    2017-01-01

    Exposure assessment for elemental components of particulate matter (PM) using land use modeling is a complex problem due to the high spatial and temporal variations in pollutant concentrations at the local scale. Land use regression (LUR) models may fail to capture complex interactions and non-linear relationships between pollutant concentrations and land use variables. The increasing availability of big spatial data and machine learning methods presents an opportunity for improvement in PM exposure assessment models. In this manuscript, our objective was to develop a novel land use random forest (LURF) model and compare its accuracy and precision to a LUR model for elemental components of PM in the city of Cincinnati, Ohio. PM smaller than 2.5 μm (PM2.5) and eleven elemental components were measured at 24 sampling stations from the Cincinnati Childhood Allergy and Air Pollution Study (CCAAPS). Over 50 different predictors associated with transportation, physical features, community socioeconomic characteristics, greenspace, land cover, and emission point sources were used to construct LUR and LURF models. Cross validation was used to quantify and compare model performance. LURF and LUR models were created for aluminum (Al), copper (Cu), iron (Fe), potassium (K), manganese (Mn), nickel (Ni), lead (Pb), sulfur (S), silicon (Si), vanadium (V), zinc (Zn), and total PM2.5 in the CCAAPS study area. LURF utilized a more diverse and greater number of predictors than LUR, and LURF models for Al, K, Mn, Pb, Si, Zn, TRAP, and PM2.5 all showed a decrease in fractional predictive error of at least 5% compared to their LUR models. LURF models for Al, Cu, Fe, K, Mn, Pb, Si, Zn, TRAP, and PM2.5 all had a cross-validated fractional predictive error of less than 30%. Furthermore, LUR models showed a differential exposure assessment bias and had a higher prediction error variance. Random forest and other machine learning methods may provide more accurate exposure assessment.
PMID:28959135

  7. TOWARD GREATER IMPLEMENTATION OF THE EXPOSOME RESEARCH PARADIGM WITHIN ENVIRONMENTAL EPIDEMIOLOGY

    PubMed Central

    Stingone, Jeanette A.; Buck Louis, Germaine M.; Nakayama, Shoji F.; Vermeulen, Roel C. H.; Kwok, Richard K.; Cui, Yuxia; Balshaw, David M.; Teitelbaum, Susan L.

    2017-01-01

    Investigating a single environmental exposure in isolation does not reflect the actual human exposure circumstance nor does it capture the multifactorial etiology of health and disease. The exposome, defined as the totality of environmental exposures from conception onward, may advance our understanding of environmental contributors to disease by more fully assessing the multitude of human exposures across the life course. Implementation into studies of human health has been limited, in part owing to theoretical and practical challenges including a lack of infrastructure to support comprehensive exposure assessment, difficulty in differentiating physiologic variation from environmentally induced changes, and the need for study designs and analytic methods that accommodate specific aspects of the exposome, such as high-dimensional exposure data and multiple windows of susceptibility. Recommendations for greater data sharing and coordination, methods development, and acknowledgment and minimization of multiple types of measurement error are offered to encourage researchers to embark on exposome research to promote the environmental health and well-being of all populations. PMID:28125387

  8. Impact of temporal upscaling and chemical transport model horizontal resolution on reducing ozone exposure misclassification

    NASA Astrophysics Data System (ADS)

    Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William

    2017-10-01

    We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved into any spatial and temporal resolution. The flexibility of the framework allows for input of data at any choice of time scale and CTM predictions of any spatial resolution, with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact of these choices on exposure estimation error by first comparing estimation errors when BME relied on ozone concentration data either as an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM model performance for hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km2 versus a coarser resolution of 36 × 36 km2. Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates fivefold. This improvement was due to the improved spatial gradients and model performance found in the finer resolved CTM simulation. The integration of observational data and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications for exposure error.
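The DM8A metric compared above is the largest 8-hour running mean within a day; a minimal sketch restricted to windows that start and end within the same calendar day (regulatory definitions also allow windows that run past midnight, so this is a simplification):

```python
def daily_max_8h_average(hourly):
    """Daily maximum 8-h average (DM8A) from 24 hourly ozone values: the
    largest mean over the 17 complete 8-hour windows beginning at hours
    00:00 through 16:00 of the same day."""
    assert len(hourly) == 24
    return max(sum(hourly[start:start + 8]) / 8 for start in range(17))
```

Collapsing 24 hourly values to one DM8A (or D24A) value per day is what makes those inputs cheaper than hourly data in the BME framework, at the cost of temporal detail.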

  9. Evaluation of the validity of job exposure matrix for psychosocial factors at work.

    PubMed

    Solovieva, Svetlana; Pensola, Tiina; Kausto, Johanna; Shiri, Rahman; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira

    2014-01-01

    To study the performance of a developed job exposure matrix (JEM) for the assessment of psychosocial factors at work in terms of accuracy, possible misclassification bias and predictive ability to detect known associations with depression and low back pain (LBP). We utilized two large population surveys (the Health 2000 Study and the Finnish Work and Health Surveys), one to construct the JEM and another to test matrix performance. In the first study, information on job demands, job control, monotonous work and social support at work was collected via face-to-face interviews. Job strain was operationalized based on job demands and job control using the quadrant approach. In the second study, the sensitivity and specificity were estimated applying a Bayesian approach. The magnitude of misclassification error was examined by calculating the biased odds ratios as a function of the sensitivity and specificity of the JEM and fixed true prevalence and odds ratios. Finally, we adjusted the observed associations between JEM measures and selected health outcomes for misclassification error. The matrix showed good accuracy for job control and job strain, while its performance for other exposures was relatively low. Without correction for exposure misclassification, the JEM was able to detect the association between job strain and depression in men and between monotonous work and LBP in both genders. Our results suggest that the JEM more accurately identifies occupations with low control and high strain than those with high demands or low social support. Overall, the present JEM is a useful source of job-level psychosocial exposures in epidemiological studies lacking individual-level exposure information. Furthermore, we showed the applicability of a Bayesian approach in evaluating the performance of a JEM in a situation where, in practice, no gold standard of exposure assessment exists.
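Adjusting an observed association for exposure misclassification given sensitivity and specificity can be done with the standard matrix-method back-correction; the sketch below is illustrative only (not the authors' exact Bayesian procedure) and assumes nondifferential misclassification with sensitivity + specificity > 1:

```python
def corrected_odds_ratio(p_exposed_cases, p_exposed_controls, sensitivity, specificity):
    """Back-corrects an exposure odds ratio for nondifferential exposure
    misclassification: observed exposure prevalences among cases and controls
    are converted to true prevalences using the JEM's sensitivity and
    specificity, then the odds ratio is recomputed from the true values."""
    def true_prevalence(p_obs):
        return (p_obs + specificity - 1) / (sensitivity + specificity - 1)
    p1 = true_prevalence(p_exposed_cases)
    p0 = true_prevalence(p_exposed_controls)
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Hypothetical numbers: observed prevalences 0.40 (cases) vs 0.25 (controls),
# JEM sensitivity 0.7 and specificity 0.9. Observed OR = 2.0; corrected OR = 3.0,
# illustrating the usual bias toward the null from nondifferential error.
```

The corrected estimate is farther from 1 than the observed one, which is the direction of bias the abstract's correction exercise addresses.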

  10. Modeling individual exposures to ambient PM2.5 in the diabetes and the environment panel study (DEPS).

    PubMed

    Breen, Michael; Xu, Yadong; Schneider, Alexandra; Williams, Ronald; Devlin, Robert

    2018-06-01

    Air pollution epidemiology studies of ambient fine particulate matter (PM2.5) often use outdoor concentrations as exposure surrogates, which can induce exposure error. The goal of this study was to improve ambient PM2.5 exposure assessments for a repeated measurements study with 22 diabetic individuals in central North Carolina called the Diabetes and Environment Panel Study (DEPS) by applying the Exposure Model for Individuals (EMI), which predicts five tiers of individual-level exposure metrics for ambient PM2.5 using outdoor concentrations, questionnaires, weather, and time-location information. Using EMI, we linked a mechanistic air exchange rate (AER) model to a mass-balance PM2.5 infiltration model to predict residential AER (Tier 1), infiltration factors (Finf_home, Tier 2), indoor concentrations (Cin, Tier 3), personal exposure factors (Fpex, Tier 4), and personal exposures (E, Tier 5) for ambient PM2.5. We applied EMI to predict daily PM2.5 exposure metrics (Tiers 1-5) for 174 participant-days across the 13 months of DEPS. Individual model predictions were compared to a subset of daily measurements of Fpex and E (Tiers 4-5) from the DEPS participants. Model-predicted Fpex and E corresponded well to daily measurements with a median difference of 14% and 23%, respectively. Daily model predictions for all 174 days showed considerable temporal and house-to-house variability of AER, Finf_home, and Cin (Tiers 1-3), and person-to-person variability of Fpex and E (Tiers 4-5). Our study demonstrates the capability of predicting individual-level ambient PM2.5 exposure metrics for an epidemiological study, in support of improving risk estimation. Copyright © 2018. Published by Elsevier B.V.
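Tiers 2-3 of models like EMI typically follow a single-compartment steady-state mass balance in which the infiltration factor depends on the air exchange rate; a sketch with illustrative parameter values (EMI's actual parameterisation is richer than this, so the defaults below are assumptions):

```python
def infiltration_factor(aer, penetration=1.0, deposition_rate=0.2):
    """Steady-state fraction of ambient PM2.5 reaching indoor air from a
    single-compartment mass balance: Finf = P * AER / (AER + k). The default
    penetration efficiency P and deposition rate k (per hour) are
    illustrative values, not EMI's fitted parameters."""
    return penetration * aer / (aer + deposition_rate)

def indoor_ambient_pm25(outdoor, aer, penetration=1.0, deposition_rate=0.2):
    """Indoor concentration of ambient-origin PM2.5: Cin = Finf * Cout."""
    return infiltration_factor(aer, penetration, deposition_rate) * outdoor

# A leaky home (AER = 0.8 per hour) on a 10 ug/m3 day lets most ambient
# PM2.5 indoors; a tight home (AER = 0.2) admits far less.
```

Because AER varies by house and by day (driven by weather and window opening), Finf_home and Cin inherit the temporal and house-to-house variability the abstract reports.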

  11. Consideration of measurement error when using commercial indoor radon determinations for selecting radon action levels

    USGS Publications Warehouse

    Reimer, G.M.; Szarzi, S.L.; Dolan, Michael P.

    1998-01-01

    An examination of year-long, in-home radon measurements in Colorado from commercial companies applying typical methods indicates that considerable variation in precision exists. This variation can have a substantial impact on any mitigation decision, either voluntary or mandated by law, especially regarding property sale or exchange. Both long-term (nuclear track, greater than 90 days) and short-term (charcoal adsorption, 4-7 days) exposure methods were used. In addition, periods of continuous monitoring with a highly calibrated alpha-scintillometer took place for accuracy calibration. The results of duplicate commercial analyses show that typical results are no better than ±25 percent, with occasional outliers (up to 5 percent of all analyses) well beyond that limit. Differential seasonal measurements (winter/summer) by short-term methods provide information equivalent to single long-term measurements. Action levels in the U.S. for possible mitigation decisions should be selected so that they account for the measurement variability; specifically, they should reflect a concentration range similar to that adopted by the European Community.
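The "±25 percent" figure is a precision summary for duplicate determinations: each result of a pair deviating from the pair mean by up to a quarter. A sketch of that summary (one common convention; the authors' exact statistic is not given in the abstract):

```python
def duplicate_precision_percent(a, b):
    """Precision of a duplicate pair of radon determinations: each result's
    deviation from the pair mean, expressed in percent. Symmetric in a and b
    since both results lie the same distance from their mean."""
    mean = (a + b) / 2
    return 100 * abs(a - mean) / mean

# A duplicate pair reading 5.0 and 3.0 pCi/L averages 4.0, so each result
# deviates from the mean by 25% - the typical precision reported above.
```

A homeowner near an action level could easily receive one result above it and one below, which is why the abstract argues action levels should span a concentration range rather than a single cutoff.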

  12. A machine learning calibration model using random forests to improve sensor performance for lower-cost air quality monitoring

    NASA Astrophysics Data System (ADS)

    Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.

    2018-01-01

    Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. 
A key strength of the RF approach is that it accounts for pollutant cross-sensitivities. This highlights the importance of developing multipollutant sensor packages (as opposed to single-pollutant monitors); we determined this is especially critical for NO2 and CO2. The evaluation reveals that only the RF-calibrated sensors meet the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. We also demonstrate that the RF-model-calibrated sensors could detect differences in NO2 concentrations between a near-road site and a suburban site less than 1.5 km away. From this study, we conclude that combining RF models with carefully controlled state-of-the-art multipollutant sensor packages as in the RAMP monitors appears to be a very promising approach to address the poor performance that has plagued low-cost air quality sensors.

  13. Holistic approach for overlay and edge placement error to meet the 5nm technology node requirements

    NASA Astrophysics Data System (ADS)

    Mulkens, Jan; Slachter, Bram; Kubis, Michael; Tel, Wim; Hinnen, Paul; Maslow, Mark; Dillen, Harm; Ma, Eric; Chou, Kevin; Liu, Xuedong; Ren, Weiming; Hu, Xuerang; Wang, Fei; Liu, Kevin

    2018-03-01

    In this paper, we discuss the metrology methods and error budget that describe the edge placement error (EPE). EPE quantifies the pattern fidelity of a device structure made in a multi-patterning scheme. Here the pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources from the different process steps. EPE is computed by combining optical and e-beam metrology data. We show that a high-NA optical scatterometer can be used to densely measure in-device CD and overlay errors. A large-field e-beam system enables massive CD metrology, which is used to characterize the local CD error. The local CD distribution needs to be characterized beyond 6 sigma, which requires a high-throughput e-beam system. We present in this paper the first images from a multi-beam e-beam inspection system. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As a use case, we evaluated a 5-nm logic patterning process based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography, combined with line-cut exposures using EUV lithography.
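A common simplified way to combine overlay and CD contributions into an EPE budget is addition in quadrature, with CD terms halved because each pattern edge moves by half of any CD change. The paper's full budget contains more terms (etch bias, mask, metrology), so the sketch below is an illustrative assumption, not the authors' exact budget:

```python
import math

def edge_placement_error(overlay_sigma, local_cd_sigma, global_cd_sigma=0.0):
    """Simplified EPE budget (all inputs in nm): statistically independent
    overlay and CD error contributions combined in quadrature, with CD
    sigmas halved because an edge shifts by half of a CD change."""
    return math.sqrt(overlay_sigma ** 2
                     + (local_cd_sigma / 2) ** 2
                     + (global_cd_sigma / 2) ** 2)

# Hypothetical inputs: 3 nm overlay sigma and 8 nm local CD sigma combine
# to an EPE contribution of 5 nm under this simplified quadrature model.
```

Quadrature addition is what makes the massive e-beam CD sampling above necessary: the local CD term must be characterized far into the distribution tails before it can be budgeted against overlay.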

  14. Seeing Your Error Alters My Pointing: Observing Systematic Pointing Errors Induces Sensori-Motor After-Effects

    PubMed Central

    Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro

    2011-01-01

During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person-perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, exposure to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649

  15. Development and content validation of performance assessments for endoscopic third ventriculostomy.

    PubMed

    Breimer, Gerben E; Haji, Faizal A; Hoving, Eelco W; Drake, James M

    2015-08-01

This study aims to develop and establish the content validity of multiple expert rating instruments to assess performance in endoscopic third ventriculostomy (ETV), collectively called the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). The important aspects of ETV were identified through a review of current literature, ETV videos, and discussion with neurosurgeons, fellows, and residents. Three assessment measures were subsequently developed: a procedure-specific checklist (CL), a CL of surgical errors, and a global rating scale (GRS). Neurosurgeons from various countries, all identified as experts in ETV, were then invited to participate in a modified Delphi survey to establish the content validity of these instruments. In each Delphi round, experts rated their agreement with the inclusion of each procedural step, error, and GRS item in the respective instruments on a 5-point Likert scale. Seventeen experts agreed to participate in the study and completed all Delphi rounds. After item generation, a total of 27 procedural CL items, 26 error CL items, and 9 GRS items were posed to Delphi panelists for rating. An additional 17 procedural CL items, 12 error CL items, and 1 GRS item were added by panelists. After three rounds, strong consensus (>80% agreement) was achieved on 35 procedural CL items, 29 error CL items, and 10 GRS items. Moderate consensus (50-80% agreement) was achieved on an additional 7 procedural CL items and 1 error CL item. The final procedural and error checklists contained 42 and 30 items, respectively (divided into setup, exposure, navigation, ventriculostomy, and closure). The final GRS contained 10 items. We have established the content validity of three ETV assessment measures by iterative consensus of an international expert panel. Each measure provides unique assessment information and thus can be used individually or in combination, depending on the characteristics of the learner and the purpose of the assessment. These instruments must now be evaluated in both simulated and operative settings to determine their construct validity and reliability. Ultimately, the measures contained in the NEVAT may prove suitable for formative assessment during ETV training and potentially for summative assessment during certification.

  16. An accurate filter loading correction is essential for assessing personal exposure to black carbon using an Aethalometer.

    PubMed

    Good, Nicholas; Mölter, Anna; Peel, Jennifer L; Volckens, John

    2017-07-01

    The AE51 micro-Aethalometer (microAeth) is a popular and useful tool for assessing personal exposure to particulate black carbon (BC). However, few users of the AE51 are aware that its measurements are biased low (by up to 70%) due to the accumulation of BC on the filter substrate over time; previous studies of personal black carbon exposure are likely to have suffered from this bias. Although methods to correct for bias in micro-Aethalometer measurements of particulate black carbon have been proposed, these methods have not been verified in the context of personal exposure assessment. Here, five Aethalometer loading correction equations based on published methods were evaluated. Laboratory-generated aerosols of varying black carbon content (ammonium sulfate, Aquadag and NIST diesel particulate matter) were used to assess the performance of these methods. Filters from a personal exposure assessment study were also analyzed to determine how the correction methods performed for real-world samples. Standard correction equations produced correction factors with root mean square errors of 0.10 to 0.13 and mean bias within ±0.10. An optimized correction equation is also presented, along with sampling recommendations for minimizing bias when assessing personal exposure to BC using the AE51 micro-Aethalometer.
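One published loading-correction form (Virkkula et al., 2007) scales the raw reading by a linear function of the filter attenuation; the `k` value below is illustrative only, since the optimized equation from this study is not reproduced here.

```python
# Sketch of a linear filter-loading correction for micro-Aethalometer BC data:
# BC_corrected = (1 + k * ATN) * BC_raw  (Virkkula-style form, illustrative k).
def correct_bc(bc_raw_ug_m3, atn, k=0.004):
    """Correct a BC reading for filter loading.

    bc_raw_ug_m3 -- uncorrected BC concentration from the instrument
    atn          -- filter attenuation reported alongside the reading
    k            -- empirical loading parameter (value here is illustrative)
    """
    return (1.0 + k * atn) * bc_raw_ug_m3

# A heavily loaded filter (high ATN) reads low; the correction scales it up.
print(correct_bc(5.0, atn=80.0))  # 5.0 * (1 + 0.004 * 80) = 6.6
```

In practice `k` must be fitted per aerosol type, which is why the study evaluates the correction against laboratory aerosols of varying BC content.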

  17. Evaluation of Specific Absorption Rate as a Dosimetric Quantity for Electromagnetic Fields Bioeffects

    PubMed Central

    Panagopoulos, Dimitris J.; Johansson, Olle; Carlo, George L.

    2013-01-01

Purpose To evaluate SAR as a dosimetric quantity for EMF bioeffects, and to identify ways of increasing the precision in EMF dosimetry and bioactivity assessment. Methods We discuss the interaction of man-made electromagnetic waves with biological matter and calculate the energy transferred to a single free ion within a cell. We analyze the physics and biology of SAR and evaluate the methods of its estimation. We discuss the experimentally observed non-linearity between electromagnetic exposure and biological effect. Results We find that: a) The energy absorbed by living matter during exposure to environmentally encountered EMFs is normally well below the thermal level. b) All existing methods for SAR estimation, especially those based upon tissue conductivity and the internal electric field, have serious deficiencies. c) The only method to estimate SAR without large error is by measuring temperature increases within biological tissue, which are normally negligible at environmental EMF intensities and thus cannot be measured. Conclusions SAR actually refers to thermal effects, while the vast majority of the recorded biological effects from man-made non-ionizing environmental radiation are non-thermal. Even if SAR could be accurately estimated for a whole tissue, organ, or body, the biological/health effect is determined by tiny amounts of energy/power absorbed by specific biomolecules, which cannot be calculated. Moreover, it depends upon field parameters not taken into account in SAR calculation. Thus, SAR should not be used as the primary dosimetric quantity, but only as a complementary measure, always reporting the estimation method and the corresponding error. Radiation/field intensity, along with additional physical parameters (such as frequency and modulation) that can be directly, and in any case more accurately, measured on the surface of biological tissues, should constitute the primary measure for EMF exposures, despite similar uncertainty in predicting the biological effect due to non-linearity. PMID:23750202

  18. Neighborhood social stressors, fine particulate matter air pollution, and cognitive function among older U.S. adults.

    PubMed

    Ailshire, Jennifer; Karraker, Amelia; Clarke, Philippa

    2017-01-01

A growing number of studies have found a link between outdoor air pollution and cognitive function among older adults. Psychosocial stress is considered an important factor determining differential susceptibility to environmental hazards, and older adults living in stressful neighborhoods may be particularly vulnerable to the adverse health effects of exposure to hazards such as air pollution. The objective of this study is to determine if neighborhood social stress amplifies the association between fine particulate matter air pollution (PM2.5) and poor cognitive function in older, community-dwelling adults. We use data on 779 U.S. adults ages 55 and older from the 2001/2002 wave of the Americans' Changing Lives study. We determined annual average PM2.5 concentration in 2001 in the area of residence by linking respondents with EPA air monitoring data using census tract identifiers. Cognitive function was measured using the number of errors on the Short Portable Mental Status Questionnaire (SPMSQ). Exposure to neighborhood social stressors was measured using perceptions of disorder and decay and included subjective evaluations of neighborhood upkeep and the presence of deteriorating/abandoned buildings, trash, and empty lots. We used negative binomial regression to examine the interaction of perceived neighborhood stress and PM2.5 on the count of errors on the cognitive function assessment. We found that the association between PM2.5 and cognitive errors was stronger among older adults living in high-stress neighborhoods. These findings support recent theoretical developments in environmental health and health disparities research emphasizing the synergistic effects of neighborhood social stressors and environmental hazards on residents' health. Those living in socioeconomically disadvantaged neighborhoods, where social stressors and environmental hazards are more common, may be particularly susceptible to adverse health effects of social and physical environmental exposures. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Iowa radon leukaemia study: a hierarchical population risk model for spatially correlated exposure measured with error.

    PubMed

    Smith, Brian J; Zhang, Lixun; Field, R William

    2007-11-10

    This paper presents a Bayesian model that allows for the joint prediction of county-average radon levels and estimation of the associated leukaemia risk. The methods are motivated by radon data from an epidemiologic study of residential radon in Iowa that include 2726 outdoor and indoor measurements. Prediction of county-average radon is based on a geostatistical model for the radon data which assumes an underlying continuous spatial process. In the radon model, we account for uncertainties due to incomplete spatial coverage, spatial variability, characteristic differences between homes, and detector measurement error. The predicted radon averages are, in turn, included as a covariate in Poisson models for incident cases of acute lymphocytic (ALL), acute myelogenous (AML), chronic lymphocytic (CLL), and chronic myelogenous (CML) leukaemias reported to the Iowa cancer registry from 1973 to 2002. Since radon and leukaemia risk are modelled simultaneously in our approach, the resulting risk estimates accurately reflect uncertainties in the predicted radon exposure covariate. Posterior mean (95 per cent Bayesian credible interval) estimates of the relative risk associated with a 1 pCi/L increase in radon for ALL, AML, CLL, and CML are 0.91 (0.78-1.03), 1.01 (0.92-1.12), 1.06 (0.96-1.16), and 1.12 (0.98-1.27), respectively. Copyright 2007 John Wiley & Sons, Ltd.
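The central point of the joint model, that uncertainty in the predicted exposure covariate should carry through to the risk estimate, can be illustrated with a simple simulation; all numbers are invented except the CLL point estimate quoted from the abstract, and this is not the paper's Bayesian model.

```python
# Hypothetical sketch: propagating uncertainty in a predicted radon exposure
# into the relative-risk estimate by simulation.
import numpy as np

rng = np.random.default_rng(2)
log_rr_per_pci = np.log(1.06)      # e.g. the CLL point estimate per 1 pCi/L
radon_mean, radon_sd = 8.0, 1.5    # predicted county radon (pCi/L), with uncertainty

draws = rng.normal(radon_mean, radon_sd, 10_000)  # exposure "posterior" draws
rr_draws = np.exp(log_rr_per_pci * draws)         # relative risk per draw
lo, hi = np.percentile(rr_draws, [2.5, 97.5])
print(f"RR at predicted radon: {np.exp(log_rr_per_pci * radon_mean):.2f} "
      f"(interval reflecting exposure uncertainty: {lo:.2f}-{hi:.2f})")
```

Plugging in a single predicted radon value would yield the point estimate alone and understate the interval, which is the error the joint modelling approach avoids.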

  20. Assessing Exposure and Health Consequences of Chemicals in Drinking Water: Current State of Knowledge and Research Needs

    PubMed Central

    Kogevinas, Manolis; Cordier, Sylvaine; Templeton, Michael R.; Vermeulen, Roel; Nuckols, John R.; Nieuwenhuijsen, Mark J.; Levallois, Patrick

    2014-01-01

    Background: Safe drinking water is essential for well-being. Although microbiological contamination remains the largest cause of water-related morbidity and mortality globally, chemicals in water supplies may also cause disease, and evidence of the human health consequences is limited or lacking for many of them. Objectives: We aimed to summarize the state of knowledge, identify gaps in understanding, and provide recommendations for epidemiological research relating to chemicals occurring in drinking water. Discussion: Assessing exposure and the health consequences of chemicals in drinking water is challenging. Exposures are typically at low concentrations, measurements in water are frequently insufficient, chemicals are present in mixtures, exposure periods are usually long, multiple exposure routes may be involved, and valid biomarkers reflecting the relevant exposure period are scarce. In addition, the magnitude of the relative risks tends to be small. Conclusions: Research should include well-designed epidemiological studies covering regions with contrasting contaminant levels and sufficient sample size; comprehensive evaluation of contaminant occurrence in combination with bioassays integrating the effect of complex mixtures; sufficient numbers of measurements in water to evaluate geographical and temporal variability; detailed information on personal habits resulting in exposure (e.g., ingestion, showering, swimming, diet); collection of biological samples to measure relevant biomarkers; and advanced statistical models to estimate exposure and relative risks, considering methods to address measurement error. Last, the incorporation of molecular markers of early biological effects and genetic susceptibility is essential to understand the mechanisms of action. There is a particular knowledge gap and need to evaluate human exposure and the risks of a wide range of emerging contaminants. 
Citation: Villanueva CM, Kogevinas M, Cordier S, Templeton MR, Vermeulen R, Nuckols JR, Nieuwenhuijsen MJ, Levallois P. 2014. Assessing exposure and health consequences of chemicals in drinking water: current state of knowledge and research needs. Environ Health Perspect 122:213–221; http://dx.doi.org/10.1289/ehp.1206229 PMID:24380896

  1. High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis

    USGS Publications Warehouse

    Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher

    2015-01-01

Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m2 exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root-mean-square error (RMSE) with as few as six control points. RMSE decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
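The accuracy metric used above, RMSE over surveyed control points, can be computed as follows; the coordinates are invented and stand in for model-predicted versus surveyed positions.

```python
# Minimal sketch: positional RMSE between model-predicted and surveyed
# control-point coordinates (meters), the accuracy metric for the photomosaic.
import numpy as np

predicted = np.array([[0.00, 0.01], [1.02, 0.99], [2.01, 2.00]])
surveyed  = np.array([[0.00, 0.00], [1.00, 1.00], [2.00, 2.00]])

# Per-point squared Euclidean distance, averaged, then square-rooted
rmse = np.sqrt(np.mean(np.sum((predicted - surveyed) ** 2, axis=1)))
print(f"{rmse * 100:.1f} cm")
```

Evaluating this quantity while varying how many control points constrain the model is the essence of the error analysis described in the record.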

  2. Sensitivity analysis of the near-road dispersion model RLINE - An evaluation at Detroit, Michigan

    NASA Astrophysics Data System (ADS)

    Milando, Chad W.; Batterman, Stuart A.

    2018-05-01

    The development of accurate and appropriate exposure metrics for health effect studies of traffic-related air pollutants (TRAPs) remains challenging and important given that traffic has become the dominant urban exposure source and that exposure estimates can affect estimates of associated health risk. Exposure estimates obtained using dispersion models can overcome many of the limitations of monitoring data, and such estimates have been used in several recent health studies. This study examines the sensitivity of exposure estimates produced by dispersion models to meteorological, emission and traffic allocation inputs, focusing on applications to health studies examining near-road exposures to TRAP. Daily average concentrations of CO and NOx predicted using the Research Line source model (RLINE) and a spatially and temporally resolved mobile source emissions inventory are compared to ambient measurements at near-road monitoring sites in Detroit, MI, and are used to assess the potential for exposure measurement error in cohort and population-based studies. Sensitivity of exposure estimates is assessed by comparing nominal and alternative model inputs using statistical performance evaluation metrics and three sets of receptors. The analysis shows considerable sensitivity to meteorological inputs; generally the best performance was obtained using data specific to each monitoring site. An updated emission factor database provided some improvement, particularly at near-road sites, while the use of site-specific diurnal traffic allocations did not improve performance compared to simpler default profiles. Overall, this study highlights the need for appropriate inputs, especially meteorological inputs, to dispersion models aimed at estimating near-road concentrations of TRAPs. It also highlights the potential for systematic biases that might affect analyses that use concentration predictions as exposure measures in health studies.

  3. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk.

    PubMed

    Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth

    2006-07-01

    This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
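The attenuation effect of random recall error can be reproduced in a minimal simulation; the distributions and slope below are invented for illustration and are not the INTERPHONE scenarios.

```python
# Hedged sketch: random multiplicative error in self-reported exposure
# attenuates the estimated exposure-response slope toward zero.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
true_use = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # true phone use (arbitrary units)
reported = true_use * rng.lognormal(0.0, 0.8, size=n)  # random multiplicative recall error
risk_slope = 0.1                                       # assumed true slope on a log-risk scale
log_risk = risk_slope * true_use + rng.normal(0, 1, n)

naive = np.polyfit(reported, log_risk, 1)[0]           # slope using reported use
print(f"true slope 0.10, estimated with recall error: {naive:.3f}")
```

The estimated slope is well below the true 0.10, mirroring the paper's finding that plausible random recall errors produce large underestimation of risk.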

  4. Indoor-to-outdoor particle concentration ratio model for human exposure analysis

    NASA Astrophysics Data System (ADS)

    Lee, Jae Young; Ryu, Sung Hee; Lee, Gwangjae; Bae, Gwi-Nam

    2016-02-01

This study presents an indoor-to-outdoor particle concentration ratio (IOR) model for improved estimates of indoor exposure levels. This model is useful in epidemiological studies with large populations, because sampling indoor pollutants in all participants' houses is often necessary but impractical. As part of a study examining the association between air pollutants and atopic dermatitis in children, 16 parents agreed to measure the indoor and outdoor PM10 and PM2.5 concentrations at their homes for 48 h. Correlation analysis and multi-step multivariate linear regression analysis were performed to develop the IOR model. Temperature and floor level were found to be powerful predictors of the IOR. Despite the simplicity of the model, it demonstrated high accuracy in terms of the root mean square error (RMSE). Especially for long-term IOR estimations, the RMSE was as low as 0.064 and 0.063 for PM10 and PM2.5, respectively. When using a prediction model in an epidemiological study, understanding the consequences of the modeling error and justifying the use of the model is very important. In the last section, this paper discusses the impact of the modeling error and develops a novel methodology to justify the use of the model.
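A minimal sketch of such an IOR model follows, assuming invented data and coefficients; the paper's actual predictors, fitted values, and sample are not reproduced here.

```python
# Hypothetical sketch: regress measured indoor/outdoor PM ratios (IOR) on
# temperature and floor level, then report the fit's RMSE.
import numpy as np

rng = np.random.default_rng(4)
n = 16
temp = rng.uniform(-5, 25, n)        # outdoor temperature (C), invented
floor = rng.integers(1, 15, n)       # floor level of the home, invented
# Simulated truth: warmer weather raises IOR slightly, higher floors lower it
ior = 0.6 + 0.005 * temp - 0.01 * floor + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), temp, floor])        # intercept + predictors
coef, *_ = np.linalg.lstsq(X, ior, rcond=None)        # ordinary least squares
rmse = np.sqrt(np.mean((X @ coef - ior) ** 2))
print(f"in-sample RMSE: {rmse:.3f}")
```

With the fitted coefficients, an epidemiological study can estimate indoor exposure for every participant from routinely available covariates instead of in-home sampling.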

  5. Validation of automatic joint space width measurements in hand radiographs in rheumatoid arthritis.

    PubMed

    Schenk, Olga; Huo, Yinghe; Vincken, Koen L; van de Laar, Mart A; Kuper, Ina H H; Slump, Kees C H; Lafeber, Floris P J G; Bernelot Moens, Hein J

    2016-10-01

Computerized methods promise quick, objective, and sensitive tools to quantify progression of radiological damage in rheumatoid arthritis (RA). Measurement of joint space width (JSW) in finger and wrist joints with these systems has performed comparably to the Sharp-van der Heijde score (SHS). As a next step toward clinical use, validation of precision and accuracy in hand joints with minimal damage is described, with close scrutiny of sources of error. A recently developed system to measure metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints was validated in consecutive hand images of RA patients. To assess the impact of image acquisition, measurements on radiographs from a multicenter trial and from a recent prospective cohort in a single hospital were compared. Precision of the system was tested by comparing the joint space in mm in pairs of subsequent images with a short interval without progression of SHS. In case of incorrect measurements, the source of error was analyzed with a review by human experts. Accuracy was assessed by comparison with reported measurements from other systems. In the two series of radiographs, the system could automatically locate and measure 1003/1088 (92.2%) and 1143/1200 (95.3%) individual joints, respectively. In joints with a normal SHS, the average (SD) size of MCP joints was [Formula: see text] and [Formula: see text] in the two series of radiographs, and of PIP joints [Formula: see text] and [Formula: see text]. The difference in JSW between two serial radiographs with an interval of 6 to 12 months and unchanged SHS was [Formula: see text], indicating very good precision. Errors occurred more often in radiographs from the multicenter cohort than in the more recent series from a single hospital.
Detailed analysis of the 55/1125 (4.9%) measurements that had a discrepant paired measurement revealed that variation in the process of image acquisition (exposure in 15% and repositioning in 57%) was a more frequent source of error than incorrect delineation by the software (25%). Various steps in the validation of an automated measurement system for JSW of MCP and PIP joints are described. The use of serial radiographs from different sources, with a short interval and limited damage, is helpful to detect sources of error. Image acquisition, in particular repositioning, is a dominant source of error.

  6. Effect of occupational silica exposure on pulmonary function.

    PubMed

    Hertzberg, Vicki Stover; Rosenman, Kenneth D; Reilly, Mary Jo; Rice, Carol H

    2002-08-01

To assess the effect of occupational silica exposure on pulmonary function. Epidemiologic evaluation based on employee interview, plant walk-through, and information abstracted from company medical records, employment records, and industrial hygiene measurements. Drawn from 1,072 current and former hourly wage workers employed before January 1, 1986. Thirty-six individuals with radiographic evidence of parenchymal changes consistent with asbestosis or silicosis were excluded. In addition, eight individuals whose race was listed as other than white or black were excluded. Analysis of spirometry data (FVC, FEV1, FEV1/FVC), using only test results that met American Thoracic Society criteria for reproducibility and acceptability, shows decreasing percent-predicted FVC and FEV1 and decreasing FEV1/FVC in relation to increasing silica exposure among smokers. Logistic regression analyses of abnormal FVC and abnormal FEV1 values (where abnormal is defined as < 95% confidence limit for predicted using the Knudson prediction equations) show odds ratios of 1.49 and 1.68, respectively, for occurrence of an abnormal result with 40 years of exposure at the Occupational Safety and Health Administration (OSHA)-allowable level of 0.1 mg/m3. Longitudinal analyses of FVC and FEV1 measurements show declines of 1.6 mL/yr and 1.1 mL/yr, respectively, per milligram/cubic meter of mean silica exposure (p = 0.011 and p = 0.001, respectively). All analyses were adjusted for weight, height, age, ethnicity, smoking status, and other silica exposures. Systematic problems leading to measurement error were possible, but would have been nondifferential in effect and not related to silica measurements. There is a consistent association between increased pulmonary function abnormalities and estimated measures of cumulative silica exposure within the current allowable OSHA regulatory level. Despite concerns about the quality control of the pulmonary function measurements used in these analyses, our results support the need to lower allowable air levels of silica and to increase efforts to encourage cessation of cigarette smoking among silica-exposed workers.
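As a consistency check on the reported odds ratios, the logistic coefficient implied by OR = 1.49 at 40 years of exposure at the OSHA level of 0.1 mg/m3 (i.e., 4 mg/m3-years of cumulative exposure) can be back-calculated:

```python
# Worked arithmetic (illustrative): converting a reported odds ratio for a
# given cumulative exposure into the implied per-unit logistic coefficient.
import math

cumulative = 40 * 0.1                 # 40 years at 0.1 mg/m3 = 4 mg/m3-years
beta = math.log(1.49) / cumulative    # implied log-odds slope per mg/m3-year
print(f"beta = {beta:.3f} per mg/m3-year; "
      f"OR at {cumulative:.0f} mg/m3-years = {math.exp(beta * cumulative):.2f}")
```

The same arithmetic applied to the FEV1 odds ratio of 1.68 gives a slightly steeper implied slope, illustrating how the two reported ORs compare on a per-unit scale.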

  7. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment

    PubMed Central

    O’Brien, Katie M.; Upson, Kristen; Cook, Nancy R.; Weinberg, Clarice R.

    2015-01-01

    Background Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. Objectives We compared adjustment methods, including novel approaches, using simulated case–control data. Methods Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. Results For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. Conclusions To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals. Citation O’Brien KM, Upson K, Cook NR, Weinberg CR. 2016. Environmental chemicals in urine and blood: improving methods for creatinine and lipid adjustment. Environ Health Perspect 124:220–227; http://dx.doi.org/10.1289/ehp.1509693 PMID:26219104
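The covariate-adjusted standardization idea can be sketched as follows; the covariates, coefficients, and simulated data are invented, and this is not the authors' exact estimator.

```python
# Hedged sketch: standardize a urinary biomarker by the creatinine level
# *predicted* from covariates, then keep observed creatinine available as a
# separate regression covariate (the two-part strategy described above).
import numpy as np

rng = np.random.default_rng(5)
n = 1000
age = rng.uniform(20, 70, n)
hydration = rng.normal(0, 1, n)
# Simulated creatinine depends on age and hydration (coefficients invented)
creatinine = np.exp(0.5 - 0.005 * age + 0.3 * hydration + rng.normal(0, 0.2, n))
biomarker = rng.lognormal(0, 0.5, n) * creatinine   # dilution-dependent measurement

# Predict creatinine from covariates with a log-linear least-squares fit
X = np.column_stack([np.ones(n), age, hydration])
coef, *_ = np.linalg.lstsq(X, np.log(creatinine), rcond=None)
pred_cr = np.exp(X @ coef)

# Covariate-adjusted standardization: divide by *predicted* creatinine,
# then also include observed creatinine as a covariate in the outcome model.
standardized = biomarker / pred_cr
print(standardized[:3])
```

Dividing by predicted rather than observed creatinine avoids re-introducing the covariate-driven variation that simple creatinine division would carry into the exposure measure.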

  8. Uncoupling nicotine mediated motoneuron axonal pathfinding errors and muscle degeneration in zebrafish

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welsh, Lillian; Tanguay, Robert L.; Svoboda, Kurt R.

Zebrafish embryos offer a unique opportunity to investigate the mechanisms by which nicotine exposure impacts early vertebrate development. Embryos exposed to nicotine become functionally paralyzed by 42 hpf, suggesting that the neuromuscular system is compromised in exposed embryos. We previously demonstrated that secondary spinal motoneurons in nicotine-exposed embryos were delayed in development and that their axons made pathfinding errors (Svoboda, K.R., Vijayaraghaven, S., Tanguay, R.L., 2002. Nicotinic receptors mediate changes in spinal motoneuron development and axonal pathfinding in embryonic zebrafish exposed to nicotine. J. Neurosci. 22, 10731-10741). In that study, we did not consider the potential role that altered skeletal muscle development caused by nicotine exposure could play in contributing to the errors in spinal motoneuron axon pathfinding. In this study, we show that an alteration in skeletal muscle development occurs in tandem with alterations in spinal motoneuron development upon exposure to nicotine. The alteration in the muscle involves the binding of nicotine to the muscle-specific AChRs. The nicotine-induced alteration in muscle development does not occur in the zebrafish mutant (sofa potato, [sop]), which lacks muscle-specific AChRs. Even though muscle development is unaffected by nicotine exposure in sop mutants, motoneuron axonal pathfinding errors still occur in these mutants, indicating a direct effect of nicotine exposure on nervous system development.

  9. Comparison of Highly Resolved Model-Based Exposure ...

    EPA Pesticide Factsheets

    Human exposure to air pollution in many studies is represented by ambient concentrations from space-time kriging of observed values. Space-time kriging techniques based on a limited number of ambient monitors may fail to capture the concentration from local sources. Further, because people spend more time indoors, using ambient concentration to represent exposure may cause error. To quantify the associated exposure error, we computed a series of six different hourly-based exposure metrics at 16,095 Census blocks of three Counties in North Carolina for CO, NOx, PM2.5, and elemental carbon (EC) during 2012. These metrics include ambient background concentration from space-time ordinary kriging (STOK), ambient on-road concentration from the Research LINE source dispersion model (R-LINE), a hybrid concentration combining STOK and R-LINE, and their associated indoor concentrations from an indoor infiltration mass balance model. Using a hybrid-based indoor concentration as the standard, the comparison showed that outdoor STOK metrics yielded large error at both population (67% to 93%) and individual level (average bias between −10% to 95%). For pollutants with significant contribution from on-road emission (EC and NOx), the on-road based indoor metric performs the best at the population level (error less than 52%). At the individual level, however, the STOK-based indoor concentration performs the best (average bias below 30%). For PM2.5, due to the relatively low co

  10. Overdose problem associated with treatment planning software for high energy photons in response of Panama's accident.

    PubMed

    Attalla, Ehab M; Lotayef, Mohamed M; Khalil, Ehab M; El-Hosiny, Hesham A; Nazmy, Mohamed S

    2007-06-01

    The purpose of this study was to quantify dose distribution errors by comparing actual dose measurements with the values calculated by the software. To evaluate the outcome of radiation overexposure related to Panama's accident, and to ensure that the treatment planning systems (T.P.S.) are being operated in accordance with the appropriate quality assurance programme, we studied central axis and peripheral depth dose data using complex fields shaped with blocks to quantify dose distribution errors. Multidata T.P.S. software versions 2.35 and 2.40 and Helax T.P.S. software version 5.1B were assessed. The calculated data of the treatment planning systems were verified by comparison with actual dose measurements for open and blocked high-energy photon fields (Co-60, 6 MV and 18 MV photons). Close calculated and measured results were obtained for the 2-D (Multidata) and 3-D (TMS Helax) treatment planning systems. These results agreed within 1 to 2% for open fields and 0.5 to 2.5% for peripheral blocked fields. Discrepancies between calculated and measured data ranged between 13 and 36% along the central axis of complex blocked fields when the normalisation point was selected at Dmax; when the normalisation point was selected near or under the blocks, the variation between the calculated and the measured data reached up to a 500% difference. The present results emphasize the importance of the proper selection of the normalisation point in the radiation field, as this facilitates detection of aberrant dose distributions (overexposure or underexposure).

  11. Nurse perceptions of organizational culture and its association with the culture of error reporting: a case of public sector hospitals in Pakistan.

    PubMed

    Jafree, Sara Rizvi; Zakar, Rubeena; Zakar, Muhammad Zakria; Fischer, Florian

    2016-01-05

    There is an absence of formal error tracking systems in public sector hospitals of Pakistan and also a lack of literature concerning error reporting culture in the health care sector. Nurse practitioners have front-line knowledge of and rich exposure to both the organizational culture and error sharing in hospital settings. The aim of this paper was to investigate the association between organizational culture and the culture of error reporting, as perceived by nurses. The authors used the "Practice Environment Scale-Nurse Work Index Revised" to measure the six dimensions of organizational culture. Seven questions were used from the "Survey to Solicit Information about the Culture of Reporting" to measure error reporting culture in the region. Overall, 309 nurses participated in the survey, including female nurses from all designations such as supervisors, instructors, ward-heads, staff nurses and student nurses. We used SPSS 17.0 to perform a factor analysis. Furthermore, descriptive statistics, mean scores and multivariable logistic regression were used for the analysis. Three areas were ranked unfavorably by nurse respondents, including: (i) the error reporting culture, (ii) staffing and resource adequacy, and (iii) nurse foundations for quality of care. Multivariable regression results revealed that all six categories of organizational culture, including: (1) nurse manager ability, leadership and support, (2) nurse participation in hospital affairs, (3) nurse participation in governance, (4) nurse foundations of quality care, (5) nurse-coworkers relations, and (6) nurse staffing and resource adequacy, were positively associated with higher odds of error reporting culture. In addition, it was found that married nurses and nurses on permanent contract were more likely to report errors at the workplace. 
Public healthcare services of Pakistan can be improved through the promotion of an error reporting culture, reducing staffing and resource shortages and the development of nursing care plans.

  12. A method for the in vivo measurement of americium-241 at long times post-exposure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neton, J.W.

    1988-01-01

    This study investigated an improved method for the quantitative measurement, calibration and calculation of {sup 241}Am organ burdens in humans. The techniques developed correct for cross-talk or count-rate contributions from surrounding and adjacent organ burdens and ensure the proper assignment of activity to the lungs, liver and skeleton. In order to predict the net count-rates for the measurement geometries of the skull, liver and lung, a background prediction method was developed. This method utilizes data obtained from the measurement of a group of control subjects. Based on these data, a linear prediction equation was developed for each measurement geometry. In order to correct for the cross-contributions among the various deposition loci, a series of surrogate human phantom structures were measured. The results of measurements of {sup 241}Am depositions in six exposure cases have been evaluated using these new techniques and have indicated that lung burden estimates could be in error by as much as 100 percent when corrections are not made for contributions to the count-rate from other organs.
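The background-prediction step described above amounts to fitting a linear equation to control-subject data and subtracting the prediction from an exposed subject's measured count rate. A minimal sketch under that assumption follows; the predictor (body mass) and all numbers are hypothetical, not the study's actual calibration data.

```python
# Hypothetical sketch: fit a linear background-prediction equation from
# control-subject data, then compute a net count rate for an exposed subject.
# The predictor choice (body mass) and every value below are illustrative.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Control subjects: predictor (body mass, kg) vs. background count rate (cpm)
mass = [60, 70, 80, 90, 100]
background = [10.0, 11.0, 12.0, 13.0, 14.0]
a, b = fit_line(mass, background)

# Exposed subject (85 kg): measured rate minus predicted background = net rate
measured = 20.0
net = measured - (a + b * 85)
```

The net rate, rather than the raw measured rate, is what gets converted to an organ burden, which is why an unbiased background prediction matters.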

  13. Education and myopia: assessing the direction of causality by mendelian randomisation.

    PubMed

    Mountjoy, Edward; Davies, Neil M; Plotnikov, Denis; Smith, George Davey; Rodriguez, Santiago; Williams, Cathy E; Guggenheim, Jeremy A; Atan, Denize

    2018-06-06

    To determine whether more years spent in education is a causal risk factor for myopia, or whether myopia is a causal risk factor for more years in education. Bidirectional, two-sample mendelian randomisation study. Publicly available genetic data from two consortia applied to a large, independent population cohort. Genetic variants used as proxies for myopia and years of education were derived from two large genome-wide association studies: 23andMe and Social Science Genetic Association Consortium (SSGAC), respectively. 67 798 men and women from England, Scotland, and Wales in the UK Biobank cohort with available information for years of completed education and refractive error. Mendelian randomisation analyses were performed in two directions: the first exposure was the genetic predisposition to myopia, measured with 44 genetic variants strongly associated with myopia in 23andMe, and the outcome was years in education; and the second exposure was the genetic predisposition to higher levels of education, measured with 69 genetic variants from SSGAC, and the outcome was refractive error. Conventional regression analyses of the observational data suggested that every additional year of education was associated with a more myopic refractive error of -0.18 dioptres/y (95% confidence interval -0.19 to -0.17; P<2e-16). Mendelian randomisation analyses suggested the true causal effect was even stronger: -0.27 dioptres/y (-0.37 to -0.17; P=4e-8). By contrast, there was little evidence to suggest myopia affected education (years in education per dioptre of refractive error -0.008 y/dioptre, 95% confidence interval -0.041 to 0.025, P=0.6). Thus, the cumulative effect of more years in education on refractive error means that a university graduate from the United Kingdom with 17 years of education would, on average, be at least 1 dioptre more myopic than someone who left school at age 16 (with 12 years of education). 
Myopia of this magnitude would be sufficient to necessitate the use of glasses for driving. Sensitivity analyses showed minimal evidence for genetic confounding that could have biased the causal effect estimates. This study shows that exposure to more years in education contributes to the rising prevalence of myopia. Increasing the length of time spent in education may inadvertently increase the prevalence of myopia and potential future visual disability. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  14. Improving estimates of air pollution exposure through ubiquitous sensing technologies

    PubMed Central

    de Nazelle, Audrey; Seto, Edmund; Donaire-Gonzalez, David; Mendez, Michelle; Matamala, Jaume; Nieuwenhuijsen, Mark J; Jerrett, Michael

    2013-01-01

    Traditional methods of exposure assessment in epidemiological studies often fail to integrate important information on activity patterns, which may lead to bias, loss of statistical power or both in health effects estimates. Novel sensing technologies integrated with mobile phones offer potential to reduce exposure measurement error. We sought to demonstrate the usability and relevance of the CalFit smartphone technology to track person-level time, geographic location, and physical activity patterns for improved air pollution exposure assessment. We deployed CalFit-equipped smartphones in a free-living population of 36 subjects in Barcelona, Spain. Information obtained on physical activity and geographic location was linked to space-time air pollution mapping. For instance, we found that, on average, travel activities accounted for 6% of people’s time and 24% of their daily inhaled NO2. Due to the large number of mobile phone users, this technology potentially provides an unobtrusive means of collecting epidemiologic exposure data at low cost. PMID:23416743

  15. Malingering in Toxic Exposure. Classification Accuracy of Reliable Digit Span and WAIS-III Digit Span Scaled Scores

    ERIC Educational Resources Information Center

    Greve, Kevin W.; Springer, Steven; Bianchini, Kevin J.; Black, F. William; Heinly, Matthew T.; Love, Jeffrey M.; Swift, Douglas A.; Ciota, Megan A.

    2007-01-01

    This study examined the sensitivity and false-positive error rate of reliable digit span (RDS) and the WAIS-III Digit Span (DS) scaled score in persons alleging toxic exposure and determined whether error rates differed from published rates in traumatic brain injury (TBI) and chronic pain (CP). Data were obtained from the files of 123 persons…

  16. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE PAGES

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...

    2016-06-01

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
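The core of a systematic chromatic error is that a wavelength-dependent throughput change shifts sources with different spectra by different amounts, so no single (gray) zeropoint correction can remove it. A toy numerical sketch, with an invented bandpass and spectra rather than DES data:

```python
import math

# Minimal sketch of a systematic chromatic error: a tilted throughput shifts
# a flat-spectrum and a red source by different synthetic magnitudes, so a
# gray zeropoint correction cannot absorb it. All numbers are illustrative.

def synth_mag(flux, throughput, dlam=1.0):
    """Synthetic broadband magnitude (arbitrary zeropoint)."""
    return -2.5 * math.log10(sum(f * s * dlam for f, s in zip(flux, throughput)))

lam = [400 + 10 * i for i in range(11)]          # nm, toy g-like bandpass
flat = [1.0] * len(lam)                          # flat-spectrum source
red = [(l / 400) ** 2 for l in lam]              # redder source
nominal = [1.0] * len(lam)                       # "natural system" throughput
tilted = [1.0 + 0.001 * (l - 450) for l in lam]  # e.g., extra aerosol extinction

# The throughput tilt moves the two sources by different magnitude offsets
dm_flat = synth_mag(flat, tilted) - synth_mag(flat, nominal)
dm_red = synth_mag(red, tilted) - synth_mag(red, nominal)
chromatic_residual = dm_red - dm_flat  # survives any gray recalibration
```

Because the toy tilt is antisymmetric about the band center, the flat source's magnitude barely changes, while the red source's does; that difference is the color-dependent error the paper corrects with auxiliary throughput measurements.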

  17. Atmospheric Dispersion Effects in Weak Lensing Measurements

    DOE PAGES

    Plazas, Andrés Alejandro; Bernstein, Gary

    2012-10-01

    The wavelength dependence of atmospheric refraction causes elongation of finite-bandwidth images along the elevation vector, which produces spurious signals in weak gravitational lensing shear measurements unless this atmospheric dispersion is calibrated and removed to high precision. Because astrometric solutions and PSF characteristics are typically calibrated from stellar images, differences between the reference stars' spectra and the galaxies' spectra will leave residual errors in both the astrometric positions (dr) and in the second moment (width) of the wavelength-averaged PSF (dv) for galaxies. We estimate the level of dv that will induce spurious weak lensing signals in PSF-corrected galaxy shapes that exceed the statistical errors of the DES and the LSST cosmic-shear experiments. We also estimate the dr signals that will produce unacceptable spurious distortions after stacking of exposures taken at different airmasses and hour angles. We also calculate the errors in the griz bands, and find that dispersion systematics, uncorrected, are up to 6 and 2 times larger in g and r bands, respectively, than the requirements for the DES error budget, but can be safely ignored in i and z bands. For the LSST requirements, the factors are about 30, 10, and 3 in g, r, and i bands, respectively. We find that a simple correction linear in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r band for DES and the i band for LSST, but still as much as 5 times larger than the requirements for LSST r-band observations. More complex corrections will likely be able to reduce the systematic cosmic-shear errors below statistical errors for the LSST r band. But g-band effects remain large enough that it seems likely that induced systematics will dominate the statistical errors of both surveys, and cosmic-shear measurements should rely on the redder bands.

  18. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times that assumes uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
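The practical consequence of the uncorrelated-samples assumption is the familiar 1/N decay of the variance of a mean, which directly links exposure time to measurement uncertainty. A sketch under that assumption (not the paper's exact equations; all parameter values are illustrative):

```python
import math

# Sketch: if successive ADCP samples of the flow are effectively uncorrelated,
# the variance of the sampled mean falls as 1/N with N = exposure / dt, so the
# relative error of a discharge estimate shrinks with exposure time.

def relative_error(turb_intensity: float, exposure_s: float, dt_s: float) -> float:
    """Relative standard error of the mean for N = exposure/dt uncorrelated samples."""
    n = exposure_s / dt_s
    return turb_intensity / math.sqrt(n)

def required_exposure(turb_intensity: float, target_rel_err: float, dt_s: float) -> float:
    """Exposure time needed to reach a target relative standard error."""
    return dt_s * (turb_intensity / target_rel_err) ** 2

# Illustrative: 10% turbulence intensity, 1 s sampling interval, 2% target error
t = required_exposure(0.10, 0.02, 1.0)
```

Inverting the error formula this way is how such a model supports choosing an optimal exposure time before a measurement campaign.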

  19. Disclosing Medical Errors to Patients: Attitudes and Practices of Physicians and Trainees

    PubMed Central

    Jones, Elizabeth W.; Wu, Barry J.; Forman-Hoffman, Valerie L.; Levi, Benjamin H.; Rosenthal, Gary E.

    2007-01-01

    BACKGROUND Disclosing errors to patients is an important part of patient care, but the prevalence of disclosure, and factors affecting it, are poorly understood. OBJECTIVE To survey physicians and trainees about their practices and attitudes regarding error disclosure to patients. DESIGN AND PARTICIPANTS Survey of faculty physicians, resident physicians, and medical students in Midwest, Mid-Atlantic, and Northeast regions of the United States. MEASUREMENTS Actual error disclosure; hypothetical error disclosure; attitudes toward disclosure; demographic factors. RESULTS Responses were received from 538 participants (response rate = 77%). Almost all faculty and residents responded that they would disclose a hypothetical error resulting in minor (97%) or major (93%) harm to a patient. However, only 41% of faculty and residents had disclosed an actual minor error (resulting in prolonged treatment or discomfort), and only 5% had disclosed an actual major error (resulting in disability or death). Moreover, 19% acknowledged not disclosing an actual minor error and 4% acknowledged not disclosing an actual major error. Experience with malpractice litigation was not associated with less actual or hypothetical error disclosure. Faculty were more likely than residents and students to disclose a hypothetical error and less concerned about possible negative consequences of disclosure. Several attitudes were associated with greater likelihood of hypothetical disclosure, including the belief that disclosure is right even if it comes at a significant personal cost. CONCLUSIONS There appears to be a gap between physicians’ attitudes and practices regarding error disclosure. Willingness to disclose errors was associated with higher training level and a variety of patient-centered attitudes, and it was not lessened by previous exposure to malpractice litigation. PMID:17473944

  20. Environmental Chemical Exposures and Autism Spectrum Disorders: A Review of the Epidemiological Evidence

    PubMed Central

    Kalkbrenner, Amy E.; Schmidt, Rebecca J.; Penlesky, Annie C.

    2016-01-01

    In the past decade, the number of epidemiological publications addressing environmental chemical exposures and autism has grown tremendously. These studies are important because it is now understood that environmental factors play a larger role in causing autism than previously thought and because they address modifiable risk factors that may open up avenues for the primary prevention of the disability associated with autism. In this review, we covered studies of autism and estimates of exposure to tobacco, air pollutants, volatile organic compounds and solvents, metals (from air, occupation, diet, dental amalgams, and thimerosal-containing vaccines), pesticides, and organic endocrine-disrupting compounds such as flame retardants, non-stick chemicals, phthalates, and bisphenol A. We included studies that had individual-level data on autism, exposure measures pertaining to pregnancy or the 1st year of life, valid comparison groups, control for confounders, and adequate sample sizes. Despite the inherent error in the measurement of many of these environmental exposures, which is likely to attenuate observed associations, some environmental exposures showed associations with autism, especially traffic-related air pollutants, some metals, and several pesticides, with suggestive trends for some volatile organic compounds (e.g., methylene chloride, trichloroethylene, and styrene) and phthalates. Whether any of these play a causal role requires further study. Given the limited scope of these publications, other environmental chemicals cannot be ruled out, but have not yet been adequately studied. Future research that addresses these and additional environmental chemicals, including their most common routes of exposures, with accurate exposure measurement pertaining to several developmental windows, is essential to guide efforts for the prevention of the neurodevelopmental damage that manifests in autism symptoms. PMID:25199954

  1. Correction of nonuniformity error of Gafchromic EBT2 and EBT3.

    PubMed

    Katsuda, Toshizo; Gotanda, Rumi; Gotanda, Tatsuhiro; Akagawa, Takuya; Tanki, Nobuyoshi; Kuwano, Tadao; Yabunaka, Kouichi

    2016-05-08

    This study investigates an X-ray dose measurement method for computed tomography using Gafchromic films. Nonuniformity of the active layer is a major problem in Gafchromic films. In radiotherapy, nonuniformity error is reduced by applying the double-exposure technique, but this is impractical in diagnostic radiology because of the heel effect. Therefore, we propose replacing the X-rays in the double-exposure technique with ultraviolet (UV)-A irradiation of Gafchromic EBT2 and EBT3. To improve the reproducibility of the scan position, Gafchromic EBT2 and EBT3 films were attached to a 3-mm-thick acrylic plate. The samples were then irradiated with a 10 W UV-A fluorescent lamp placed at a distance of 72 cm for 30, 60, and 90 minutes. The profile curves were evaluated along the long and short axes of the film center, and the standard deviations of the pixel values were calculated over large areas of the films. A paired t-test was performed. UV-A irradiation exerted a significant effect on Gafchromic EBT2 (paired t-test; p = 0.0275) but not on EBT3 (paired t-test; p = 0.2785). Similarly, the homogeneity was improved in Gafchromic EBT2 but not in EBT3. Therefore, the double-exposure technique under UV-A irradiation is suitable only for EBT2 films.

  2. Cognitive deficits induced by 56Fe radiation exposure

    NASA Technical Reports Server (NTRS)

    Shukitt-Hale, B.; Casadesus, G.; Cantuti-Castelvetri, I.; Rabin, B. M.; Joseph, J. A.

    2003-01-01

    Exposing rats to particles of high energy and charge (e.g., 56Fe) disrupts neuronal systems and the behaviors mediated by them; these adverse behavioral and neuronal effects are similar to those seen in aged animals. Because cognition declines with age, and our previous study showed that radiation disrupted Morris water maze spatial learning and memory performance, the present study used an 8-arm radial maze (RAM) to further test the cognitive behavioral consequences of radiation exposure. Control rats or rats exposed to whole-body irradiation with 1.0 Gy of 1 GeV/n high-energy 56Fe particles (delivered at the alternating gradient synchrotron at Brookhaven National Laboratory) were tested nine months following exposure. Radiation adversely affected RAM performance, and the changes seen parallel those of aging. Irradiated animals entered baited arms during the first 4 choices significantly less than did controls, produced their first error sooner, and also tended to make more errors as measured by re-entries into non-baited arms. These results show that irradiation with high-energy particles produces age-like decrements in cognitive behavior that may impair the ability of astronauts to perform critical tasks during long-term space travel beyond the magnetosphere. Published by Elsevier Science Ltd on behalf of COSPAR.

  3. Invited Commentary: Beware the Test-Negative Design.

    PubMed

    Westreich, Daniel; Hudgens, Michael G

    2016-09-01

    In this issue of the Journal, Sullivan et al. (Am J Epidemiol. 2016;184(5):345-353) carefully examine the theoretical justification for use of the test-negative design, a common observational study design, in assessing the effectiveness of influenza vaccination. Using modern causal inference methods (in particular, directed acyclic graphs), they describe different threats to the validity of inferences drawn about the effect of vaccination from test-negative design studies. These threats include confounding, selection bias, and measurement error in either the exposure or the outcome. While confounding and measurement error are common in observational studies, the potential for selection bias inherent in the test-negative design brings into question the validity of inferences drawn from such studies. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Lessons learnt on biases and uncertainties in personal exposure measurement surveys of radiofrequency electromagnetic fields with exposimeters.

    PubMed

    Bolte, John F B

    2016-09-01

    Personal exposure measurements of radio frequency electromagnetic fields are important for epidemiological studies and developing prediction models. Minimizing biases and uncertainties and handling spatial and temporal variability are important aspects of these measurements. This paper reviews the lessons learnt from testing the different types of exposimeters and from personal exposure measurement surveys performed between 2005 and 2015. Applying them will improve the comparability and ranking of exposure levels for different microenvironments, activities or (groups of) people, such that epidemiological studies are better capable of finding potential weak correlations with health effects. Over 20 papers have been published on how to prevent biases and minimize uncertainties due to: mechanical errors; design of hardware and software filters; anisotropy; and influence of the body. A number of biases can be corrected for by determining multiplicative correction factors. In addition, a good protocol on how to wear the exposimeter, a sufficiently small sampling interval and a sufficiently long measurement duration will minimize biases. Corrections to biases are possible for: non-detects through the detection limit, erroneous manufacturer calibration and temporal drift. Corrections not deemed necessary, because no significant biases have been observed, are: linearity in response and resolution. Corrections difficult to perform after measurements are for: modulation/duty cycle sensitivity; out-of-band response (cross-talk); temperature and humidity sensitivity. Corrections not possible to perform after measurements are for: multiple signals detection in one band; flatness of response within a frequency band; anisotropy to waves of different elevation angle. 
An analysis of 20 microenvironmental surveys showed that early studies using exposimeters with logarithmic detectors, overestimated exposure to signals with bursts, such as in uplink signals from mobile phones and WiFi appliances. Further, the possible corrections for biases have not been fully applied. The main findings are that if the biases are not corrected for, the actual exposure will on average be underestimated. Copyright © 2016 Elsevier Ltd. All rights reserved.
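Two of the correctable biases named above, non-detects and erroneous calibration, can be handled in a few lines. The sketch below is illustrative only: the LOD/sqrt(2) substitution is one common convention (not prescribed by the review), and the detection limit and correction factor values are invented.

```python
import math

# Hedged sketch of two exposimeter corrections discussed in the review:
# substitution of non-detects and a multiplicative calibration factor.
# LOD, CAL_FACTOR and the LOD/sqrt(2) rule are illustrative assumptions.

LOD = 0.05          # V/m, detection limit (illustrative)
CAL_FACTOR = 1.2    # multiplicative correction from a calibration check (illustrative)

def correct_sample(field_v_per_m: float) -> float:
    """Substitute non-detects at LOD/sqrt(2), then apply the calibration factor."""
    value = field_v_per_m if field_v_per_m >= LOD else LOD / math.sqrt(2)
    return CAL_FACTOR * value

readings = [0.00, 0.10, 0.03, 0.20]   # raw exposimeter samples, V/m
corrected = [correct_sample(r) for r in readings]
mean_exposure = sum(corrected) / len(corrected)
```

Note the direction of the effect the review reports: leaving readings uncorrected (factor 1, non-detects at zero) would underestimate the mean exposure relative to the corrected series.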

  5. The reliability of eyetracking to assess attentional bias to threatening words in healthy individuals.

    PubMed

    Skinner, Ian W; Hübscher, Markus; Moseley, G Lorimer; Lee, Hopin; Wand, Benedict M; Traeger, Adrian C; Gustin, Sylvia M; McAuley, James H

    2017-08-15

    Eyetracking is commonly used to investigate attentional bias. Although some studies have investigated the internal consistency of eyetracking, data are scarce on the test-retest reliability and agreement of eyetracking to investigate attentional bias. This study reports the test-retest reliability, measurement error, and internal consistency of 12 commonly used outcome measures thought to reflect the different components of attentional bias: overall attention, early attention, and late attention. Healthy participants completed a preferential-looking eyetracking task that involved the presentation of threatening (sensory words, general threat words, and affective words) and nonthreatening words. We used intraclass correlation coefficients (ICCs) to measure test-retest reliability (ICC > .70 indicates adequate reliability). The ICCs(2, 1) ranged from -.31 to .71. Reliability varied according to the outcome measure and threat word category. Sensory words had a lower mean ICC (.08) than either affective words (.32) or general threat words (.29). A longer exposure time was associated with higher test-retest reliability. All of the outcome measures, except second-run dwell time, demonstrated low measurement error (<6%). Most of the outcome measures reported high internal consistency (α > .93). Recommendations are discussed for improving the reliability of eyetracking tasks in future research.
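The ICC(2, 1) used above is the two-way random-effects, single-measure form, computable from a two-way ANOVA decomposition. A self-contained sketch follows (standard Shrout–Fleiss parameterization; the dwell-time numbers are made up, not study data):

```python
# Sketch of ICC(2,1) -- two-way random effects, single measure -- as commonly
# used for test-retest reliability. The data below are invented for illustration.

def icc_2_1(data):
    """data: one list per subject, each containing k repeated measurements."""
    n, k = len(data), len(data[0])
    gm = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - gm) ** 2 for m in row_means)      # between subjects
    ss_cols = n * sum((m - gm) ** 2 for m in col_means)      # between sessions
    ss_total = sum((x - gm) ** 2 for row in data for x in row)
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Dwell times (s) for 4 participants over two sessions (illustrative numbers)
sessions = [[1.2, 1.3], [2.0, 1.9], [0.8, 0.9], [1.5, 1.6]]
reliability = icc_2_1(sessions)   # > .70 would indicate adequate reliability
```

With perfectly reproduced measurements the formula returns 1; session-to-session noise pulls it toward 0, which is why short exposure times (noisier gaze samples) yielded the low ICCs reported above.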

  6. An intelligent maximum permissible exposure meter for safety assessments of laser radiation

    NASA Astrophysics Data System (ADS)

    Corder, D. A.; Evans, D. R.; Tyrer, J. R.

    1996-09-01

    There is frequently a need to make laser power or energy density measurements when determining whether radiation from a laser system exceeds the Maximum Permissible Exposure (MPE) as defined in BS EN 60825. This can be achieved using standard commercially available laser power or energy measurement equipment, but some of these have shortcomings when used in this application. Calculations must be performed by the user to compare the measured value to the MPE. The measurement and calculation procedure appears complex to the nonexpert who may be performing the assessment. A novel approach is described which uses purpose designed hardware and software to simplify the process. The hardware is optimized for measuring the relatively low powers associated with MPEs. The software runs on a Psion Series 3a palmtop computer. This reduces the cost and size of the system yet allows graphical and numerical presentation of data. Data output to other software running on PCs is also possible, enabling the instrument to be used as part of a quality system. Throughout the measurement process the opportunity for user error has been minimized by the hardware and software design.

  7. Refractive errors among students occupying rooms lighted with incandescent or fluorescent lamps.

    PubMed

    Czepita, Damian; Gosławski, Wojciech; Mojsa, Artur

    2004-01-01

    The purpose of the study was to determine whether the development of refractive errors could be associated with exposure to light emitted by incandescent or fluorescent lamps. 3636 students were examined (1638 boys and 1998 girls, aged 6-18 years, mean age 12.1, SD 3.4). The examination included retinoscopy with cycloplegia. Myopia was defined as refractive error ≤ -0.5 D, hyperopia as refractive error ≥ +1.5 D, astigmatism as refractive error > 0.5 DC. Anisometropia was diagnosed when the difference in the refraction of both eyes was > 1.0 D. The children and their parents completed a questionnaire on exposure to light at home. Data were analyzed statistically with the chi2 test. P values of less than 0.05 were considered statistically significant. It was found that the use of fluorescent lamps was associated with an increase in the occurrence of hyperopia (P < 0.01). There was no association between sleeping with the light turned on and prevalence of refractive errors.
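The chi2 test of independence used here reduces, for a 2x2 table, to a closed-form statistic. A minimal sketch with invented counts (the study's actual cell counts are not given in the abstract):

```python
# Illustrative 2x2 chi-square test of independence, the kind of test the study
# used to relate lamp type to refractive error; all counts below are invented.

def chi2_2x2(a, b, c, d):
    """Chi-square statistic (1 df, no continuity correction) for [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: fluorescent vs incandescent lighting; cols: hyperopic vs not (made up)
stat = chi2_2x2(120, 680, 80, 760)
significant = stat > 3.841   # critical value for p < 0.05 at 1 df
```

Comparing the statistic with the 1-df critical value is equivalent to the "P < 0.05" criterion the authors applied.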

  8. Improving estimates of air pollution exposure through ubiquitous sensing technologies.

    PubMed

    de Nazelle, Audrey; Seto, Edmund; Donaire-Gonzalez, David; Mendez, Michelle; Matamala, Jaume; Nieuwenhuijsen, Mark J; Jerrett, Michael

    2013-05-01

    Traditional methods of exposure assessment in epidemiological studies often fail to integrate important information on activity patterns, which may lead to bias, loss of statistical power, or both in health effects estimates. Novel sensing technologies integrated with mobile phones offer potential to reduce exposure measurement error. We sought to demonstrate the usability and relevance of the CalFit smartphone technology to track person-level time, geographic location, and physical activity patterns for improved air pollution exposure assessment. We deployed CalFit-equipped smartphones in a free-living population of 36 subjects in Barcelona, Spain. Information obtained on physical activity and geographic location was linked to space-time air pollution mapping. We found that information from CalFit could substantially alter exposure estimates. For instance, on average travel activities accounted for 6% of people's time and 24% of their daily inhaled NO2. Due to the large number of mobile phone users, this technology potentially provides an unobtrusive means of enhancing epidemiologic exposure data at low cost. Copyright © 2013 Elsevier Ltd. All rights reserved.
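The travel finding above (6% of time, 24% of inhaled NO2) comes from weighting each activity's concentration by its duration and breathing rate. A sketch of that time-activity weighting, with invented concentrations and ventilation rates rather than study values:

```python
# Sketch of the time-activity weighting that smartphone tracking enables:
# inhaled dose sums concentration x ventilation rate x duration per activity.
# All concentrations, durations, and ventilation rates are illustrative.

ACTIVITIES = [
    # (name, hours, NO2 ug/m3, ventilation m3/h)
    ("home",   16.0, 20.0, 0.5),
    ("work",    6.5, 30.0, 0.6),
    ("travel",  1.5, 80.0, 1.2),  # short time, high concentration and exertion
]

def inhaled_dose(activities):
    """Daily inhaled dose (ug) and each activity's share of it."""
    doses = {name: hours * conc * vent for name, hours, conc, vent in activities}
    total = sum(doses.values())
    return total, {name: d / total for name, d in doses.items()}

total, shares = inhaled_dose(ACTIVITIES)
# travel occupies ~6% of the day yet contributes roughly a third of the dose
```

The mismatch between time share and dose share is exactly why ignoring activity patterns biases exposure estimates.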

  9. GSM mobile phone radiation suppresses brain glucose metabolism

    PubMed Central

    Kwon, Myoung Soo; Vorobyev, Victor; Kännälä, Sami; Laine, Matti; Rinne, Juha O; Toivonen, Tommi; Johansson, Jarkko; Teräs, Mika; Lindholm, Harri; Alanko, Tommi; Hämäläinen, Heikki

    2011-01-01

    We investigated the effects of mobile phone radiation on cerebral glucose metabolism using high-resolution positron emission tomography (PET) with the 18F-deoxyglucose (FDG) tracer. The long half-life (109 minutes) of the 18F isotope allowed a long, natural exposure condition outside the PET scanner. Thirteen young right-handed male subjects were exposed to a pulse-modulated 902.4 MHz Global System for Mobile Communications signal for 33 minutes, while performing a simple visual vigilance task. Temperature was also measured in the head region (forehead, eyes, cheeks, ear canals) during exposure. 18F-deoxyglucose PET images acquired after the exposure showed that the relative cerebral metabolic rate of glucose was significantly reduced in the temporoparietal junction and anterior temporal lobe of the right hemisphere, ipsilateral to the exposure. A temperature rise was also observed on the exposed side of the head, but the magnitude was very small. The exposure did not affect task performance (reaction time, error rate). Our results show that short-term mobile phone exposure can locally suppress brain energy metabolism in humans. PMID:21915135

  10. Evaluation of mobile micro-sensing devices for GPS-based personal exposure monitoring of heat and particulate matter - a matter of context

    NASA Astrophysics Data System (ADS)

    Ueberham, Maximilian; Schlink, Uwe; Weiland, Ulrike

    2017-04-01

    The application of mobile micro-sensing devices (MSDs) for human health and personal exposure monitoring (PEM) is an emerging topic of interest in urban air quality research. In the context of climate change, urban population growth and related anthropogenic activities, the intensity of citizens' exposure to heat and particulate matter (PM) is expected to increase. More focus on the small-scale spatio-temporal distribution of air quality parameters is therefore needed to complement fixed-site monitoring data. Mobile sensors for PEM are useful both for investigating the local distribution of air quality and for characterising the personal exposure profiles of individuals moving within their activity spaces. An evaluation of MSDs' accuracy is crucial before they are applied in measurement campaigns. To detect variations of exposure at small scales, it is even more important to consider the accuracy of Global Positioning System (GPS) devices within different urban structure types (USTs). We present an assessment of the performance of GPS-based MSDs under indoor laboratory conditions and in outdoor tests within different USTs. The aim was to evaluate the accuracy of several GPS devices and MSDs for heat and PM2.5 against reliable standard sensing devices, as part of a PhD project. The performance parameters are summary measures (mean value, standard deviation), correlation (Pearson's r), difference measures (mean bias error, mean absolute error, index of agreement) and Bland-Altman plots. The MSDs were tested in a climate chamber under constant temperature and relative humidity. For the temperature MSDs, reaction time was also tested because of its relevance for detecting temperature variations during mobile measurements. For the interpretation of the results we considered the MSDs' design and technology (e.g. passive vs. active ventilation).
GPS devices were tested within low- and high-density urban residential areas and low- and high-density urban green areas, and were compared according to their deviation from the original test route and according to their technology (GPS, A-GPS, GSM, WLAN). The results show that the performance of the MSDs varies spatially and temporally; the variations mainly depend on the USTs, meteorological conditions, and device design and technology. However, the sensors' variation for GPS (3-7 m), temperature (1-1.3 °C) and PM (800-1100 particles/cu ft) is quite stable over the whole range of recorded values. Difference measures can be used to quantify and correct for mean errors. Furthermore, we show that smartphone-based GPS tracking in combination with connected MSDs is a reliable, easy-to-use method for PEM. In conclusion, our evaluation underpins the applicability of MSDs in combination with GPS for PEM. We observed that relative changes in environmental conditions in particular can be detected well by the devices. Nevertheless, the data quality of MSDs remains a relevant concern that needs further investigation, especially for applications in citizen science. Ultimately, the usefulness of mobile MSDs needs to be evaluated in the context of the intended application.
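The difference measures named above (mean bias error, mean absolute error, index of agreement) can be sketched in a few lines; the version of the index of agreement below is Willmott's d, which I assume is the one intended, and the sample readings are invented:

```python
def agreement_stats(obs, pred):
    """Mean bias error, mean absolute error, and Willmott's index of
    agreement between a reference sensor (obs) and an MSD (pred)."""
    n = len(obs)
    mbe = sum(p - o for o, p in zip(obs, pred)) / n
    mae = sum(abs(p - o) for o, p in zip(obs, pred)) / n
    obar = sum(obs) / n
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - obar) + abs(o - obar)) ** 2 for o, p in zip(obs, pred))
    d = 1.0 - num / den if den else 1.0   # 1 = perfect agreement
    return mbe, mae, d
```

A positive mean bias error indicates the MSD reads high on average and can be subtracted out as a correction, which is the use of difference measures the abstract describes.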

  11. Prism adaptation by mental practice.

    PubMed

    Michel, Carine; Gaveau, Jérémie; Pozzo, Thierry; Papaxanthis, Charalambos

    2013-09-01

    The prediction of our actions and their interaction with the external environment is critical for sensorimotor adaptation. For instance, during prism exposure, which laterally deviates our visual field, we progressively correct movement errors by combining sensory feedback with forward model sensory predictions. However, very often we project our actions onto the external environment without physically interacting with it (e.g., mental actions). An intriguing question is whether adaptation will occur if we imagine, instead of executing, an arm movement while wearing prisms. Here, we investigated prism adaptation during mental actions. In the first experiment, participants (n = 54) performed arm pointing movements before and after exposure to the optical device. They were equally divided into six groups according to prism exposure: Prisms-Active, Prisms-Imagery, Prisms-Stationary, Prisms-Stationary-Attention, No Conflict-Prisms-Imagery, No Prisms-Imagery. Adaptation, measured by the difference in pointing errors between pre-test and post-test, occurred only in the Prisms-Active and Prisms-Imagery conditions. The second experiment confirmed the results of the first and further showed that sensorimotor adaptation was mainly due to proprioceptive realignment in both the Prisms-Active (n = 10) and Prisms-Imagery (n = 10) groups. In both experiments adaptation was greater following actual than imagined pointing movements. The present results are the first demonstration of prism adaptation by mental practice under prism exposure, and they are discussed in terms of internal forward models and sensorimotor plasticity. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Estimating personal exposures from ambient air-pollution measures: Using meta-analysis to assess measurement error

    PubMed Central

    Holliday, Katelyn M; Avery, Christy L; Poole, Charles; McGraw, Kathleen; Williams, Ronald; Liao, Duanping; Smith, Richard L; Whitsel, Eric A

    2014-01-01

    Background Although ambient concentrations of particulate matter ≤10μm (PM10) are often used as proxies for total personal exposure, correlation (r) between ambient and personal PM10 concentrations varies. Factors underlying this variation and its effect on health outcome-PM exposure relationships remain poorly understood. Methods We conducted a random-effects meta-analysis to estimate effects of study, participant and environmental factors on r; used the estimates to impute personal exposure from ambient PM10 concentrations among 4,012 non-smoking, diabetic participants in the Women’s Health Initiative clinical trial; and then estimated the associations of ambient and imputed personal PM10 concentrations with electrocardiographic measures such as heart rate variability. Results We identified fifteen studies (in years 1990-2009) of 342 participants in five countries. The median r was 0.46 (range = 0.13 to 0.72). There was little evidence of funnel-plot asymmetry but substantial heterogeneity of r, which increased 0.05 (95% confidence interval [CI]= 0.01 to 0.09) per 10 μg/m3 increase in mean ambient PM10 concentration. Substituting imputed personal exposure for ambient PM10 concentrations shifted mean percent changes in electrocardiographic measures per 10μg/m3 increase in exposure away from the null and decreased their precision, e.g. −2.0% (95% CI= −4.6% to 0.7%) versus −7.9% (−15.9% to 0.9%) for the standard deviation of normal-to-normal RR interval duration. Conclusions Analogous distributions and heterogeneity of r in extant meta-analyses of ambient and personal PM2.5 concentrations suggest that observed shifts in mean percent change and decreases in precision may be generalizable across particle size. PMID:24220191
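The meta-analytic step, pooling ambient-personal correlations across studies while allowing for between-study heterogeneity, can be sketched with a DerSimonian-Laird random-effects model on the Fisher-z scale. This is a generic sketch of that class of model, not the authors' exact specification, and the inputs below are illustrative values loosely echoing the reported range of r (0.13 to 0.72):

```python
import math

def pool_correlations(rs, ns):
    """DerSimonian-Laird random-effects pooling of correlations on the
    Fisher-z scale; returns the pooled r and the between-study variance."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]   # Fisher z
    vs = [1.0 / (n - 3) for n in ns]                       # within-study variance
    ws = [1.0 / v for v in vs]
    zbar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - zbar) ** 2 for w, z in zip(ws, zs))   # heterogeneity statistic
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)               # between-study variance
    ws_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return math.tanh(z_re), tau2                           # back-transform to r
```

A nonzero tau2 corresponds to the substantial heterogeneity of r the abstract reports; meta-regression on study-level covariates (such as mean ambient PM10) would then model that heterogeneity.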

  13. Dietary Assessment in Food Environment Research

    PubMed Central

    Kirkpatrick, Sharon I.; Reedy, Jill; Butler, Eboneé N.; Dodd, Kevin W.; Subar, Amy F.; Thompson, Frances E.; McKinnon, Robin A.

    2015-01-01

    Context The existing evidence on food environments and diet is inconsistent, potentially due in part to heterogeneity in measures used to assess diet. The objective of this review, conducted in 2012–2013, was to examine measures of dietary intake utilized in food environment research. Evidence acquisition Included studies were published from January 2007 through June 2012 and assessed relationships between at least one food environment exposure and at least one dietary outcome. Fifty-one articles were identified using PubMed, Scopus, Web of Knowledge, and PsycINFO; references listed in the papers reviewed and relevant review articles; and the National Cancer Institute's Measures of the Food Environment website. The frequency of the use of dietary intake measures and assessment of specific dietary outcomes was examined, as were patterns of results among studies using different dietary measures. Evidence synthesis The majority of studies used brief instruments, such as screeners or one or two questions, to assess intake. Food frequency questionnaires were used in about a third of studies, one in ten used 24-hour recalls, and fewer than one in twenty used diaries. Little consideration of dietary measurement error was evident. Associations between the food environment and diet were more consistently in the expected direction in studies using less error-prone measures. Conclusions There is a tendency toward the use of brief dietary assessment instruments with low cost and burden rather than more detailed instruments that capture intake with less bias. Use of error-prone dietary measures may lead to spurious findings and reduced power to detect associations. PMID:24355678

  14. Myopia in secondary school students in Mwanza City, Tanzania: the need for a national screening programme

    PubMed Central

    Wedner, S H; Ross, D A; Todd, J; Anemona, A; Balira, R; Foster, A

    2002-01-01

    Background/aims: The prevalence of significant refractive errors and other eye diseases was measured in 2511 secondary school students aged 11–27 years in Mwanza City, Tanzania. Risk factors for myopia were explored. Methods: A questionnaire assessed the students’ socioeconomic background and exposure to near work followed by visual acuity assessment and a full eye examination. Non-cycloplegic objective and subjective refraction was done on all participants with visual acuity of worse than 6/12 in either eye without an obvious cause. Results: 154 (6.1%) students had significant refractive errors. Myopia was the leading refractive error (5.6%). Amblyopia (0.4%), strabismus (0.2%), and other treatable eye disorders were uncommon. Only 30.3% of students with significant refractive errors wore spectacles before the survey. Age, sex, ethnicity, father’s educational status, and a family history of siblings with spectacles were significant independent risk factors for myopia. Conclusion: The prevalence of uncorrected significant refractive errors is high enough to justify a regular school eye screening programme in secondary schools in Tanzania. Risk factors for myopia are similar to those reported in European, North-American, and Asian populations. PMID:12386067

  15. Partial pressure analysis in space testing

    NASA Technical Reports Server (NTRS)

    Tilford, Charles R.

    1994-01-01

    For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or Partial Pressure Analyzers (PPAs) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibration procedures are described that can detect and/or correct other errors.

  16. Is the perception of clean, humid air indeed affected by cooling the respiratory tract?

    NASA Astrophysics Data System (ADS)

    Burek, Rudolf; Polednik, Bernard; Guz, Łukasz

    2017-07-01

    The study aims to determine exposure-response relationships after short exposure to clean air and long exposure to air polluted by people. The impact of the water vapor content of indoor air on its acceptability (ACC) was assessed by the occupants after a short exposure to clean air and an hour-long exposure to increasingly polluted air. The study presents a critical analysis of the stimulation of olfactory sensations by air enthalpy suggested in previous models and proposes a new model based on the Weber-Fechner law. Our assumption was that water vapor is the stimulus of olfactory sensations. The model was calibrated and verified in field conditions, in a mechanically ventilated and air-conditioned auditorium. Measurements of the air temperature, relative humidity, velocity and CO2 content were carried out; the acceptability of air quality was assessed by 162 untrained students. The subjective assessments and the measurements of the environmental qualities allowed for determining the Weber coefficients and the threshold concentrations of water vapor, as well as for establishing the limitations of the model at short and long exposure to polluted indoor air. The results are in agreement with previous studies. The standard error equals 0.07 for immediate assessments and 0.17 for assessments after adaptation. Based on the model one can predict the ACC assessments of trained and untrained participants.
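A Weber-Fechner-style relationship of the kind the paper proposes takes the perceived response to fall with the logarithm of the stimulus (here, water vapour concentration) relative to a threshold. This is a generic sketch with made-up parameters, not the authors' calibrated model:

```python
import math

def acceptability_decrement(c_wv, c_threshold, k):
    """Weber-Fechner sketch: the acceptability decrement grows with the
    log of water-vapour concentration relative to a detection threshold.
    k plays the role of a Weber coefficient (illustrative, uncalibrated)."""
    return -k * math.log(c_wv / c_threshold)
```

At the threshold concentration the predicted decrement is zero, and each doubling of the concentration lowers acceptability by a further k·ln 2, which is the signature of a logarithmic psychophysical law.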

  17. Modelling of individual subject ozone exposure response kinetics.

    PubMed

    Schelegle, Edward S; Adams, William C; Walby, William F; Marion, M Susan

    2012-06-01

    A better understanding of individual subject ozone (O3) exposure response kinetics will provide insight into how to improve models used in the risk assessment of ambient ozone exposure. Our objective was to develop a simple two-compartment exposure-response model that describes individual subject decrements in forced expiratory volume in one second (FEV1) induced by the acute inhalation of O3 lasting up to 8 h. FEV1 measurements of 220 subjects who participated in 14 previously completed studies were fit to the model using both particle swarm and nonlinear least squares optimization techniques to identify three subject-specific coefficients producing minimum "global" and local errors, respectively. Observed and predicted decrements in FEV1 of the 220 subjects were used for validation of the model. Further validation was provided by comparing the observed O3-induced FEV1 decrements in an additional eight studies with predicted values obtained using model coefficients estimated from the 220 subjects used in cross-validation. Overall, the individual subject measured and modeled FEV1 decrements were highly correlated (mean R² of 0.69 ± 0.24). In addition, it was shown that a matrix of individual subject model coefficients can be used to predict the mean and variance of group decrements in FEV1. This modeling approach provides insight into individual subject O3 exposure response kinetics and provides a potential starting point for improving the risk assessment of environmental O3 exposure.
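Fitting subject-specific coefficients by minimising squared error can be sketched as follows. The single-exponential response and the grid-search optimiser below are deliberately simplified stand-ins for the paper's two-compartment model and its particle swarm / nonlinear least squares fits:

```python
import math

def model(t, a, tau):
    """Saturating FEV1 decrement under a constant O3 exposure of duration t
    (hours): amplitude a, time constant tau. A stand-in, not the paper's model."""
    return a * (1.0 - math.exp(-t / tau))

def fit(times, decrements, a_grid, tau_grid):
    """Least-squares grid search for the subject-specific coefficients."""
    best = None
    for a in a_grid:
        for tau in tau_grid:
            sse = sum((model(t, a, tau) - y) ** 2
                      for t, y in zip(times, decrements))
            if best is None or sse < best[0]:
                best = (sse, a, tau)
    return best[1], best[2]
```

Repeating such a fit per subject yields the matrix of individual coefficients from which group mean and variance of FEV1 decrements can be predicted.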

  18. Global DEM Errors Underpredict Coastal Vulnerability to Sea Level Rise and Flooding

    NASA Astrophysics Data System (ADS)

    Kulp, Scott; Strauss, Benjamin

    2016-04-01

    Elevation data based on NASA's Shuttle Radar Topography Mission (SRTM) have been widely used to evaluate threats from global sea level rise, storm surge, and coastal floods. However, SRTM data are known to include large vertical errors in densely urban or densely vegetated areas. The errors may propagate to derived land and population exposure assessments. We compare assessments based on SRTM data against references employing high-accuracy bare-earth elevation data generated from lidar data available for coastal areas of the United States. We find that both 1-arcsecond and 3-arcsecond horizontal resolution SRTM data systematically underestimate exposure across all assessed spatial scales and up to at least 10 m above the high tide line. At 3 m, 1-arcsecond SRTM underestimates U.S. population exposure by more than 60%, and under-predicts population exposure in 90% of coastal states, 87% of counties, and 83% of municipalities. These fractions increase with elevation, but error medians and variability fall to lower levels, with national exposure underestimated by just 24% at 10 m. Results using 3-arcsecond SRTM are extremely similar. Coastal analyses based on SRTM data thus appear to greatly underestimate sea level and flood threats, especially at lower elevations. However, SRTM-based estimates may usefully be regarded as providing lower bounds to actual threats. We additionally assess the performance of NOAA's Global Land One-km Base Elevation Project (GLOBE), another publicly-available global DEM, but do not reach any definitive conclusion because of the spatial heterogeneity in its quality.

  19. Prenatal betamethasone exposure has sex specific effects in reversal learning and attention in juvenile baboons

    PubMed Central

    RODRIGUEZ, Jesse S.; ZÜRCHER, Nicole R.; KEENAN, Kathryn E.; BARTLETT, Thad Q.; NATHANIELSZ, Peter W.; NIJLAND, Mark J.

    2011-01-01

    Objective We investigated effects of three weekly courses of fetal betamethasone (βM) exposure on motivation and cognition in juvenile baboon offspring utilizing the Cambridge Neuropsychological Test Automated Battery. Study design Pregnant baboons (Papio sp.) received two injections of saline control (C) or 175 μg/kg βM 24h apart at 0.6, 0.65 and 0.7 gestation. Offspring [Female (FC), n = 7 and Male (MC), n = 6; Female (FβM), n = 7 and Male (MβM), n = 5] were studied at 2.6–3.2 years with a progressive ratio test for motivation, simple discriminations (SD) and reversals (SR) for associative learning and rule change plasticity, and an intra-dimensional/extra-dimensional (IDED) set-shifting test for attention allocation. Results βM exposure decreased motivation in both sexes. In IDED testing, FβM made more errors in the SR [mean difference of errors (FβM minus MβM) = 20.2 ± 9.9; P≤0.05], compound discrimination [mean difference of errors = 36.3 ± 17.4; P≤0.05] and compound reversal [mean difference of errors = 58 ± 23.6; P<0.05] stages as compared to the MβM offspring. Conclusion This central nervous system developmental programming adds to growing concerns about the long-term effects of repeated fetal synthetic glucocorticoid exposure. In summary, the behavioral effects observed show sex-specific differences in resilience to multiple fetal βM exposures. PMID:21411054

  20. The study of CD side to side error in line/space pattern caused by post-exposure bake effect

    NASA Astrophysics Data System (ADS)

    Huang, Jin; Guo, Eric; Ge, Haiming; Lu, Max; Wu, Yijun; Tian, Mingjing; Yan, Shichuan; Wang, Ran

    2016-10-01

    In semiconductor manufacturing, as the design rule has decreased, the ITRS roadmap requires increasingly tight critical dimension (CD) control. CD uniformity is one of the parameters necessary to assure good performance and reliable functionality of any integrated circuit (IC) [1][2], and towards the advanced technology nodes it is a challenge to control CD uniformity well. Studies of CD uniformity achieved by tuning the post-exposure bake (PEB) and develop processes have made significant progress [3], but CD side-to-side errors are still found in some line/space patterns in practical applications, and the error has approached the uniformity tolerance. Detailed analysis showed that, even when several developer types were used, no significant relationship was found between the CD side-to-side error and the develop process. In addition, it is impossible to correct the CD side-to-side error by electron beam correction, as the error does not appear in all line/space pattern masks. In this paper the root cause of the CD side-to-side error is analyzed, and the PEB module process is optimized as the main factor for improving the CD side-to-side error.

  1. A Sensitive Technique Using Atomic Force Microscopy to Measure the Low Earth Orbit Atomic Oxygen Erosion of Polymers

    NASA Technical Reports Server (NTRS)

    deGroh, Kim K.; Banks, Bruce A.; Clark, Gregory W.; Hammerstrom, Anne M.; Youngstrom, Erica E.; Kaminski, Carolyn; Fine, Elizabeth S.; Marx, Laura M.

    2001-01-01

    Polymers such as polyimide Kapton and Teflon FEP (fluorinated ethylene propylene) are commonly used spacecraft materials due to their desirable properties such as flexibility, low density, and in the case of FEP low solar absorptance and high thermal emittance. Polymers on the exterior of spacecraft in the low Earth orbit (LEO) environment are exposed to energetic atomic oxygen. Atomic oxygen erosion of polymers occurs in LEO and is a threat to spacecraft durability. It is therefore important to understand the atomic oxygen erosion yield (E, the volume loss per incident oxygen atom) of polymers being considered in spacecraft design. Because long-term space exposure data are rare and very costly, short-term exposures such as on the shuttle are often relied upon for atomic oxygen erosion determination. The most common technique for determining E is through mass loss measurements. For limited duration exposure experiments, such as shuttle experiments, the atomic oxygen fluence is often so small that mass loss measurements cannot produce acceptable uncertainties. Therefore, a recession measurement technique has been developed using selective protection of polymer samples, combined with postflight atomic force microscopy (AFM) analysis, to obtain accurate erosion yields of polymers exposed to low atomic oxygen fluences. This paper discusses the procedures used for this recession depth technique along with relevant characterization issues. In particular, a polymer is salt-sprayed prior to flight, then the salt is washed off postflight and AFM is used to determine the erosion depth from the protected plateau. A small sample was salt-sprayed for AFM erosion depth analysis and flown as part of the Limited Duration Candidate Exposure (LDCE-4,-5) shuttle flight experiment on STS-51. This sample was used to study issues such as use of contact versus non-contact mode imaging for determining recession depth measurements. 
Error analyses were conducted, and the percent probable error in the erosion yield obtained by the mass loss and recession depth techniques was compared. The recession depth technique is planned to be used to determine the erosion yields of 42 different polymers in the shuttle flight experiment PEACE (Polymer Erosion And Contamination Experiment), planned to fly in 2002 or 2003.
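The two ways of obtaining the erosion yield E compared in the error analysis can be written down directly from their definitions (E = volume lost per incident oxygen atom). The Kapton-like numbers in the test case are illustrative, not flight data:

```python
def erosion_yield_depth(recession_depth_cm, fluence_atoms_per_cm2):
    """E (cm^3/atom) from an AFM-measured recession depth of the exposed
    surface relative to the salt-protected plateau."""
    return recession_depth_cm / fluence_atoms_per_cm2

def erosion_yield_mass(mass_loss_g, density_g_per_cm3, area_cm2,
                       fluence_atoms_per_cm2):
    """E (cm^3/atom) from mass loss: volume lost = mass loss / density,
    divided by the total number of incident atoms (fluence x area)."""
    return mass_loss_g / (density_g_per_cm3 * area_cm2 * fluence_atoms_per_cm2)
```

At low fluence the mass loss shrinks toward the balance's uncertainty while a micrometre-scale recession step remains resolvable by AFM, which is the motivation for the recession depth technique.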

  2. Cue-induced craving in patients with cocaine use disorder predicts cognitive control deficits toward cocaine cues.

    PubMed

    DiGirolamo, Gregory J; Smelson, David; Guevremont, Nathan

    2015-08-01

    Cue-induced craving is a clinically important aspect of cocaine addiction, influencing ongoing use and sobriety. However, little is known about the relationship between cue-induced craving and cognitive control toward cocaine cues. While studies suggest that cocaine users have an attentional bias toward cocaine cues, the present study extends this research by testing whether cocaine use disorder patients (CDPs) can control their eye movements toward cocaine cues and whether their response varies with cue-induced craving intensity. Thirty CDPs underwent a cue exposure procedure to dichotomize them into high and low craving groups, followed by a modified antisaccade task in which subjects were asked to control their eye movements by looking away from a suddenly presented cocaine or neutral cue. The relationship between breakdowns in cognitive control (as measured by eye errors) and cue-induced craving (changes in self-reported craving following cocaine cue exposure) was investigated. CDPs overall made significantly more errors toward cocaine cues than toward neutral cues, with higher cravers making significantly more errors than lower cravers even though the groups did not differ significantly in addiction severity, impulsivity, anxiety, or depression levels. Cue-induced craving was the only specific and significant predictor of subsequent errors toward cocaine cues. Cue-induced craving thus directly and specifically relates to breakdowns of cognitive control toward cocaine cues in CDPs, with higher cravers being more susceptible. Hence, it may be useful to identify high cravers and target treatment toward curbing craving to decrease the likelihood of a subsequent breakdown in control. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Study of a selection of 10 historical types of dosemeter: variation of the response to Hp(10) with photon energy and geometry of exposure.

    PubMed

    Thierry-Chef, I; Pernicka, F; Marshall, M; Cardis, E; Andreo, P

    2002-01-01

    An international collaborative study of cancer risk among workers in the nuclear industry is under way to directly estimate the cancer risk following protracted low-dose exposure to ionising radiation. An essential aspect of this study is the characterisation and quantification of errors in available dose estimates. One major source of error is dosemeter response under workplace exposure conditions. Little information is available on the energy and geometry response for most of the 124 different dosemeters used historically in participating facilities. Experiments were therefore set up to assess this, using 10 dosemeter types representative of those used over time. Results show that the largest errors were associated with the response of early dosemeters to low-energy photon radiation. Good response was found with modern dosemeters, even at low energy. These results are being used to estimate errors in the response for each dosemeter type used in the participating facilities, so that these can be taken into account in the estimates of cancer risk.

  4. Postprandial suppression of appetite is more reproducible at a group than an individual level: Implications for assessing inter-individual variability.

    PubMed

    Gonzalez, Javier T; Frampton, James; Deighton, Kevin

    2017-01-01

    Individual differences in appetite are increasingly appreciated. However, the day-to-day reliability of appetite measurement in individuals is currently uncharacterised. This study aimed to assess the reliability of appetite following ingestion of mixed-macronutrient liquid meals at a group and individual level. Two experiments were conducted with identical protocols other than meal energy content. During each experiment, 10 non-obese males completed four experimental trials constituting high- and low-energy trials, each performed twice. Experiment one employed 579 kJ (138 kcal) and 1776 kJ (424 kcal) liquid meals. Experiment two employed 828 kJ (198 kcal) and 4188 kJ (1001 kcal) liquid meals. Visual analogue scales were administered to assess appetite for 60 min post-ingestion. The typical error (standard error of measurement) of the appetite area under the curve was 6.2 mm·60 min⁻¹ (95% CI 4.3-11.3 mm·60 min⁻¹), 6.5 mm·60 min⁻¹ (95% CI 4.5-11.9 mm·60 min⁻¹), 7.1 mm·60 min⁻¹ (95% CI 4.9-12.9 mm·60 min⁻¹) and 6.5 mm·60 min⁻¹ (95% CI 4.5-11.8 mm·60 min⁻¹) with the 579, 828, 1776 and 4188 kJ meals, respectively. A systematic bias between first and second exposure was detected for all but the 4188 kJ meal. The change in appetite with high- vs. low-energy meals did not differ at a group level between first and second exposure (mean difference: −0.97 mm·60 min⁻¹; 95% CI −6.48 to 4.53 mm·60 min⁻¹); however, ~50% of individuals differed in their response between first and second exposure by more than the typical error. Appetite responses are more reliable when liquid meals contain a higher rather than lower energy content. Appetite suppression with high- vs. low-energy meals is reproducible at the group but not the individual level, suggesting that multiple exposures to an intervention are required to understand true individual differences in appetite. Copyright © 2016 Elsevier Ltd. All rights reserved.
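The typical error quoted above is conventionally computed from paired repeat trials as the standard deviation of the within-subject differences divided by √2. A minimal sketch with invented appetite AUC values:

```python
import math

def typical_error(trial1, trial2):
    """Typical error (standard error of measurement) from two repeat
    trials per subject: SD of the paired differences divided by sqrt(2)."""
    diffs = [b - a for a, b in zip(trial1, trial2)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return sd / math.sqrt(2)
```

An individual whose change in response across exposures exceeds this value is the "~50% of individuals" case the abstract flags: their first and second responses differ by more than measurement noise alone would predict.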

  5. Modulation of error-sensitivity during a prism adaptation task in people with cerebellar degeneration

    PubMed Central

    Shadmehr, Reza; Ohminami, Shinya; Tsutsumi, Ryosuke; Shirota, Yuichiro; Shimizu, Takahiro; Tanaka, Nobuyuki; Terao, Yasuo; Tsuji, Shoji; Ugawa, Yoshikazu; Uchimura, Motoaki; Inoue, Masato; Kitazawa, Shigeru

    2015-01-01

    Cerebellar damage can profoundly impair human motor adaptation. For example, if reaching movements are perturbed abruptly, cerebellar damage impairs the ability to learn from the perturbation-induced errors. Interestingly, if the perturbation is imposed gradually over many trials, people with cerebellar damage may exhibit improved adaptation. However, this result is controversial, since the differential effects of gradual vs. abrupt protocols have not been observed in all studies. To examine this question, we recruited patients with pure cerebellar ataxia due to cerebellar cortical atrophy (n = 13) and asked them to reach to a target while viewing the scene through wedge prisms. The prisms were computer controlled, making it possible to impose the full perturbation abruptly in one trial, or build up the perturbation gradually over many trials. To control visual feedback, we employed shutter glasses that removed visual feedback during the reach, allowing us to measure trial-by-trial learning from error (termed error-sensitivity), and trial-by-trial decay of motor memory (termed forgetting). We found that the patients benefited significantly from the gradual protocol, improving their performance with respect to the abrupt protocol by exhibiting smaller errors during the exposure block, and producing larger aftereffects during the postexposure block. Trial-by-trial analysis suggested that this improvement was due to increased error-sensitivity in the gradual protocol. Therefore, cerebellar patients exhibited an improved ability to learn from error if they experienced those errors gradually. This improvement coincided with increased error-sensitivity and was present in both groups of subjects, suggesting that control of error-sensitivity may be spared despite cerebellar damage. PMID:26311179
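Trial-by-trial learning of the kind analysed here is commonly modelled with a two-parameter state-space equation, x(n+1) = a·x(n) + b·e(n), where a is retention (forgetting) and b is error-sensitivity. A minimal sketch under that standard model, with invented parameters rather than the paper's estimates:

```python
def simulate_adaptation(perturbations, retention_a, sensitivity_b):
    """State-space model of trial-by-trial adaptation: on each trial the
    motor memory x decays by the retention factor a and is updated by a
    fraction b (error-sensitivity) of the experienced error e = p - x."""
    x, states = 0.0, []
    for p in perturbations:
        error = p - x
        x = retention_a * x + sensitivity_b * error
        states.append(x)
    return states
```

Feeding in a gradually ramped perturbation series instead of a constant one changes the error history the learner experiences, which is exactly the abrupt-versus-gradual manipulation the study exploits; fitting a and b per subject then quantifies whether error-sensitivity differs between protocols.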

  6. An improved maximum permissible exposure meter for safety assessments of laser radiation

    NASA Astrophysics Data System (ADS)

    Corder, D. A.; Evans, D. R.; Tyrer, J. R.

    1997-12-01

Current interest in laser radiation safety requires demonstration that a laser system has been designed to prevent exposure to levels of laser radiation exceeding the Maximum Permissible Exposure. In some simple systems it is possible to prove this by calculation, but in most cases it is preferable to confirm calculated results with a measurement. This measurement may be made with commercially available equipment, but there are limitations with this approach. A custom-designed instrument is presented in which the full range of measurement issues has been addressed. Important features of the instrument are the design and optimisation of detector heads for the measurement task, and consideration of user interface requirements. Three designs for detector head are presented; these cover the majority of common laser types. Detector heads are designed to optimise the performance of relatively low-cost detector elements for this measurement task. The three detector head designs are suitable for interfacing to photodiodes, low-power thermopiles and pyroelectric detectors. Design of the user interface was an important aspect of the work. A user interface designed for the specific application minimises the risk of user error or misinterpretation of the measurement results. A palmtop computer was used to provide an advanced user interface. User requirements were considered so that the final instrument was well matched to the task of laser radiation hazard audits.

  7. Mechanical failure probability of glasses in Earth orbit

    NASA Technical Reports Server (NTRS)

    Kinser, Donald L.; Wiedlocher, David E.

    1992-01-01

Results of five years of Earth-orbital exposure indicate that, for the glasses examined, radiation effects on mechanical properties are less than the probable error of measurement. During the 5-year exposure, seven micrometeorite or space debris impacts occurred on the samples examined. These impacts occurred at locations that were not subjected to effective mechanical testing; hence, only limited information on their influence on mechanical strength was obtained. Combining these results with the micrometeorite and space debris impact frequencies obtained by other experiments permits estimates of the failure probability of glasses exposed to mechanical loading under Earth-orbit conditions. This probabilistic failure prediction is described and illustrated with examples.

  8. Quantifying the foodscape: A systematic review and meta-analysis of the validity of commercially available business data.

    PubMed

    Lebel, Alexandre; Daepp, Madeleine I G; Block, Jason P; Walker, Renée; Lalonde, Benoît; Kestens, Yan; Subramanian, S V

    2017-01-01

    This paper reviews studies of the validity of commercially available business (CAB) data on food establishments ("the foodscape"), offering a meta-analysis of characteristics associated with CAB quality and a case study evaluating the performance of commonly-used validity indicators describing the foodscape. Existing validation studies report a broad range in CAB data quality, although most studies conclude that CAB quality is "moderate" to "substantial". We conclude that current studies may underestimate the quality of CAB data. We recommend that future validation studies use density-adjusted and exposure measures to offer a more meaningful characterization of the relationship of data error with spatial exposure.

  9. Quantifying the foodscape: A systematic review and meta-analysis of the validity of commercially available business data

    PubMed Central

    Lebel, Alexandre; Daepp, Madeleine I. G.; Block, Jason P.; Walker, Renée; Lalonde, Benoît; Kestens, Yan; Subramanian, S. V.

    2017-01-01

    This paper reviews studies of the validity of commercially available business (CAB) data on food establishments (“the foodscape”), offering a meta-analysis of characteristics associated with CAB quality and a case study evaluating the performance of commonly-used validity indicators describing the foodscape. Existing validation studies report a broad range in CAB data quality, although most studies conclude that CAB quality is “moderate” to “substantial”. We conclude that current studies may underestimate the quality of CAB data. We recommend that future validation studies use density-adjusted and exposure measures to offer a more meaningful characterization of the relationship of data error with spatial exposure. PMID:28358819

  10. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    NASA Technical Reports Server (NTRS)

    Kalton, G.

    1983-01-01

A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient is also determined.
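The variance inflation these formulae address can be illustrated with the classical Kish approximation for the design effect of a clustered sample, together with the textbook optimum cluster size under a simple linear cost model. This is a generic sketch, not the paper's exact formulae, and all numbers are hypothetical.

```python
import math

def design_effect(cluster_size, roh):
    """Kish's approximate variance inflation for a clustered sample:
    deff = 1 + (b - 1) * roh, with b interviews per cluster and roh the
    intra-cluster correlation of the variable of interest."""
    return 1.0 + (cluster_size - 1) * roh

def clustered_se(srs_se, cluster_size, roh):
    """Standard error under clustering: simple-random-sampling standard
    error inflated by sqrt(deff)."""
    return srs_se * math.sqrt(design_effect(cluster_size, roh))

def optimum_cluster_size(cost_per_cluster, cost_per_interview, roh):
    """Textbook optimum number of interviews per cluster under a linear
    cost model C = c1 * clusters + c2 * interviews:
    b_opt = sqrt((c1 / c2) * (1 - roh) / roh)."""
    return math.sqrt((cost_per_cluster / cost_per_interview) * (1 - roh) / roh)

# Hypothetical noise-annoyance survey: 20 respondents per sampled area,
# intra-cluster correlation 0.05, and a simple-random-sampling standard
# error of 0.05 for a regression coefficient.
print(round(design_effect(20, 0.05), 3))       # 1.95
print(round(clustered_se(0.05, 20, 0.05), 4))  # 0.0698
print(round(optimum_cluster_size(400, 25, 0.05), 1))
```

The deff formula strictly applies to sample means; one point of the paper is that design effects for regression coefficients behave differently, which is why dedicated formulae are needed.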

  11. IMPROVED SPECTROPHOTOMETRIC CALIBRATION OF THE SDSS-III BOSS QUASAR SAMPLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margala, Daniel; Kirkby, David; Dawson, Kyle

    2016-11-10

We present a model for spectrophotometric calibration errors in observations of quasars from the third generation of the Sloan Digital Sky Survey Baryon Oscillation Spectroscopic Survey (BOSS) and describe the correction procedure we have developed and applied to this sample. Calibration errors are primarily due to atmospheric differential refraction and guiding offsets during each exposure. The corrections potentially reduce the systematics for any studies of BOSS quasars, including the measurement of baryon acoustic oscillations using the Ly α forest. Our model suggests that, on average, the observed quasar flux in BOSS is overestimated by ∼19% at 3600 Å and underestimated by ∼24% at 10,000 Å. Our corrections for the entire BOSS quasar sample are publicly available.

  12. Cued to Act on Impulse: More Impulsive Choice and Risky Decision Making by Women Susceptible to Overeating after Exposure to Food Stimuli.

    PubMed

    Yeomans, Martin R; Brace, Aaron

    2015-01-01

There is increasing evidence that individual differences in the tendency to overeat relate to impulsivity, possibly by increasing reactivity to food-related cues in the environment. This study tested whether acute exposure to food cues enhanced impulsive and risky responses in women classified on tendency to overeat, indexed by scores on the Three-Factor Eating Questionnaire disinhibition (TFEQ-D), restraint (TFEQ-R) and hunger scales. Ninety-six healthy women completed two measures of impulsive responding (a delayed discounting task, DDT, and a Go/No-Go task, GNG) and a measure of risky decision making (the balloon analogue risk task, BART), as well as questionnaire measures of impulsive behaviour, either after looking at a series of pictures of food or at visually matched controls. Impulsivity (DDT) and risk-taking (BART) were both positively associated with TFEQ-D scores, but in both cases this effect was exacerbated by prior exposure to food cues. No effects of restraint were found. TFEQ-D scores were also related to more commission errors on the GNG, while restrained women were slower on the GNG, but neither effect was modified by cue exposure. Overall these data suggest that exposure to food cues acts to enhance general impulsive responding in women at risk of overeating, and tentatively suggest an important interaction between the tendency for impulsive decision making and food cues that may help explain a key underlying risk factor for overeating.

  13. Cued to Act on Impulse: More Impulsive Choice and Risky Decision Making by Women Susceptible to Overeating after Exposure to Food Stimuli

    PubMed Central

    Yeomans, Martin R.; Brace, Aaron

    2015-01-01

There is increasing evidence that individual differences in the tendency to overeat relate to impulsivity, possibly by increasing reactivity to food-related cues in the environment. This study tested whether acute exposure to food cues enhanced impulsive and risky responses in women classified on tendency to overeat, indexed by scores on the Three-Factor Eating Questionnaire disinhibition (TFEQ-D), restraint (TFEQ-R) and hunger scales. Ninety-six healthy women completed two measures of impulsive responding (a delayed discounting task, DDT, and a Go/No-Go task, GNG) and a measure of risky decision making (the balloon analogue risk task, BART), as well as questionnaire measures of impulsive behaviour, either after looking at a series of pictures of food or at visually matched controls. Impulsivity (DDT) and risk-taking (BART) were both positively associated with TFEQ-D scores, but in both cases this effect was exacerbated by prior exposure to food cues. No effects of restraint were found. TFEQ-D scores were also related to more commission errors on the GNG, while restrained women were slower on the GNG, but neither effect was modified by cue exposure. Overall these data suggest that exposure to food cues acts to enhance general impulsive responding in women at risk of overeating, and tentatively suggest an important interaction between the tendency for impulsive decision making and food cues that may help explain a key underlying risk factor for overeating. PMID:26378459

  14. Predicting Aspergillus fumigatus exposure from composting facilities using a dispersion model: A conditional calibration and validation.

    PubMed

    Douglas, Philippa; Tyrrel, Sean F; Kinnersley, Robert P; Whelan, Michael; Longhurst, Philip J; Hansell, Anna L; Walsh, Kerry; Pollard, Simon J T; Drew, Gillian H

    2017-01-01

Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are unclear. Exposure levels are difficult to quantify, as established sampling methods are costly and time-consuming, and current data provide limited temporal and spatial information. Confidence in dispersion model outputs in this context would be advantageous to provide a more detailed exposure assessment. We present the calibration and validation of a recognised atmospheric dispersion model (ADMS) for bioaerosol exposure assessments. The model was calibrated by a trial-and-error optimisation of observed Aspergillus fumigatus concentrations at different locations around a composting site. Validation was performed using a second dataset of measured concentrations for a different site. The best fit between modelled and measured data was achieved when emissions were represented as a single area source, with a temperature of 29°C. Predicted bioaerosol concentrations were within an order of magnitude of measured values (1000–10,000 CFU/m³) at the validation site, once minor adjustments were made to reflect local differences between the sites (r² > 0.7 at 150, 300, 500 and 600 m downwind of the source). Results suggest that calibrated dispersion modelling can be applied to make reasonable predictions of bioaerosol exposures at multiple sites and may be used to inform site regulation and operational management. Copyright © 2016 The Authors. Published by Elsevier GmbH. All rights reserved.
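The trial-and-error calibration described above amounts to a search over a source parameter that minimizes the mismatch between modelled and measured concentrations. Since ADMS is a proprietary model, the sketch below substitutes a made-up power-law decay function (`toy_model`) as a stand-in; only the grid search over source temperature mirrors the paper's procedure, and all numbers are invented.

```python
import numpy as np

def calibrate_temperature(model, measured, temps):
    """Pick the source temperature whose modelled concentrations best
    match measurements, minimising mean squared error on a log10 scale
    (bioaerosol counts in CFU/m^3 span orders of magnitude)."""
    errors = []
    for t in temps:
        modelled = model(t)
        errors.append(np.mean((np.log10(modelled) - np.log10(measured)) ** 2))
    return temps[int(np.argmin(errors))]

# Stand-in for a dispersion-model run: concentration falls off downwind
# and rises with source temperature (purely illustrative, not ADMS).
distances = np.array([150.0, 300.0, 500.0, 600.0])  # m downwind of source
def toy_model(temp):
    return 5e6 * (temp / 25.0) / distances ** 1.5

# Synthetic "observations": the toy model at 29 degrees C plus noise.
measured = toy_model(29.0) * np.array([1.2, 0.9, 1.1, 0.95])

best = calibrate_temperature(toy_model, measured, np.arange(20.0, 40.0, 1.0))
print(best)
```

The log-scale error metric matters here: an order-of-magnitude criterion like the paper's is naturally expressed as a bound on log10 residuals.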

  15. Exposure misclassification due to residential mobility during pregnancy.

    PubMed

    Hodgson, Susan; Lurz, Peter W W; Shirley, Mark D F; Bythell, Mary; Rankin, Judith

    2015-06-01

Pregnant women are a highly mobile group, yet studies suggest exposure error due to migration in pregnancy is minimal. We aimed to investigate the impact of maternal residential mobility on exposure to environmental variables (urban fabric, roads and air pollution (PM₁₀ and NO₂)) and socio-economic factors (deprivation) that varied spatially and temporally. We used data on residential histories for deliveries at ≥24 weeks gestation recorded by the Northern Congenital Abnormality Survey, 2000-2008 (n=5399) to compare: (a) exposure at conception assigned to maternal postcode at delivery versus maternal postcode at conception, and (b) exposure at conception assigned to maternal postcode at delivery versus mean exposure based on residences throughout pregnancy. In this population, 24.4% of women moved during pregnancy. Depending on the exposure variable assessed, 1-12% of women overall were assigned an exposure at delivery >1 SD different to that at conception, and 2-25% were assigned an exposure at delivery >1 SD different to the mean exposure throughout pregnancy. To meaningfully explore the subtle associations between environmental exposures and health, consideration must be given to the error introduced by residential mobility. Copyright © 2015 The Authors. Published by Elsevier GmbH. All rights reserved.

  16. Evaluation of the Validity of Job Exposure Matrix for Psychosocial Factors at Work

    PubMed Central

    Solovieva, Svetlana; Pensola, Tiina; Kausto, Johanna; Shiri, Rahman; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira

    2014-01-01

Objective: To study the performance of a developed job exposure matrix (JEM) for the assessment of psychosocial factors at work in terms of accuracy, possible misclassification bias and predictive ability to detect known associations with depression and low back pain (LBP). Materials and Methods: We utilized two large population surveys (the Health 2000 Study and the Finnish Work and Health Surveys), one to construct the JEM and another to test matrix performance. In the first study, information on job demands, job control, monotonous work and social support at work was collected via face-to-face interviews. Job strain was operationalized based on job demands and job control using the quadrant approach. In the second study, the sensitivity and specificity were estimated applying a Bayesian approach. The magnitude of misclassification error was examined by calculating the biased odds ratios as a function of the sensitivity and specificity of the JEM and fixed true prevalence and odds ratios. Finally, we adjusted the observed associations between JEM measures and selected health outcomes for misclassification error. Results: The matrix showed good accuracy for job control and job strain, while its performance for other exposures was relatively low. Without correction for exposure misclassification, the JEM was able to detect the association between job strain and depression in men and between monotonous work and LBP in both genders. Conclusions: Our results suggest that the JEM more accurately identifies occupations with low control and high strain than those with high demands or low social support. Overall, the present JEM is a useful source of job-level psychosocial exposures in epidemiological studies lacking individual-level exposure information. Furthermore, we showed the applicability of a Bayesian approach in the evaluation of the performance of the JEM in a situation where, in practice, no gold standard of exposure assessment exists. PMID:25268276
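The adjustment step, recomputing associations from the sensitivity and specificity of the JEM, can be sketched with the standard back-correction formula for non-differential exposure misclassification, p = (p_obs + specificity − 1)/(sensitivity + specificity − 1). The prevalences, sensitivity and specificity below are hypothetical, not values from the study.

```python
def corrected_prevalence(p_obs, sens, spec):
    """Back-correct an observed exposure prevalence for non-differential
    misclassification: p = (p_obs + spec - 1) / (sens + spec - 1).
    Valid when sens + spec > 1 and the result lies in [0, 1]."""
    return (p_obs + spec - 1.0) / (sens + spec - 1.0)

def corrected_odds_ratio(p_cases, p_controls, sens, spec):
    """Odds ratio recomputed from misclassification-corrected exposure
    prevalences among cases and controls."""
    pc = corrected_prevalence(p_cases, sens, spec)
    p0 = corrected_prevalence(p_controls, sens, spec)
    return (pc / (1 - pc)) / (p0 / (1 - p0))

# Hypothetical example: JEM-assigned job strain in 30% of cases and 20%
# of controls, with sensitivity 0.70 and specificity 0.85.
or_observed = (0.30 / 0.70) / (0.20 / 0.80)
or_corrected = corrected_odds_ratio(0.30, 0.20, sens=0.70, spec=0.85)

# Non-differential misclassification biases toward the null, so the
# corrected odds ratio is farther from 1 than the observed one.
print(round(or_observed, 2), round(or_corrected, 2))
```

The Bayesian approach in the paper goes further by treating sensitivity and specificity as uncertain quantities rather than fixed constants; the deterministic formula above is the simplest version of the same correction.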

  17. How Does Variability in Aragonite Saturation Proxies Impact Our Estimates of the Intensity and Duration of Exposure to Aragonite Corrosive Conditions in a Coastal Upwelling System?

    NASA Astrophysics Data System (ADS)

    Abell, J. T.; Jacobsen, J.; Bjorkstedt, E.

    2016-02-01

Determining aragonite saturation state (Ω) in seawater requires measurement of two parameters of the carbonate system: most commonly dissolved inorganic carbon (DIC) and total alkalinity (TA). The routine measurement of DIC and TA is not always possible on frequently repeated hydrographic lines or at moored time series that collect hydrographic data at short time intervals. In such cases a proxy can be developed that relates the saturation state derived from one-time or infrequent DIC and TA measurements (Ωmeas) to more frequently measured parameters such as dissolved oxygen (DO) and temperature (Temp). These proxies are generally based on best-fit parameterizations that utilize reference values of DO and Temp and adjust linear coefficients until the error between the proxy-derived saturation state (Ωproxy) and Ωmeas is minimized. Proxies have been used to infer Ω from moored hydrographic sensors and gliders, which routinely collect DO and Temp data but do not include carbonate parameter measurements. Proxies can also calculate Ω in regional oceanographic models which do not explicitly include carbonate parameters. Here we examine the variability and accuracy of Ωproxy along a near-shore hydrographic line and at a moored time-series station at Trinidad Head, CA. The saturation state is determined using proxies from different coastal regions of the California Current Large Marine Ecosystem and from different years of sampling along the hydrographic line. We then calculate the variability and error associated with the use of different proxy coefficients, the sensitivity to reference values and the inclusion of additional variables. We demonstrate how this variability affects estimates of the intensity and duration of exposure to aragonite corrosive conditions on the near-shore shelf and in the water column.
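A proxy of the kind described, linear in DO and temperature anomalies about reference values, can be fitted by ordinary least squares. The sketch below uses synthetic data with a known relation and invented reference values; it illustrates the functional form, not the specific regional parameterizations examined in the study.

```python
import numpy as np

def fit_omega_proxy(do, temp, omega, do_ref=150.0, temp_ref=8.0):
    """Least-squares fit of a saturation-state proxy of the form
    Omega = a0 + a1*(DO - DO_ref) + a2*(T - T_ref).
    The reference values simply centre the fit (invented here)."""
    X = np.column_stack([np.ones_like(do), do - do_ref, temp - temp_ref])
    coeffs, *_ = np.linalg.lstsq(X, omega, rcond=None)
    return coeffs

def omega_from_proxy(coeffs, do, temp, do_ref=150.0, temp_ref=8.0):
    """Evaluate the fitted proxy at new DO/temperature observations."""
    a0, a1, a2 = coeffs
    return a0 + a1 * (do - do_ref) + a2 * (temp - temp_ref)

# Synthetic "calibration cruise": Omega generated from a known linear
# relation plus measurement noise, then recovered by the fit.
rng = np.random.default_rng(0)
do = rng.uniform(60.0, 260.0, 50)    # dissolved oxygen, umol/kg
temp = rng.uniform(7.0, 14.0, 50)    # deg C
omega = 1.4 + 0.006 * (do - 150.0) + 0.10 * (temp - 8.0) \
        + rng.normal(0, 0.02, 50)

coeffs = fit_omega_proxy(do, temp, omega)
print(np.round(coeffs, 3))
```

The sensitivity analyses in the abstract correspond to varying `do_ref`, `temp_ref`, and the training region or year, then propagating the resulting coefficient differences into Ω estimates.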

  18. Validation of automatic joint space width measurements in hand radiographs in rheumatoid arthritis

    PubMed Central

    Schenk, Olga; Huo, Yinghe; Vincken, Koen L.; van de Laar, Mart A.; Kuper, Ina H. H.; Slump, Kees C. H.; Lafeber, Floris P. J. G.; Bernelot Moens, Hein J.

    2016-01-01

Computerized methods promise quick, objective, and sensitive tools to quantify progression of radiological damage in rheumatoid arthritis (RA). Measurement of joint space width (JSW) in finger and wrist joints with these systems performed comparably to the Sharp–van der Heijde score (SHS). A next step toward clinical use, validation of precision and accuracy in hand joints with minimal damage, is described with a close scrutiny of sources of error. A recently developed system to measure metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints was validated in consecutive hand images of RA patients. To assess the impact of image acquisition, measurements on radiographs from a multicenter trial and from a recent prospective cohort in a single hospital were compared. Precision of the system was tested by comparing the joint space in mm in pairs of subsequent images with a short interval without progression of SHS. In case of incorrect measurements, the source of error was analyzed with a review by human experts. Accuracy was assessed by comparison with reported measurements from other systems. In the two series of radiographs, the system could automatically locate and measure 1003/1088 (92.2%) and 1143/1200 (95.3%) individual joints, respectively. In joints with a normal SHS, the average (SD) JSW of MCP joints was 1.7±0.2 and 1.6±0.3 mm in the two series of radiographs, and of PIP joints 1.0±0.2 and 0.9±0.2 mm. The difference in JSW between two serial radiographs with an interval of 6 to 12 months and unchanged SHS was 0.0±0.1 mm, indicating very good precision. Errors occurred more often in radiographs from the multicenter cohort than in a more recent series from a single hospital.
Detailed analysis of the 55/1125 (4.9%) measurements that had a discrepant paired measurement revealed that variation in the process of image acquisition (exposure in 15% and repositioning in 57%) was a more frequent source of error than incorrect delineation by the software (25%). Various steps in the validation of an automated measurement system for JSW of MCP and PIP joints are described. The use of serial radiographs from different sources, with a short interval and limited damage, is helpful to detect sources of error. Image acquisition, in particular repositioning, is a dominant source of error. PMID:27921071

  19. Estimation of Particulate Mass and Manganese Exposure Levels among Welders

    PubMed Central

    Hobson, Angela; Seixas, Noah; Sterling, David; Racette, Brad A.

    2011-01-01

Background: Welders are frequently exposed to manganese (Mn), which may increase the risk of neurological impairment. Historical exposure estimates for welding-exposed workers are needed for epidemiological studies evaluating the relationship between welding and neurological or other health outcomes. The objective of this study was to develop and validate a multivariate model to estimate quantitative levels of welding fume exposures based on welding particulate mass and Mn concentrations reported in the published literature. Methods: Articles that described welding particulate and Mn exposures during field welding activities were identified through a comprehensive literature search. Summary measures of exposure and related determinants such as year of sampling, welding process performed, type of ventilation used, degree of enclosure, base metal, and location of sampling filter were extracted from each article. The natural log of the reported arithmetic mean exposure level was used as the dependent variable in model building, while the independent variables included the exposure determinants. Cross-validation was performed to aid in model selection and to evaluate the generalizability of the models. Results: A total of 33 particulate and 27 Mn means were included in the regression analysis. The final model explained 76% of the variability in the mean exposures and included welding process and degree of enclosure as predictors. There was very little change in the explained variability and root mean squared error between the final model and its cross-validation model indicating the final model is robust given the available data. 
Conclusions: This model may be improved with more detailed exposure determinants; however, the relatively large amount of variance explained by the final model, along with the positive generalizability results of the cross-validation, increases confidence that the estimates derived from this model can be used for estimating welder exposures in the absence of individual measurement data. PMID:20870928
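The model-building strategy described (regressing the natural log of reported mean exposures on exposure determinants and checking generalizability by cross-validation) can be sketched as follows. The determinant labels, coefficients, and data are hypothetical stand-ins, and leave-one-out is used in place of whatever cross-validation scheme the authors applied.

```python
import numpy as np

def loo_rmse(X, y):
    """Leave-one-out cross-validated root mean squared error for an
    ordinary least-squares model of y (here, ln of mean exposure)."""
    n = len(y)
    residuals = []
    for i in range(n):
        keep = np.arange(n) != i                 # hold out observation i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        residuals.append(y[i] - X[i] @ beta)     # out-of-sample residual
    return float(np.sqrt(np.mean(np.square(residuals))))

# Hypothetical literature summary: ln(mean particulate exposure) modelled
# on two dummy-coded determinants, a welding-process indicator and an
# enclosure indicator, with 33 published means as in the paper.
rng = np.random.default_rng(1)
process = rng.integers(0, 2, 33)
enclosed = rng.integers(0, 2, 33)
ln_exposure = 0.5 + 0.8 * process + 1.1 * enclosed + rng.normal(0, 0.4, 33)

X = np.column_stack([np.ones(33), process, enclosed])
rmse = loo_rmse(X, ln_exposure)
print(round(rmse, 3))
```

Comparing this cross-validated RMSE against the in-sample RMSE is the robustness check the abstract alludes to: a small gap indicates the model generalizes beyond the fitted data.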

  20. Occupational Exposure to Manganese and Fine Motor Skills in Elderly Men: Results from the Heinz Nixdorf Recall Study.

    PubMed

    Pesch, Beate; Casjens, Swaantje; Weiss, Tobias; Kendzia, Benjamin; Arendt, Marina; Eisele, Lewin; Behrens, Thomas; Ulrich, Nadin; Pundt, Noreen; Marr, Anja; Robens, Sibylle; Van Thriel, Christoph; Van Gelder, Rainer; Aschner, Michael; Moebus, Susanne; Dragano, Nico; Brüning, Thomas; Jöckel, Karl-Heinz

    2017-11-10

Exposure to manganese (Mn) may cause movement disorders, but less is known about whether the effects persist after the termination of exposure. This study investigated the association between former exposure to Mn and fine motor deficits in elderly men from an industrial area with steel production. Data on the occupational history and fine motor tests were obtained from the second follow-up of the prospective Heinz Nixdorf Recall Study (2011-2014). The study population included 1232 men (median age 68 years). Mn in blood (MnB) was determined in archived samples (2000-2003). The association of Mn exposure (working as a welder or in other at-risk occupations, cumulative exposure to inhalable Mn, MnB) with various motor functions (errors in line tracing, steadiness, or aiming and tapping hits) was investigated with Poisson and logistic regression, adjusted for iron status and other covariates. Odds ratios (ORs) with 95% confidence intervals (CIs) were estimated for substantially impaired dexterity (errors >90th percentile, tapping hits <10th percentile). The median cumulative exposure to inhalable Mn was 58 µg m⁻³ years in 322 men who had ever worked in at-risk occupations. Although we observed a partly better motor performance of exposed workers at group level, we found fewer tapping hits in men with cumulative Mn exposure >184.8 µg m⁻³ years (OR 2.15, 95% CI 1.17-3.94). MnB ≥15 µg l⁻¹, serum ferritin ≥400 µg l⁻¹, and gamma-glutamyl transferase ≥74 U l⁻¹ were associated with a greater number of errors in line tracing. We found evidence that exposure to inhalable Mn may carry a risk for dexterity deficits. Whether these deficits can be exclusively attributed to Mn remains to be elucidated, as airborne Mn is strongly correlated with iron in metal fumes, and high ferritin was also associated with errors in line tracing. Furthermore, hand training effects must be taken into account when testing for fine motor skills. © The Author 2017. 
Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. 
Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. 
Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  3. VizieR Online Data Catalog: Positions of 502 Stars in Pleiades Region (Eichhorn+ 1970)

    NASA Astrophysics Data System (ADS)

    Eichhorn, H.; Googe, W. D.; Lukac, C. F.; Murphy, J. K.

    1996-01-01

    The catalog contains the positions (equinox B1900.0 and epoch B1955.0) of 502 stars in a region of about 1.5 degrees square in the Pleiades cluster, centered on Eta Tau. These coordinates have been derived from measurements of stellar images obtained with 65 exposures of various durations on 14 photographic plates with two telescopes at McCormick Observatory and Van Vleck Observatory. The plates were reduced by the plate overlap method, which resulted in a high degree of systematic accuracy in the final positions. Data in the machine version include Hertzsprung number, color index, photovisual magnitude, right ascension and declination and their standard errors, proper motion, and differences between the present position and previous works. Data for exposures, plates, and images measured, present in the published catalog, are not included in the machine version. (1 data file).

  4. The Hanford Thyroid Disease Study: an alternative view of the findings.

    PubMed

    Hoffman, F Owen; Ruttenber, A James; Apostoaei, A Iulian; Carroll, Raymond J; Greenland, Sander

    2007-02-01

The Hanford Thyroid Disease Study (HTDS) is one of the largest and most complex epidemiologic studies of the relation between environmental exposures to ¹³¹I and thyroid disease. The study detected no dose-response relation using a 0.05 level for statistical significance. The results for thyroid cancer appear inconsistent with those from other studies of populations with similar exposures, and either reflect inadequate statistical power, bias, or unique relations between exposure and disease risk. In this paper, we explore these possibilities, and present evidence that the HTDS statistical power was inadequate due to complex uncertainties associated with the mathematical models and assumptions used to reconstruct individual doses. We conclude that, at the very least, the confidence intervals reported by the HTDS for thyroid cancer and other thyroid diseases are too narrow because they fail to reflect key uncertainties in the measurement-error structure. We recommend that the HTDS results be interpreted as inconclusive rather than as evidence for little or no disease risk from Hanford exposures.

  5. Comparison of three methods for evaluation of work postures in a truck assembly plant.

    PubMed

    Zare, Mohsen; Biau, Sophie; Brunet, Rene; Roquelaure, Yves

    2017-11-01

    This study compared the results of three risk assessment tools (self-reported questionnaire, observational tool, direct measurement method) for the upper limbs and back in a truck assembly plant at two cycle times (11 and 8 min). The weighted Kappa factor showed fair agreement between the observational and direct measurement methods for the arm (0.39) and back (0.47). The weighted Kappa factor for these methods was poor for the neck (0) and wrist (0), but the observed proportional agreement (Po) was 0.78 for the neck and 0.83 for the wrist. The weighted Kappa factor between the questionnaire and direct measurement showed poor or slight agreement (0) for different body segments in both cycle times. The results revealed moderate agreement between the observational tool and the direct measurement method, and poor agreement between the self-reported questionnaire and direct measurement. Practitioner Summary: This study provides risk exposure measurement by different common ergonomic methods in the field. The results help to develop valid measurements and improve exposure evaluation. Hence, ergonomists and practitioners should apply these methods with caution, or at least with a clear understanding of their errors and limitations.
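
    The agreement statistics reported above can be reproduced from paired ratings. Below is a minimal sketch (not the study's actual code; function names, category counts, and the linear weighting scheme are illustrative assumptions) of a weighted Cohen's kappa and the observed proportional agreement (Po):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat):
    """Linearly weighted Cohen's kappa for two raters on categories 0..n_cat-1."""
    conf = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        conf[a, b] += 1.0
    conf /= conf.sum()
    idx = np.arange(n_cat)
    w = np.abs(idx[:, None] - idx[None, :])        # linear disagreement weights
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))
    return 1.0 - (w * conf).sum() / (w * expected).sum()

def proportional_agreement(r1, r2):
    """Observed proportion of exact agreement, Po."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    return float((r1 == r2).mean())
```

    A high Po alongside a kappa near 0, as reported here for the neck and wrist, typically arises when one rating category dominates, so that chance-corrected agreement collapses even though raw agreement looks good.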

  6. Air pollution exposure modeling of individuals

    EPA Science Inventory

    Air pollution epidemiology studies of ambient fine particulate matter (PM2.5) often use outdoor concentrations as exposure surrogates. These surrogates can induce exposure error since they do not account for (1) time spent indoors with ambient PM2.5 levels attenuated from outdoor...

  7. Temporal variability in urinary levels of drinking water disinfection byproducts dichloroacetic acid and trichloroacetic acid among men

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yi-Xin; Zeng, Qiang; Wang, Le

    Urinary haloacetic acids (HAAs), such as dichloroacetic acid (DCAA) and trichloroacetic acid (TCAA), have been suggested as potential biomarkers of exposure to drinking water disinfection byproducts (DBPs). However, variable exposure to and the short elimination half-lives of these biomarkers can result in considerable variability in urinary measurements, leading to exposure misclassification. Here we examined the variability of DCAA and TCAA levels in the urine among eleven men who provided urine samples on 8 days over 3 months. The urinary concentrations of DCAA and TCAA were measured by gas chromatography coupled with electron capture detection. We calculated the intraclass correlation coefficients (ICCs) to characterize the within-person and between-person variances and computed the sensitivity and specificity to assess how well single or multiple urine collections accurately determined personal 3-month average DCAA and TCAA levels. The within-person variance was much higher than the between-person variance for all three sample types (spot, first morning, and 24-h urine samples) for DCAA (ICC=0.08–0.37) and TCAA (ICC=0.09–0.23), regardless of the sampling interval. A single spot urinary sample predicted high (top 33%) 3-month average DCAA and TCAA levels with high specificity (0.79 and 0.78, respectively) but relatively low sensitivity (0.47 and 0.50, respectively). Collecting two or three urine samples from each participant improved the classification. The poor reproducibility of the measured urinary DCAA and TCAA concentrations indicates that a single measurement may not accurately reflect individual long-term exposure. Collection of multiple urine samples from one person is an option for reducing exposure classification errors in studies exploring the effects of DBP exposure on reproductive health. Highlights:
    • We evaluated the variability of DCAA and TCAA levels in the urine among men.
    • Urinary DCAA and TCAA levels varied greatly over a 3-month period.
    • A single measurement may not accurately reflect personal long-term exposure levels.
    • Collecting multiple samples from one person improved the exposure classification.
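
    The ICC contrasts between-person variance with within-person variance: values near 0, as found here, mean repeated samples from one person vary almost as much as samples from different people. A minimal one-way random-effects ICC sketch (illustrative only; the study's exact variance-components model is not reproduced):

```python
import numpy as np

def icc_oneway(measurements):
    """ICC(1,1) from an (n_subjects, k_repeats) array, via one-way ANOVA."""
    m = np.asarray(measurements, float)
    n, k = m.shape
    subject_means = m.mean(axis=1)
    # between- and within-person mean squares
    ms_between = k * ((subject_means - m.mean()) ** 2).sum() / (n - 1)
    ms_within = ((m - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

    With within-person variance dominant (ICC of 0.08–0.37 as reported), a single sample ranks individuals' long-term averages poorly, which is why multiple collections improve classification.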

  8. Women's personal and indoor exposures to PM 2.5 in Mysore, India: Impact of domestic fuel usage

    NASA Astrophysics Data System (ADS)

    Andresen, Penny Rechkemmer; Ramachandran, Gurumurthy; Pai, Pramod; Maynard, Andrew

    In traditional societies, women are more likely to be adversely affected by exposures to fine particulates from domestic fuel combustion due to their role in the family as the primary cooks. In this study, 24-h gravimetric personal and indoor PM 2.5 exposures were measured for 15 women using kerosene and another 15 women using liquefied petroleum gas (LPG) as their main cooking fuel in Mysore, India. The women also answered a detailed questionnaire regarding their residential housing characteristics, health status, cooking practices and socioeconomic status. Repeated measurements were obtained during two seasons. The main objective of this study was to determine whether exposures to PM 2.5 differed according to fuel usage patterns. A repeated-measures general linear model (GLM) was used to analyze the data. Women using kerosene as their primary cooking fuel had significantly higher exposures. During summer, the arithmetic mean (± standard error) personal exposure was 111±13 μg m-3 for kerosene users versus 71±15 μg m-3 for LPG users. Kerosene users had higher exposures in winter (177±21 μg m-3) than in summer. For LPG users, in contrast, there was no difference in seasonal geometric mean exposures, at 71±13 μg m-3. Indoor concentrations followed similar patterns. In summer, kerosene-using households had an arithmetic mean concentration of 98±9 μg m-3 and LPG-using households had an arithmetic mean concentration of 71±9 μg m-3. Winter concentrations were significantly higher than summer concentrations for kerosene users (155±13 μg m-3), whereas LPG-using households showed only slightly higher indoor concentrations in winter (73±6 μg m-3). Socioeconomic status, age, season and income were significant predictors of cooking fuel choice.

  9. Inducible DNA-repair systems in yeast: competition for lesions.

    PubMed

    Mitchel, R E; Morrison, D P

    1987-03-01

    DNA lesions may be recognized and repaired by more than one DNA-repair process. If two repair systems with different error frequencies have overlapping lesion specificity and one or both is inducible, the resulting variable competition for the lesions can change the biological consequences of these lesions. This concept was demonstrated by observing mutation in yeast cells (Saccharomyces cerevisiae) exposed to combinations of mutagens under conditions which influenced the induction of error-free recombinational repair or error-prone repair. Total mutation frequency was reduced in a manner proportional to the dose of 60Co gamma or 254 nm UV radiation delivered prior to or subsequent to an MNNG exposure. Suppression was greater per unit radiation dose in cells gamma-irradiated in O2 as compared to N2. A rad3 (excision-repair) mutant gave results similar to wild-type, but mutation in a rad52 (rec-) mutant exposed to MNNG was not suppressed by radiation. Protein-synthesis inhibition with heat shock or cycloheximide indicated that it was the mutation due to MNNG, and not that due to radiation, which had changed. These results indicate that MNNG lesions are recognized by both the recombinational repair system and the inducible error-prone system, but that gamma-radiation induction of error-free recombinational repair resulted in increased competition for the lesions, thereby reducing mutation. Similarly, gamma-radiation exposure resulted in a radiation dose-dependent reduction in mutation due to MNU, EMS, ENU and 8-MOP + UVA, but no reduction in mutation due to MMS. These results suggest that the number of mutational MMS lesions recognizable by the recombinational repair system must be very small relative to those produced by the other agents. MNNG induction of the inducible error-prone system, however, did not alter mutation frequencies due to ENU or MMS exposure but, in contrast to radiation, increased the mutagenic effectiveness of EMS.
These experiments demonstrate that in this lower eukaryote, mutagen exposure does not necessarily result in a fixed risk of mutation, but that the risk can be markedly influenced by a variety of external stimuli including heat shock or exposure to other mutagens.

  10. Quantifying light exposure patterns in young adult students

    NASA Astrophysics Data System (ADS)

    Alvarez, Amanda A.; Wildsoet, Christine F.

    2013-08-01

    Exposure to bright light appears to be protective against myopia in both animals (chicks, monkeys) and children, but quantitative data on human light exposure are limited. In this study, we report on a technique for quantifying light exposure using wearable sensors. Twenty-seven young adult subjects wore a light sensor continuously for two weeks during one of three seasons, and also completed questionnaires about their visual activities. Light data were analyzed with respect to refractive error and season, and the objective sensor data were compared with subjects' estimates of time spent indoors and outdoors. Subjects' estimates of time spent indoors and outdoors were in poor agreement with durations reported by the sensor data. The results of questionnaire-based studies of light exposure should thus be interpreted with caution. The role of light in refractive error development should be investigated using multiple methods such as sensors to complement questionnaires.

  11. Quantifying light exposure patterns in young adult students

    PubMed Central

    Alvarez, Amanda A.; Wildsoet, Christine F.

    2014-01-01

    Exposure to bright light appears to be protective against myopia in both animals (chicks, monkeys) and children, but quantitative data on human light exposure are limited. In this study, we report on a technique for quantifying light exposure using wearable sensors. Twenty-seven young adult subjects wore a light sensor continuously for two weeks during one of three seasons, and also completed questionnaires about their visual activities. Light data were analyzed with respect to refractive error and season, and the objective sensor data were compared with subjects’ estimates of time spent indoors and outdoors. Subjects’ estimates of time spent indoors and outdoors were in poor agreement with durations reported by the sensor data. The results of questionnaire-based studies of light exposure should thus be interpreted with caution. The role of light in refractive error development should be investigated using multiple methods such as sensors to complement questionnaires. PMID:25342873

  12. Assessment of human body influence on exposure measurements of electric field in indoor enclosures.

    PubMed

    de Miguel-Bilbao, Silvia; García, Jorge; Ramos, Victoria; Blas, Juan

    2015-02-01

    Personal exposure meters (PEMs) for measuring exposure to electromagnetic fields (EMF) are widely used in epidemiological studies. As is well known, these measurement devices cause a perturbation of real EMF exposure levels due to the presence of the human body in the immediate proximity. This paper aims to model the alteration caused by the body shadow effect (BSE) in motion conditions and in indoor enclosures at the Wi-Fi frequency of 2.4 GHz. For this purpose, simulation techniques based on ray-tracing have been carried out, and their results have been verified experimentally. Simulation and experimental results agree well, both in terms of electric field (E-field) levels and in terms of the cumulative distribution function (CDF) of the spatial amplitude distribution: the Kolmogorov-Smirnov (KS) test yields a P-value greater than 0.05, in fact close to 1. It has been found that the influence of the presence of the human body can be characterized as an angle of shadow that depends on the dimensions of the indoor enclosure. The CDFs show that the E-field levels in indoor conditions follow a lognormal distribution both in the absence of the human body and under the influence of BSE. In conclusion, the perturbation caused by BSE in PEM readings cannot be compensated for by correction factors. Although the mean value is well adjusted, BSE causes changes in the CDF that would require improvements in measurement protocols and in the design of measuring devices to avoid systematic errors. © 2014 Wiley Periodicals, Inc.
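
    The lognormality check described above can be sketched with a one-sample KS test against a fitted lognormal. The sample below is simulated (the distribution parameters are illustrative assumptions, not the paper's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# simulated spatial samples of E-field amplitude, lognormal by construction
efield = rng.lognormal(mean=0.0, sigma=0.5, size=500)

# fit a lognormal (location fixed at 0) and test goodness of fit
shape, loc, scale = stats.lognorm.fit(efield, floc=0)
stat, pvalue = stats.kstest(efield, "lognorm", args=(shape, loc, scale))
```

    Because the parameters are estimated from the same sample being tested, the KS p-value is optimistic; a Lilliefors-style correction would be stricter.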

  13. Correction of nonuniformity error of Gafchromic EBT2 and EBT3

    PubMed Central

    Gotanda, Rumi; Gotanda, Tatsuhiro; Akagawa, Takuya; Tanki, Nobuyoshi; Kuwano, Tadao; Yabunaka, Kouichi

    2016-01-01

    This study investigates an X‐ray dose measurement method for computed tomography using Gafchromic films. Nonuniformity of the active layer is a major problem in Gafchromic films. In radiotherapy, nonuniformity error is reduced by applying the double‐exposure technique, but this is impractical in diagnostic radiology because of the heel effect. Therefore, we propose replacing the X‐rays in the double‐exposure technique with ultraviolet (UV)‐A irradiation of Gafchromic EBT2 and EBT3. To improve the reproducibility of the scan position, Gafchromic EBT2 and EBT3 films were attached to a 3‐mm‐thick acrylic plate. The samples were then irradiated with a 10 W UV‐A fluorescent lamp placed at a distance of 72 cm for 30, 60, and 90 minutes. The profile curves were evaluated along the long and short axes of the film center, and the standard deviations of the pixel values were calculated over large areas of the films. Paired t‐test was performed. UV‐A irradiation exerted a significant effect on Gafchromic EBT2 (paired t‐test; p=0.0275) but not on EBT3 (paired t‐test; p=0.2785). Similarly, the homogeneity was improved in Gafchromic EBT2 but not in EBT3. Therefore, the double‐exposure technique under UV‐A irradiation is suitable only for EBT2 films. PACS number(s): 87.53 Bn PMID:27167258
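
    The double-exposure idea amounts to a per-pixel flat-field correction: the uniform UV-A exposure maps each pixel's relative sensitivity, which is then divided out of the measurement image. A minimal sketch (hypothetical arrays and function name, not the authors' processing pipeline):

```python
import numpy as np

def flat_field_correct(image, flat):
    """Divide out per-pixel sensitivity estimated from a uniformly exposed
    flat image (here, the UV-A exposure of the same film)."""
    flat = np.asarray(flat, float)
    gain = flat / flat.mean()             # relative response of each pixel
    return np.asarray(image, float) / gain
```

    If the film's nonuniformity is multiplicative and stable between the UV-A and X-ray exposures, the corrected image recovers the true dose pattern; the study's EBT3 result suggests this assumption does not always hold.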

  14. Non-precautionary aspects of toxicology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grandjean, Philippe

    2005-09-01

    Empirical studies in toxicology aim at deciphering complex causal relationships, especially in regard to human disease etiologies. Several scientific traditions limit the usefulness of documentation from current toxicological research, in regard to decision-making based on the precautionary principle. Among non-precautionary aspects of toxicology are the focus on simplified model systems and the effects of single hazards, one by one. Thus, less attention is paid to sources of variability and uncertainty, including individual susceptibility, impacts of mixed and variable exposures, susceptible life-stages, and vulnerable communities. In emphasizing the need for confirmatory evidence, toxicology tends to penalize false positives more than false negatives. An important source of uncertainty is measurement error that results in misclassification, especially in regard to exposure assessment. Standard statistical analysis assumes that the exposure is measured without error, and imprecisions will usually result in an underestimation of the dose-effect relationship. In testing whether an effect could be considered a possible result of natural variability, a 5% limit for 'statistical significance' is usually applied, even though it may rule out many findings of causal associations, simply because the study was too small (and thus lacked statistical power) or because some imprecision or limited sensitivity of the parameters precluded a more definitive observation. These limitations may be aggravated when toxicology is influenced by vested interests. Because current toxicology overlooks the important goal of achieving a better characterization of uncertainties and their implications, research approaches should be revised and strengthened to counteract the innate ideological biases, thereby supporting our confidence in using toxicology as a main source of documentation and in using the precautionary principle as a decision procedure in the public policy arena.
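
    The underestimation of dose-effect relationships under classical measurement error can be illustrated directly: with independent error, the regression slope shrinks by the reliability ratio lambda = var(x) / (var(x) + var(u)). A simulation sketch (all parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0
x = rng.normal(size=n)                    # true exposure
u = rng.normal(size=n)                    # classical (independent) error
w = x + u                                 # observed, error-prone exposure
y = beta * x + rng.normal(size=n)         # outcome depends on true exposure

lam = x.var() / (x.var() + u.var())       # reliability ratio (~0.5 here)
slope_true = np.polyfit(x, y, 1)[0]       # near the true slope of 2.0
slope_obs = np.polyfit(w, y, 1)[0]        # attenuated toward zero, ~lam * 2.0
```

    Halving the apparent slope also roughly quadruples the sample size needed for the same statistical power, which is one mechanism behind the power problems discussed above.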

  15. Laboratory issues: use of nutritional biomarkers.

    PubMed

    Blanck, Heidi Michels; Bowman, Barbara A; Cooper, Gerald R; Myers, Gary L; Miller, Dayton T

    2003-03-01

    Biomarkers of nutritional status provide alternative measures of dietary intake. Like the error and variation associated with dietary intake measures, the magnitude and impact of both biological (preanalytical) and laboratory (analytical) variability need to be considered when one is using biomarkers. When choosing a biomarker, it is important to understand how it relates to nutritional intake and the specific time frame of exposure it reflects as well as how it is affected by sampling and laboratory procedures. Biological sources of variation that arise from genetic and disease states of an individual affect biomarkers, but they are also affected by nonbiological sources of variation arising from specimen collection and storage, seasonality, time of day, contamination, stability and laboratory quality assurance. When choosing a laboratory for biomarker assessment, researchers should try to make sure random and systematic error is minimized by inclusion of certain techniques such as blinding of laboratory staff to disease status and including external pooled standards to which laboratory staff are blinded. In addition analytic quality control should be ensured by use of internal standards or certified materials over the entire range of possible values to control method accuracy. One must consider the effect of random laboratory error on measurement precision and also understand the method's limit of detection and the laboratory cutpoints. Choosing appropriate cutpoints and reducing error is extremely important in nutritional epidemiology where weak associations are frequent. As part of this review, serum lipids are included as an example of a biomarker whereby collaborative efforts have been put forth to both understand biological sources of variation and standardize laboratory results.

  16. The relevance of commuter and work/school exposure in an epidemiological study on traffic-related air pollution.

    PubMed

    Ragettli, Martina S; Phuleria, Harish C; Tsai, Ming-Yi; Schindler, Christian; de Nazelle, Audrey; Ducret-Stich, Regina E; Ineichen, Alex; Perez, Laura; Braun-Fahrländer, Charlotte; Probst-Hensch, Nicole; Künzli, Nino

    2015-01-01

    Exposure during transport and at non-residential locations is ignored in most epidemiological studies of traffic-related air pollution. We investigated the impact of separately estimating NO2 long-term outdoor exposures at home, work/school, and while commuting on the association between this marker of exposure and potential health outcomes. We used spatially and temporally resolved commuter route data and model-based NO2 estimates of a population sample in Basel, Switzerland, to assign individual NO2-exposure estimates of increasing complexity, namely (1) home outdoor concentration; (2) time-weighted home and work/school concentrations; and (3) time-weighted concentration incorporating home, work/school and commute. On the basis of their covariance structure, we estimated the expectable relative differences in the regression slopes between a quantitative health outcome and our measures of individual NO2 exposure using a standard measurement error model. The traditional use of home outdoor NO2 alone indicated a 12% (95% CI: 11-14%) underestimation of related health effects as compared with integrating both home and work/school outdoor concentrations. Mean contribution of commuting to total weekly exposure was small (3.2%; range 0.1-13.5%). Thus, ignoring commute in the total population may not significantly underestimate health effects as compared with the model combining home and work/school. For individuals commuting between Basel-City and Basel-Country, ignoring commute may produce, however, a significant attenuation bias of 4% (95% CI: 4-5%). Our results illustrate the importance of including work/school locations in assessments of long-term exposures to traffic-related air pollutants such as NO2. Information on individuals' commuting behavior may further improve exposure estimates, especially for subjects having lengthy commutes along major transportation routes.
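
    The exposure measures of increasing complexity described above are time-weighted averages of microenvironment concentrations. A minimal sketch (the function name, concentrations, and hours below are hypothetical, not the study's data):

```python
def time_weighted_no2(c_home, c_work, c_commute, h_home, h_work, h_commute):
    """Time-weighted weekly mean NO2 exposure (ug/m3) from microenvironment
    concentrations and the hours spent in each."""
    total_hours = h_home + h_work + h_commute
    return (c_home * h_home + c_work * h_work + c_commute * h_commute) / total_hours
```

    For example, 128 h/week at home (20 ug/m3), 38 h at work (35 ug/m3) and 2 h commuting (50 ug/m3) give a weekly mean of 23.75 ug/m3, illustrating how a home-only estimate can understate exposure when the workplace is more polluted.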

  17. Neurobehavioral effects of exposure to traffic-related air pollution and transportation noise in primary schoolchildren.

    PubMed

    van Kempen, Elise; Fischer, Paul; Janssen, Nicole; Houthuijs, Danny; van Kamp, Irene; Stansfeld, Stephen; Cassee, Flemming

    2012-05-01

    Children living close to roads are exposed to both traffic noise and traffic-related air pollution. There are indications that both exposures affect cognitive functioning. So far, the effects of the two exposures have only been investigated separately. We investigated the effects of air pollution and transportation noise on the cognitive performance of primary schoolchildren in both the home and school setting. Data acquired within RANCH from 553 children (aged 9-11 years) from 24 primary schools were analysed using multilevel modelling with adjustment for a range of socio-economic and life-style factors. Exposure to NO(2) (which, in urban areas, is an indicator of traffic-related air pollution) at school was statistically significantly associated with a decrease in the memory span length measured during the DMST (χ(2)=6.8, df=1, p=0.01). This association remained after additional adjustment for transportation noise. Statistically significant associations were observed between road and air traffic noise exposure at school and the number of errors made during the 'arrow' (χ(2)=7.5, df=1, p=0.006) and 'switch' (χ(2)=4.8, df=1, p=0.028) conditions of the SAT. These remained after adjustment for NO(2). No effects of air pollution exposure or transportation noise exposure at home were observed. Combined exposure to air pollution and road traffic noise had a significant effect on the reaction times measured during the SRTT and the 'block' and 'arrow' conditions of the SAT. Our results provide some support that prolonged exposure to traffic-related air pollution as well as to noise adversely affects cognitive functioning. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Spatiotemporal prediction of fine particulate matter using high resolution satellite images in the southeastern U.S 2003–2011

    PubMed Central

    Lee, Mihye; Kloog, Itai; Chudnovsky, Alexandra; Lyapustin, Alexei; Wang, Yujie; Melly, Steven; Coull, Brent; Koutrakis, Petros; Schwartz, Joel

    2016-01-01

    Numerous studies have demonstrated that fine particulate matter (PM2.5, particles smaller than 2.5 μm in aerodynamic diameter) is associated with adverse health outcomes. The use of ground monitoring stations of PM2.5 to assess personal exposure, however, induces measurement error. Land use regression provides spatially resolved predictions, but land use terms do not vary temporally. Meanwhile, the advent of satellite-retrieved aerosol optical depth (AOD) products has made it possible to predict the spatial and temporal patterns of PM2.5 exposures. In this paper, we used AOD data with other PM2.5 variables such as meteorological variables, land use regression, and spatial smoothing to predict daily concentrations of PM2.5 at a 1 km2 resolution for the southeastern United States, including the seven states of Georgia, North Carolina, South Carolina, Alabama, Tennessee, Mississippi, and Florida, for the years 2003 through 2011. We divided the study area into 3 regions and applied separate mixed-effect models to calibrate AOD using ground PM2.5 measurements and other spatiotemporal predictors. Using 10-fold cross-validation, we obtained out-of-sample R2 values of 0.77, 0.81, and 0.70, with square roots of the mean squared prediction errors (RMSPE) of 2.89, 2.51, and 2.82 μg/m3 for regions 1, 2, and 3, respectively. The slopes of the relationships between predicted PM2.5 and held-out measurements were approximately 1, indicating no bias between the observed and modeled PM2.5 concentrations. Predictions can be used in epidemiological studies investigating the effects of both acute and chronic exposures to PM2.5. Our model results will also extend the existing studies on PM2.5, which have mostly focused on urban areas due to the paucity of monitors in rural areas. PMID:26082149
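
    The out-of-sample R2 and RMSPE under 10-fold cross-validation can be computed as below. This sketch uses plain OLS on simulated data as a stand-in for the paper's mixed-effects calibration model; it illustrates only the validation bookkeeping:

```python
import numpy as np

def cv_r2_rmspe(X, y, k=10, seed=0):
    """k-fold out-of-sample R2 and RMSPE for a simple OLS calibration model."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    preds = np.empty_like(y, dtype=float)
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        A = np.column_stack([np.ones(len(train)), X[train]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        preds[fold] = np.column_stack([np.ones(len(fold)), X[fold]]) @ coef
    ss_res = ((y - preds) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot, np.sqrt(((y - preds) ** 2).mean())
```

    Holding out each fold in turn means every prediction is made by a model that never saw that observation, which is what makes the R2 "out of sample."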

  19. Spatiotemporal Prediction of Fine Particulate Matter Using High-Resolution Satellite Images in the Southeastern US 2003-2011

    NASA Technical Reports Server (NTRS)

    Lee, Mihye; Kloog, Itai; Chudnovsky, Alexandra; Lyapustin, Alexei; Wang, Yujie; Melly, Steven; Coull, Brent; Koutrakis, Petros; Schwartz, Joel

    2016-01-01

    Numerous studies have demonstrated that fine particulate matter (PM(sub 2.5), particles smaller than 2.5 micrometers in aerodynamic diameter) is associated with adverse health outcomes. The use of ground monitoring stations of PM(sub 2.5) to assess personal exposure, however, induces measurement error. Land-use regression provides spatially resolved predictions, but land-use terms do not vary temporally. Meanwhile, the advent of satellite-retrieved aerosol optical depth (AOD) products has made it possible to predict the spatial and temporal patterns of PM(sub 2.5) exposures. In this paper, we used AOD data with other PM(sub 2.5) variables, such as meteorological variables, land-use regression, and spatial smoothing, to predict daily concentrations of PM(sub 2.5) at a 1 sq km resolution for the Southeastern United States, including the seven states of Georgia, North Carolina, South Carolina, Alabama, Tennessee, Mississippi, and Florida, for the years 2003 to 2011. We divided the study area into three regions and applied separate mixed-effect models to calibrate AOD using ground PM(sub 2.5) measurements and other spatiotemporal predictors. Using 10-fold cross-validation, we obtained out-of-sample R2 values of 0.77, 0.81, and 0.70, with square roots of the mean squared prediction errors of 2.89, 2.51, and 2.82 micrograms per cubic meter for regions 1, 2, and 3, respectively. The slopes of the relationships between predicted PM(sub 2.5) and held-out measurements were approximately 1, indicating no bias between the observed and modeled PM(sub 2.5) concentrations. Predictions can be used in epidemiological studies investigating the effects of both acute and chronic exposures to PM(sub 2.5). Our model results will also extend the existing studies on PM(sub 2.5), which have mostly focused on urban areas because of the paucity of monitors in rural areas.

  20. Do socioeconomic characteristics modify the short term association between air pollution and mortality? Evidence from a zonal time series in Hamilton, Canada

    PubMed Central

    Jerrett, M; Burnett, R; Brook, J; Kanaroglou, P; Giovis, C; Finkelstein, N; Hutchison, B

    2004-01-01

    Study objective: To assess the short term association between air pollution and mortality in different zones of an industrial city. An intra-urban study design is used to test the hypothesis that socioeconomic characteristics modify the acute health effects of ambient air pollution exposure. Design: The City of Hamilton, Canada, was divided into five zones based on proximity to fixed site air pollution monitors. Within each zone, daily counts of non-trauma mortality and air pollution estimates were combined. Generalised linear models (GLMs) were used to test mortality associations with sulphur dioxide (SO2) and with particulate air pollution measured by the coefficient of haze (CoH). Main results: Increased mortality was associated with air pollution exposure in a citywide model and in intra-urban zones with lower socioeconomic characteristics. Low educational attainment and high manufacturing employment in the zones significantly and positively modified the acute mortality effects of air pollution exposure. Discussion: Three possible explanations are proposed for the observed effect modification by education and manufacturing: (1) those in manufacturing receive higher workplace exposures that combine with ambient exposures to produce larger health effects; (2) persons with lower education are less mobile and experience less exposure measurement error, which reduces bias toward the null; or (3) manufacturing and education proxy for many social variables representing material deprivation, and poor material conditions increase susceptibility to health risks from air pollution. PMID:14684724
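
    The generalised linear models used for daily mortality counts are typically Poisson log-linear regressions, which can be fit by iteratively reweighted least squares (IRLS). A minimal numpy-only sketch on simulated data (the study's actual covariates, smoothing terms, and zonal structure are omitted):

```python
import numpy as np

def poisson_irls(x, y, n_iter=25):
    """Fit a Poisson log-linear GLM, log E[y] = b0 + b1*x, by IRLS."""
    X = np.column_stack([np.ones(len(y)), np.asarray(x, float)])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)             # fitted means
        z = X @ beta + (y - mu) / mu      # working response
        XtW = X.T * mu                    # Poisson weights: Var(y) = mu
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta
```

    The exponentiated slope, exp(b1), is then interpreted as the relative change in daily mortality per unit increase in the pollution metric.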

  1. RoboDIMM | CTIO

    Science.gov Websites

    Accuracy of 3.8% in the image size. Exposure time: the error caused by the finite exposure time is

  2. Air Pollution Exposure Modeling for Epidemiology Studies and Public Health

    EPA Science Inventory

    Air pollution epidemiology studies of ambient fine particulate matter (PM2.5) often use outdoor concentrations as exposure surrogates. These surrogates can induce exposure error since they do not account for (1) time spent indoors with ambient PM2.5 levels attenuated from outdoor...

  3. Multiple Velocity Profile Measurements in Hypersonic Flows Using Sequentially-Imaged Fluorescence Tagging

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Danehy, Paul M.; Inman, Jennifer A.; Jones, Stephen B.; Ivey, Christopher B.; Goyne, Christopher P.

    2010-01-01

    Nitric-oxide planar laser-induced fluorescence (NO PLIF) was used to perform velocity measurements in hypersonic flows by generating multiple tagged lines which fluoresce as they convect downstream. For each laser pulse, a single interline, progressive scan intensified CCD (charge-coupled device) camera was used to obtain two sequential images of the NO molecules that had been tagged by the laser. The CCD configuration allowed for sub-microsecond acquisition of both images, resulting in sub-microsecond temporal resolution as well as sub-mm spatial resolution (0.5-mm horizontal, 0.7-mm vertical). Determination of axial velocity was made by application of a cross-correlation analysis of the horizontal shift of individual tagged lines. A numerical study of measured velocity error due to a uniform and linearly-varying collisional rate distribution was performed. Quantification of systematic errors, the contribution of gating/exposure duration errors, and the influence of collision rate on temporal uncertainty were made. Quantification of the spatial uncertainty depended upon the signal-to-noise ratio of the acquired profiles. This velocity measurement technique has been demonstrated for two hypersonic flow experiments: (1) a reaction control system (RCS) jet on an Orion Crew Exploration Vehicle (CEV) wind tunnel model and (2) a 10-degree half-angle wedge containing a 2-mm tall, 4-mm wide cylindrical boundary layer trip. The experiments were performed at the NASA Langley Research Center's 31-Inch Mach 10 Air Tunnel.
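
    The cross-correlation step for recovering the horizontal shift of a tagged line, and hence the axial velocity, can be sketched as follows (synthetic Gaussian profiles; the pixel pitch and interframe time in the example are illustrative assumptions, not the experiment's values):

```python
import numpy as np

def line_shift_pixels(profile_t0, profile_t1):
    """Horizontal shift (pixels) of a tagged-line intensity profile between
    two sequential images, from the peak of their cross-correlation."""
    a = profile_t0 - profile_t0.mean()
    b = profile_t1 - profile_t1.mean()
    corr = np.correlate(b, a, mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)

def axial_velocity(shift_px, pixel_pitch_m, dt_s):
    """Convert the pixel shift to velocity given pixel pitch and frame delay."""
    return shift_px * pixel_pitch_m / dt_s
```

    Sub-pixel accuracy, as needed in practice, is usually obtained by interpolating around the correlation peak rather than taking the integer argmax used here.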

  4. Exploration of the rapid effects of personal fine particulate matter exposure on arterial hemodynamics and vascular function during the same day.

    PubMed

    Brook, Robert D; Shin, Hwashin H; Bard, Robert L; Burnett, Richard T; Vette, Alan; Croghan, Carry; Thornburg, Jonathan; Rodes, Charles; Williams, Ron

    2011-05-01

    Levels of fine particulate matter [≤ 2.5 μm in aerodynamic diameter (PM(2.5))] are associated with alterations in arterial hemodynamics and vascular function. However, the characteristics of the same-day exposure-response relationships remain unclear. We aimed to explore the effects of personal PM(2.5) exposures within the preceding 24 hr on blood pressure (BP), heart rate (HR), brachial artery diameter (BAD), endothelial function [flow-mediated dilatation (FMD)], and nitroglycerin-mediated dilatation (NMD). Fifty-one nonsmoking subjects had up to 5 consecutive days of 24-hr personal PM(2.5) monitoring and daily cardiovascular (CV) measurements during summer and/or winter periods. The associations between integrated hour-long total personal PM(2.5) exposure (TPE) levels (continuous nephelometry among compliant subjects with low secondhand tobacco smoke exposures; n = 30) with the CV outcomes were assessed over a 24-hr period by linear mixed models. We observed the strongest associations (and smallest estimation errors) between HR and TPE recorded 1-10 hr before CV measurements. The associations were not pronounced for the other time lags (11-24 hr). The associations between TPE and FMD or BAD did not show as clear a temporal pattern. However, we found some suggestion of a negative association with FMD and a positive association with BAD related to TPE just before measurement (0-2 hr). Brief elevations in ambient TPE levels encountered during routine daily activity were associated with small increases in HR and trends toward conduit arterial vasodilatation and endothelial dysfunction within a few hours of exposure. These responses could reflect acute PM(2.5)-induced autonomic imbalance and may factor in the associated rapid increase in CV risk among susceptible individuals.

  5. The remote sensing of algae

    NASA Technical Reports Server (NTRS)

    Thorne, J. F.

    1977-01-01

    State agencies need rapid, synoptic and inexpensive methods for lake assessment to comply with the 1972 Amendments to the Federal Water Pollution Control Act. Low altitude aerial photography may be useful in providing information on algal type and quantity. Photography must be calibrated properly to remove sources of error including airlight, surface reflectance and scene-to-scene illumination differences. A 550-nm narrow wavelength band black and white photographic exposure provided a better correlation to algal biomass than either red or infrared photographic exposure. Of all the biomass parameters tested, depth-integrated chlorophyll a concentration correlated best to remote sensing data. Laboratory-measured reflectance of selected algae indicates that different taxonomic classes of algae may be discriminated on the basis of their reflectance spectra.

  6. Automatic brightness control of laser spot vision inspection system

    NASA Astrophysics Data System (ADS)

    Han, Yang; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin

    2009-10-01

    The laser spot detection system aims to locate the center of the laser spot after long-distance transmission. The accuracy of positioning the laser spot center depends heavily on the system's ability to control brightness. In this paper, a high-performance automatic brightness control system is designed using an FPGA. Brightness is controlled by a combination of auto aperture (video driver) and an adaptive exposure algorithm, and clear, properly exposed images are obtained under different illumination conditions. The automatic brightness control creates favorable conditions for the subsequent positioning of the laser spot center, and experimental results demonstrate that the measurement accuracy of the system is effectively guaranteed: the average error of the spot center is within 0.5 mm.
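The paper does not give the adaptive exposure algorithm itself, but the general idea can be sketched as a feedback loop that steps the exposure toward a target mean gray level. This is a hypothetical illustration: the function names, gains, and the linear-with-saturation sensor model are assumptions, not the authors' FPGA implementation.

```python
def measured_gray(exposure, luminance):
    # Toy sensor model: mean gray level is linear in exposure, saturating at 255.
    return min(255.0, exposure * luminance)

def adjust_exposure(exposure, gray, target=128.0, gain=0.8,
                    e_min=0.01, e_max=50.0):
    # Multiplicative proportional step toward the target mean gray level,
    # clamped to the hardware's exposure range.
    if gray <= 0.0:
        return min(e_max, exposure * 2.0)  # scene too dark: open up quickly
    new_e = exposure * (1.0 + gain * (target / gray - 1.0))
    return max(e_min, min(e_max, new_e))

def run_aec(luminance, exposure=1.0, steps=20):
    # Iterate the control loop on a static scene until the exposure settles.
    for _ in range(steps):
        exposure = adjust_exposure(exposure, measured_gray(exposure, luminance))
    return exposure, measured_gray(exposure, luminance)
```

On a static scene the loop converges to the target gray level for both dim and very bright scenes, which is the behavior the spot-centroiding stage needs.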

  7. Serum vitamin D levels are not altered after controlled diesel ...

    EPA Pesticide Factsheets

    Past research has suggested that exposure to urban air pollution may be associated with vitamin D deficiency in human populations. Vitamin D is widely known for its importance in bone growth/remodeling, muscle metabolism, and its ability to promote calcium absorption in the gut; deficiency in vitamin D results in the development of rickets in children and osteomalacia in adults. In the current study, we assessed whether vitamin D levels are altered under controlled exposures to a commonly measured urban air pollutant, diesel. For this study, we exposed 12 healthy volunteers to clean air and diesel exhaust (300 μg/m3) for 2 hours while undergoing intermittent exercise. Venous blood was collected before, 0 hrs post-, and 18 hrs post-exposure, and 25-hydroxyvitamin D [25(OH)D] was measured in the serum. The average baseline value of 25(OH)D (mean ± standard error) was 22.9 ± 2.5 ng/mL. Four subjects' baseline values were vitamin D deficient (30 ng/mL). Additionally, there was no significant change in the baseline values between the clean air and diesel exposures (paired t-test, p = 0.54), suggesting minimal variability in 25(OH)D over the experiment's time course. Small increases in 25(OH)D were found following clean air exposures (12.5 ± 4.9% and 7.1 ± 5.0% for the 0 hrs post- and 18 hrs post-exposure values compared to baseline, respectively). Minimal changes in 25(OH)D were observed following diesel exhaust exposures 0 hrs (3.5 ± 5.2%) and 18 hrs followin

  8. Effects of incomplete residential histories on studies of environmental exposure with application to childhood leukaemia and background radiation.

    PubMed

    Nikkilä, Atte; Kendall, Gerald; Raitanen, Jani; Spycher, Ben; Lohi, Olli; Auvinen, Anssi

    2018-06-22

    When evaluating environmental exposures, residential exposures are often most relevant. In most countries, it is impossible to establish full residential histories. In recent publications, childhood leukaemia and background radiation have been studied with and without full residential histories. This paper investigates the consequences of lacking such full data. Data from a nationwide Finnish Case-Control study of Childhood Leukaemia and gamma rays were analysed. This included 1093 children diagnosed with leukaemia in Finland in 1990-2011. Each case was matched by gender and year of birth to three controls. Full residential histories were available. The dose estimates were based on outdoor background radiation measurements. The indoor dose rates were obtained with a dwelling type specific conversion coefficient and the individual time-weighted mean red bone marrow dose rates were calculated using age-specific indoor occupancy and the age and gender of the child. Radiation from Chernobyl fallout was included and a 2-year latency period assumed. The median separation between successive dwellings was 3.4 km and median difference in red bone marrow dose 2.9 nSv/h. The Pearson correlation between the indoor red bone marrow dose rates of successive dwellings was 0.62 (95% CI 0.60, 0.64). The odds ratio for a 10 nSv/h increase in dose rate with full residential histories was 1.01 (95% CI 0.97, 1.05). Similar odds ratios were calculated with dose rates based on only the first dwelling (1.02, 95% CI 0.99, 1.05) and only the last dwelling (1.00, 95% CI 0.98, 1.03) and for subjects who had lived only in a single dwelling (1.05, 95% CI 0.98, 1.10). Knowledge of full residential histories would always be the option of choice. However, due to the strong correlation between exposure estimates in successive dwellings and the uncertainty about the most relevant exposure period, estimation of overall exposure level from a single address is also informative. 
Error in dose estimation is likely to cause some degree of classical measurement error resulting in bias towards the null. Copyright © 2018 Elsevier Inc. All rights reserved.
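The time-weighted dose-rate construction described above can be illustrated with a minimal sketch. The data layout, occupancy factor, and function names below are assumptions for illustration; the study's actual dwelling-type conversion coefficients and age-specific occupancy values are not reproduced here.

```python
def time_weighted_dose_rate(periods):
    """Occupancy-time-weighted mean indoor dose rate.

    periods: list of (days_at_dwelling, indoor_dose_rate_nSv_per_h) tuples,
    one per dwelling in the residential history.
    """
    total_days = sum(days for days, _ in periods)
    return sum(days * rate for days, rate in periods) / total_days

def cumulative_dose_nSv(periods, indoor_occupancy=0.8):
    # Cumulative dose = rate x occupancy-weighted hours, summed over dwellings.
    return sum(days * 24.0 * indoor_occupancy * rate for days, rate in periods)
```

A child who spent a year each at 10 and 20 nSv/h would get a weighted mean rate of 15 nSv/h; dropping all but the first dwelling would bias this to 10 nSv/h, which is the kind of classical measurement error the abstract discusses.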

  9. Preparing Nursing Home Data from Multiple Sites for Clinical Research – A Case Study Using Observational Health Data Sciences and Informatics

    PubMed Central

    Boyce, Richard D.; Handler, Steven M.; Karp, Jordan F.; Perera, Subashan; Reynolds, Charles F.

    2016-01-01

    Introduction: A potential barrier to nursing home research is the limited availability of research quality data in electronic form. We describe a case study of converting electronic health data from five skilled nursing facilities to a research quality longitudinal dataset by means of open-source tools produced by the Observational Health Data Sciences and Informatics (OHDSI) collaborative. Methods: The Long-Term Care Minimum Data Set (MDS), drug dispensing, and fall incident data from five SNFs were extracted, translated, and loaded into version 4 of the OHDSI common data model. Quality assurance involved identifying errors using the Achilles data characterization tool and comparing both quality measures and drug exposures in the new database for concordance with externally available sources. Findings: Records for a total 4,519 patients (95.1%) made it into the final database. Achilles identified 10 different types of errors that were addressed in the final dataset. Drug exposures based on dispensing were generally accurate when compared with medication administration data from the pharmacy services provider. Quality measures were generally concordant between the new database and Nursing Home Compare for measures with a prevalence ≥ 10%. Fall data recorded in MDS was found to be more complete than data from fall incident reports. Conclusions: The new dataset is ready to support observational research on topics of clinical importance in the nursing home including patient-level prediction of falls. The extraction, translation, and loading process enabled the use of OHDSI data characterization tools that improved the quality of the final dataset. PMID:27891528

  10. Corrigendum to ‘Evidence for shock heating and constraints on Martian surface temperatures revealed by 40Ar/39Ar thermochronometry of Martian meteorites’ [Geochim. Cosmochim. Acta (2010) 6900–6920]

    DOE PAGES

    Cassata, William S.; Shuster, David L.; Renne, Paul R.; ...

    2014-10-23

    Here, the authors regret they have discovered errors in Eq. (3) and in a spreadsheet used to calculate cosmogenic exposure ages shown in Table 1. Eq. (3) is missing a term. The spreadsheet errors concerned an incorrect cell reference and application of Eq. (3). Correction of these errors results in ~15–20% changes to the exposure ages of all samples, minor (generally <0.2%) changes to the radioisotopic ages of some samples (those that entailed a correction for chlorine-derived 38Ar calculated based on the exposure age; see Section 3.3), and statistically insignificant changes to the inferred trapped components identified through isochron analyses. These modifications have no impact on the modeling, discussions, or conclusions in the paper, nor do the changes to radioisotopic ages exceed the 1 sigma uncertainties.

  11. Cognitive functions and cerebral oxygenation changes during acute and prolonged hypoxic exposure.

    PubMed

    Davranche, Karen; Casini, Laurence; Arnal, Pierrick J; Rupp, Thomas; Perrey, Stéphane; Verges, Samuel

    2016-10-01

    The present study aimed to assess specific cognitive processes (cognitive control and time perception) and hemodynamic correlates using functional near-infrared spectroscopy (fNIRS) during acute and prolonged high-altitude exposure. Eleven male subjects were transported via helicopter and dropped at 14 272 ft (4 350 meters) of altitude where they stayed for 4 days. Cognitive tasks, involving a conflict task and temporal bisection task, were performed at sea level the week before ascending to high altitude, the day of arrival (D0), the second (D2) and fourth (D4) day at high altitude. Cortical hemodynamic changes in the prefrontal cortex (PFC) area were monitored with fNIRS at rest and during the conflict task. Results showed that high altitude impacts information processing in terms of speed and accuracy. In the early hours of exposure (D0), participants displayed slower reaction times (RT) and decision errors were twice as high. While error rate for simple spontaneous responses remained twice that at sea level, the slow-down of RT was not detectable after 2 days at high-altitude. The larger fNIRS responses from D0 to D2 suggest that higher prefrontal activity partially counteracted cognitive performance decrements. Cognitive control, assessed through the build-up of a top-down response suppression mechanism, the early automatic response activation and the post-error adjustment were not impacted by hypoxia. However, during prolonged hypoxic exposure the temporal judgments were underestimated suggesting a slowdown of the internal clock. A decrease in cortical arousal level induced by hypoxia could consistently explain both the slowdown of the internal clock and the persistence of a higher number of errors after several days of exposure. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Probabilistic estimation of residential air exchange rates for ...

    EPA Pesticide Factsheets

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkley National Laboratory Infiltration model utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX) inputting study-specific data. The impact on the modeled AER of using publically available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AER they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions reflecting within- and between-city differences, helping reduce error in estimates of air pollutant exposure. Published in the Journal of
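As a rough illustration of the kind of infiltration calculation involved, the sketch below uses the basic leakage-area form of the LBNL/ASHRAE infiltration model, Q = A_L · sqrt(Cs·ΔT + Cw·U²). The coefficient values are representative one-story figures and are assumptions here; the actual algorithm's housing-specific inputs and window-opening adjustment are not reproduced.

```python
from math import sqrt

# Representative one-story coefficients (assumed illustrative values):
CS = 0.000145   # stack coefficient, (L/s)^2 / (cm^4 * K)
CW = 0.000104   # wind coefficient, (L/s)^2 / (cm^4 * (m/s)^2)

def air_exchange_rate(a_leak_cm2, volume_m3, delta_t_k, wind_m_s):
    """Air exchange rate (1/h) from effective leakage area.

    Infiltration flow: Q [L/s] = A_L [cm^2] * sqrt(Cs*|dT| + Cw*U^2).
    """
    q_l_per_s = a_leak_cm2 * sqrt(CS * abs(delta_t_k) + CW * wind_m_s ** 2)
    q_m3_per_h = q_l_per_s * 3.6           # convert L/s to m^3/h
    return q_m3_per_h / volume_m3
```

For a 300 m³ house with 500 cm² of leakage area, a 20 K indoor-outdoor temperature difference, and 4 m/s wind, this gives an AER of roughly 0.4/h, within the range typically reported for US homes.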

  13. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    PubMed

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
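The simulation design described above can be reproduced in miniature: generate a confounder, an exposure correlated with it, and an outcome that depends only on the confounder, then test the exposure coefficient after categorizing the confounder. This is a minimal sketch, not the authors' 9600-scenario study; the sample size, correlation, replicate count, and the normal approximation to the t-test are all assumptions.

```python
import numpy as np
from math import erfc, sqrt

def type1_rate(n=500, reps=300, rho=0.9, n_cats=2, seed=1):
    """Fraction of replicates rejecting H0: no exposure effect (alpha = 0.05)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        z = rng.standard_normal(n)                               # true confounder
        x = rho * z + sqrt(1 - rho ** 2) * rng.standard_normal(n)  # exposure
        y = z + rng.standard_normal(n)              # outcome depends on z only
        # Categorize the confounder into equal-frequency groups (dummy coding).
        cuts = np.quantile(z, np.arange(1, n_cats) / n_cats)
        cats = np.searchsorted(cuts, z)
        dummies = np.eye(n_cats)[cats][:, 1:]       # drop reference category
        X = np.column_stack([np.ones(n), x, dummies])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t = beta[1] / sqrt(cov[1, 1])
        p = erfc(abs(t) / sqrt(2))                  # two-sided, normal approx
        hits += p < 0.05
    return hits / reps
```

With a dichotomized confounder and strong confounding, the rejection rate is far above the nominal 5%, while finer categorization brings it back toward the nominal level, mirroring the pattern the paper reports.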

  14. TU-CD-207-11: Patient-Driven Automatic Exposure Control for Dedicated Breast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, A; Gazi, P; Department of Radiology, UC Davis Medical Center, Sacramento, CA

    Purpose: To implement automatic exposure control (AEC) in dedicated breast CT (bCT) on a patient-specific basis using only the pre-scan scout views. Methods: Using a large cohort (N=153) of bCT data sets, the breast effective diameter (D) and width in orthogonal planes (Wa,Wb) were calculated from the reconstructed bCT image and pre-scan scout views, respectively. D, Wa, and Wb were measured at the breast center-of-mass (COM), making use of the known geometry of our bCT system. These data were then fit to a second-order polynomial “D=F(Wa,Wb)” in a least squares sense in order to provide a functional form for determining the breast diameter. The coefficient of determination (R²) and mean percent error between the measured breast diameter and fit breast diameter were used to evaluate the overall robustness of the polynomial fit. Lastly, previously-reported bCT technique factors derived from Monte Carlo simulations were used to determine the tube current required for each breast diameter in order to match two-view mammographic dose levels. Results: F(Wa,Wb) provided fitted breast diameters in agreement with the measured breast diameters resulting in R² values ranging from 0.908 to 0.929 and mean percent errors ranging from 3.2% to 3.7%. For all 153 bCT data sets used in this study, the fitted breast diameters ranged from 7.9 cm to 15.7 cm corresponding to tube current values ranging from 0.6 mA to 4.9 mA in order to deliver the same dose as two-view mammography in a 50% glandular breast with an 80 kV x-ray beam and 16.6 second scan time. Conclusion: The present work provides a robust framework for AEC in dedicated bCT using only the width measurements derived from the two orthogonal pre-scan scout views. Future work will investigate how these automatically chosen exposure levels affect the quality of the reconstructed image.
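The least-squares step, fitting a second-order polynomial D = F(Wa, Wb), can be sketched as follows. The data here are synthetic; the real scout-view widths and the fitted coefficients from the 153-patient cohort are not available, so this only shows the mechanics of the fit.

```python
import numpy as np

def design(wa, wb):
    # Full second-order polynomial basis in the two scout-view widths.
    return np.column_stack([np.ones_like(wa), wa, wb, wa**2, wb**2, wa * wb])

def fit_diameter(wa, wb, d):
    # Least-squares fit of D = F(Wa, Wb); returns the 6 polynomial coefficients.
    coef, *_ = np.linalg.lstsq(design(wa, wb), d, rcond=None)
    return coef

def predict(coef, wa, wb):
    return design(wa, wb) @ coef
```

Given measured (Wa, Wb, D) triples, `fit_diameter` returns the coefficients, and `predict` then maps any new pair of scout-view widths to an effective diameter that the technique-factor lookup can consume.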

  15. Predicting Forearm Physical Exposures During Computer Work Using Self-Reports, Software-Recorded Computer Usage Patterns, and Anthropometric and Workstation Measurements.

    PubMed

    Huysmans, Maaike A; Eijckelhof, Belinda H W; Garza, Jennifer L Bruno; Coenen, Pieter; Blatter, Birgitte M; Johnson, Peter W; van Dieën, Jaap H; van der Beek, Allard J; Dennerlein, Jack T

    2017-12-15

    Alternative techniques to assess physical exposures, such as prediction models, could facilitate more efficient epidemiological assessments in future large cohort studies examining physical exposures in relation to work-related musculoskeletal symptoms. The aim of this study was to evaluate two types of models that predict arm-wrist-hand physical exposures (i.e. muscle activity, wrist postures and kinematics, and keyboard and mouse forces) during computer use, which only differed with respect to the candidate predicting variables: (i) a full set of predicting variables, including self-reported factors, software-recorded computer usage patterns, and worksite measurements of anthropometrics and workstation set-up (full models); and (ii) a practical set of predicting variables, only including the self-reported factors and software-recorded computer usage patterns, that are relatively easy to assess (practical models). Prediction models were built using data from a field study among 117 office workers who were symptom-free at the time of measurement. Arm-wrist-hand physical exposures were measured for approximately two hours while workers performed their own computer work. Each worker's anthropometry and workstation set-up were measured by an experimenter, computer usage patterns were recorded using software and self-reported factors (including individual factors, job characteristics, computer work behaviours, psychosocial factors, workstation set-up characteristics, and leisure-time activities) were collected by an online questionnaire. We determined the predictive quality of the models in terms of R2 and root mean squared (RMS) values and exposure classification agreement to low-, medium-, and high-exposure categories (in the practical model only). The full models had R2 values that ranged from 0.16 to 0.80, whereas for the practical models values ranged from 0.05 to 0.43.
Interquartile ranges were not that different for the two models, indicating that only for some physical exposures the full models performed better. Relative RMS errors ranged between 5% and 19% for the full models, and between 10% and 19% for the practical model. When the predicted physical exposures were classified into low, medium, and high, classification agreement ranged from 26% to 71%. The full prediction models, based on self-reported factors, software-recorded computer usage patterns, and additional measurements of anthropometrics and workstation set-up, show a better predictive quality as compared to the practical models based on self-reported factors and recorded computer usage patterns only. However, predictive quality varied largely across different arm-wrist-hand exposure parameters. Future exploration of the relation between predicted physical exposure and symptoms is therefore only recommended for physical exposures that can be reasonably well predicted. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
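The evaluation metrics used in the record above (R², relative RMS error, and low/medium/high classification agreement) can be computed as in this sketch. Tertile cut-points for the three exposure categories are an assumption; the study's exact category definitions are not given.

```python
import numpy as np

def r_squared(y, yhat):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def relative_rms_error(y, yhat):
    # RMS prediction error as a percentage of the mean observed exposure.
    return 100.0 * np.sqrt(np.mean((y - yhat) ** 2)) / np.mean(y)

def tertile_agreement(y, yhat):
    # Percent of workers classified into the same low/medium/high tertile.
    def tertiles(v):
        cuts = np.quantile(v, [1 / 3, 2 / 3])
        return np.searchsorted(cuts, v)
    return 100.0 * np.mean(tertiles(y) == tertiles(yhat))
```

Applied to measured and predicted exposure vectors, these three functions reproduce the kinds of figures quoted in the abstract (R² of 0.05-0.80, relative RMS errors of 5-19%, agreement of 26-71%).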

  16. A nonlinear lag correction algorithm for a-Si flat-panel x-ray detectors

    PubMed Central

    Starman, Jared; Star-Lack, Josh; Virshup, Gary; Shapiro, Edward; Fahrig, Rebecca

    2012-01-01

    Purpose: Detector lag, or residual signal, in a-Si flat-panel (FP) detectors can cause significant shading artifacts in cone-beam computed tomography reconstructions. To date, most correction models have assumed a linear, time-invariant (LTI) model and correct lag by deconvolution with an impulse response function (IRF). However, the lag correction is sensitive to both the exposure intensity and the technique used for determining the IRF. Even when the LTI correction that produces the minimum error is found, residual artifact remains. A new non-LTI method was developed to take into account the IRF measurement technique and exposure dependencies. Methods: First, a multiexponential (N = 4) LTI model was implemented for lag correction. Next, a non-LTI lag correction, known as the nonlinear consistent stored charge (NLCSC) method, was developed based on the LTI multiexponential method. It differs from other nonlinear lag correction algorithms in that it maintains a consistent estimate of the amount of charge stored in the FP and it does not require intimate knowledge of the semiconductor parameters specific to the FP. For the NLCSC method, all coefficients of the IRF are functions of exposure intensity. Another nonlinear lag correction method that only used an intensity weighting of the IRF was also compared. The correction algorithms were applied to step-response projection data and CT acquisitions of a large pelvic phantom and an acrylic head phantom. The authors collected rising and falling edge step-response data on a Varian 4030CB a-Si FP detector operating in dynamic gain mode at 15 fps at nine incident exposures (2.0%–92% of the detector saturation exposure). For projection data, 1st and 50th frame lag were measured before and after correction. For the CT reconstructions, five pairs of ROIs were defined and the maximum and mean signal differences within a pair were calculated for the different exposures and step-response edge techniques. 
Results: The LTI corrections left residual 1st and 50th frame lag up to 1.4% and 0.48%, while the NLCSC lag correction reduced 1st and 50th frame residual lags to less than 0.29% and 0.0052%. For CT reconstructions, the NLCSC lag correction gave an average error of 11 HU for the pelvic phantom and 3 HU for the head phantom, compared to 14–19 HU and 2–11 HU for the LTI corrections and 15 HU and 9 HU for the intensity weighted non-LTI algorithm. The maximum ROI error was always smallest for the NLCSC correction. The NLCSC correction was also superior to the intensity weighting algorithm. Conclusions: The NLCSC lag algorithm corrected for the exposure dependence of lag, provided superior image improvement for the pelvic phantom reconstruction, and gave similar results to the best case LTI results for the head phantom. The blurred ring artifact that is left over in the LTI corrections was better removed by the NLCSC correction in all cases. PMID:23039642
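A single-exponential toy version of the stored-charge bookkeeping can illustrate the idea behind the NLCSC approach: track an estimate of trapped charge and subtract its release from each frame. The actual correction uses a four-exponential, exposure-dependent IRF; the model form and parameters below are assumptions for illustration only.

```python
import numpy as np

def add_lag(x, b=0.1, a=0.8):
    """Forward model: a fraction b of each frame's signal is trapped and
    released over later frames with per-frame decay factor a."""
    m = np.empty_like(x)
    s = 0.0                       # charge stored from earlier frames
    for n, xn in enumerate(x):
        m[n] = (1.0 - b) * xn + (1.0 - a) * s   # prompt signal + released charge
        s = a * s + b * xn                      # update the stored charge
    return m

def correct_lag(m, b=0.1, a=0.8):
    """Invert add_lag frame by frame, keeping a consistent stored-charge
    estimate (the single-exponential analogue of the NLCSC bookkeeping)."""
    x = np.empty_like(m)
    s = 0.0
    for n, mn in enumerate(m):
        x[n] = (mn - (1.0 - a) * s) / (1.0 - b)  # remove released charge
        s = a * s + b * x[n]                     # re-accumulate stored charge
    return x
```

Running a step input through `add_lag` leaves residual signal after the falling edge (the "1st frame lag" of the paper), and `correct_lag` recovers the original frames because it tracks the same stored-charge state as the forward model.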

  17. Is GPS telemetry location error screening beneficial?

    USGS Publications Warehouse

    Ironside, Kirsten E.; Mattson, David J.; Arundel, Terry; Hansen, Jered R.

    2017-01-01

    The accuracy of global positioning system (GPS) locations obtained from study animals tagged with GPS monitoring devices has been a concern as to the degree it influences assessments of movement patterns, space use, and resource selection estimates. Many methods have been proposed for screening data to retain the most accurate positions for analysis, based on dilution of precision (DOP) measures, and whether the position is a two-dimensional or three-dimensional fix. Here we further explore the utility of these measures, by testing a Telonics GEN3 GPS collar's positional accuracy across a wide range of environmental conditions. We found the relationship between location error and fix dimension and DOP metrics extremely weak (r²adj ≈ 0.01) in our study area. Environmental factors such as topographic exposure, canopy cover, and vegetation height explained more of the variance (r²adj = 15.08%). Our field testing covered sites where sky-view was so limited it affected GPS performance to the degree that fix attempts failed frequently (fix success rates ranged from 0.00% to 100.00% over 67 sites). Screening data using PDOP did not effectively reduce the location error in the remaining dataset. Removing two-dimensional fixes reduced the mean location error by 10.95 meters, but also resulted in a 54.50% data reduction. Therefore, screening data under the range of conditions sampled here would reduce information on animal movement with minor improvements in accuracy and potentially introduce bias towards more open terrain and vegetation.
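A screening rule of the kind evaluated above (drop 2D fixes and high-PDOP positions, then weigh the accuracy gain against the data loss) can be sketched as follows. The arrays and thresholds are toy assumptions, not the study's data.

```python
import numpy as np

def screen_fixes(errors_m, pdop, is_3d, pdop_max=6.0, require_3d=True):
    """Apply a DOP/fix-dimension screen.

    Returns (mean location error of retained fixes, fraction of data retained),
    the two quantities traded off when deciding whether screening is worthwhile.
    """
    keep = pdop <= pdop_max
    if require_3d:
        keep &= is_3d
    return errors_m[keep].mean(), keep.mean()
```

Comparing the retained-data mean error against the unscreened mean error shows whether the accuracy improvement justifies the data reduction; in the study above it generally did not.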

  18. 6.6-hour inhalation of ozone concentrations from 60 to 87 parts per billion in healthy humans.

    PubMed

    Schelegle, Edward S; Morales, Christopher A; Walby, William F; Marion, Susan; Allen, Roblee P

    2009-08-01

    Identification of the minimal ozone (O3) concentration and/or dose that induces measurable lung function decrements in humans is considered in the risk assessment leading to establishing an appropriate National Ambient Air Quality Standard for O3 that protects public health. To identify and/or predict the minimal mean O3 concentration that produces a decrement in FEV1 and symptoms in healthy individuals completing 6.6-hour exposure protocols. Pulmonary function and subjective symptoms were measured in 31 healthy adults (18-25 yr, male and female, nonsmokers) who completed five 6.6-hour chamber exposures: filtered air and four variable hourly patterns with mean O3 concentrations of 60, 70, 80, and 87 parts per billion (ppb). Compared with filtered air, statistically significant decrements in FEV1 and increases in total subjective symptoms scores (P < 0.05) were measured after exposure to mean concentrations of 70, 80, and 87 ppb O3. The mean percent change in FEV1 (± standard error) at the end of each protocol was 0.80 ± 0.90, -2.72 ± 1.48, -5.34 ± 1.42, -7.02 ± 1.60, and -11.42 ± 2.20% for exposure to filtered air and 60, 70, 80, and 87 ppb O3, respectively. Inhalation of 70 ppb O3 for 6.6 hours, a concentration below the current 8-hour National Ambient Air Quality Standard of 75 ppb, is sufficient to induce statistically significant decrements in FEV1 in healthy young adults.

  19. The effects of warning cues and attention-capturing stimuli on the sustained attention to response task.

    PubMed

    Finkbeiner, Kristin M; Wilson, Kyle M; Russell, Paul N; Helton, William S

    2015-04-01

    Performance on the sustained attention to response task (SART) is often characterized by a speed-accuracy trade-off, and SART performance may be influenced by strategic factors (Head and Helton Conscious Cogn 22: 913-919, 2013). Previous research indicates a significant difference between reliable and unreliable warning cues on response times and errors (commission and omission), suggesting that SART tasks are influenced by strategic factors (Helton et al. Conscious Cogn 20: 1732-1737, 2011; Exp Brain Res 209: 401-407, 2011). With regards to warning stimuli, we chose to use cute images (exhibiting infantile features) during a SART, as previous literature indicates cute images cause participants to engage attention. If viewing cute things makes the viewer exert more attention than normal, then exposure to cute stimuli during the SART should improve performance if SART performance is a measure of perceptual coupling. Reliable warning cues were shown to reduce both response time and errors of commission, and increase errors of omission, relative to unreliable warning cues. Cuteness of the warning stimuli, however, had no significant effect on SART performance. These results suggest the importance of strategic factors in SART performance, not increased attention, and add to the growing literature which suggests the SART is not a good measure of sustained attention, vigilance or perceptual coupling.

  20. Modeling Spatial and Temporal Variability of Residential Air Exchange Rates for the Near-Road Exposures and Effects of Urban Air Pollutants Study (NEXUS)

    EPA Science Inventory

    Air pollution health studies often use outdoor concentrations as exposure surrogates. Failure to account for variability of residential infiltration of outdoor pollutants can induce exposure errors and lead to bias and incorrect confidence intervals in health effect estimates. Th...

  1. A Comparison of Exposure Control Procedures in CATS Using the GPC Model

    ERIC Educational Resources Information Center

    Leroux, Audrey J.; Dodd, Barbara G.

    2016-01-01

    The current study compares the progressive-restricted standard error (PR-SE) exposure control method with the Sympson-Hetter, randomesque, and no exposure control (maximum information) procedures using the generalized partial credit model with fixed- and variable-length CATs and two item pools. The PR-SE method administered the entire item pool…

  2. A Comparison of Exposure Control Procedures in CATs Using the 3PL Model

    ERIC Educational Resources Information Center

    Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.

    2013-01-01

    This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…

  3. The effects of temperature and pressure on airborne exposure concentrations when performing compliance evaluations using ACGIH TLVs and OSHA PELs.

    PubMed

    Stephenson, D J; Lillquist, D R

    2001-04-01

    Occupational hygienists perform air sampling to characterize airborne contaminant emissions, assess occupational exposures, and establish allowable workplace airborne exposure concentrations. To perform these air sampling applications, occupational hygienists often compare an airborne exposure concentration to a corresponding American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value (TLV) or an Occupational Safety and Health Administration (OSHA) permissible exposure limit (PEL). To perform such comparisons, one must understand the physiological assumptions used to establish these occupational exposure limits, the relationship between a workplace airborne exposure concentration and its associated TLV or PEL, and the effect of temperature and pressure on the performance of an accurate compliance evaluation. This article illustrates the correct procedure for performing compliance evaluations using airborne exposure concentrations expressed in both parts per million and milligrams per cubic meter. In so doing, a brief discussion is given on the physiological assumptions used to establish TLVs and PELs. It is further shown how an accurate compliance evaluation is fundamentally based on comparison of a measured work site exposure dose (derived from the sampling site exposure concentration estimate) to an estimated acceptable exposure dose (derived from the occupational exposure limit concentration). In addition, this article correctly illustrates the effect that atmospheric temperature and pressure have on airborne exposure concentrations and the eventual performance of a compliance evaluation. This article also reveals that under fairly moderate conditions of temperature and pressure, 30 degrees C and 670 torr, a misunderstanding of how varying atmospheric conditions affect concentration values can lead to a 15 percent error in assessing compliance.
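The temperature and pressure adjustment described above follows from the ideal-gas molar volume: converting between ppm and mg/m³ at non-reference conditions scales the molar volume by (T/T_ref)·(P_ref/P). The sketch below reproduces the article's example of roughly a 15% shift at 30 °C and 670 torr; the function names are ours.

```python
R_L_TORR = 62.3637  # gas constant, L*torr/(mol*K)

def molar_volume_L(temp_c, pressure_torr):
    # Ideal-gas molar volume (L/mol) at the sampling conditions.
    return R_L_TORR * (temp_c + 273.15) / pressure_torr

def ppm_to_mg_m3(ppm, mol_weight, temp_c=25.0, pressure_torr=760.0):
    # mg/m^3 = ppm * MW / molar volume (24.45 L at NTP: 25 C, 760 torr).
    return ppm * mol_weight / molar_volume_L(temp_c, pressure_torr)

def mg_m3_to_ppm(mg_m3, mol_weight, temp_c=25.0, pressure_torr=760.0):
    return mg_m3 * molar_volume_L(temp_c, pressure_torr) / mol_weight
```

At 30 °C and 670 torr the molar volume is about 15% larger than at NTP, so ignoring the correction misstates a mass-per-volume concentration by about that factor, which is the compliance-assessment error the article quantifies.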

  4. Measurement errors in the assessment of exposure to solar ultraviolet radiation and its impact on risk estimates in epidemiological studies.

    PubMed

    Dadvand, Payam; Basagaña, Xavier; Barrera-Gómez, Jose; Diffey, Brian; Nieuwenhuijsen, Mark

    2011-07-01

To date, many studies addressing long-term effects of ultraviolet radiation (UVR) exposure on human health have relied on a range of surrogates, such as the latitude of the city of residence, ambient UVR levels, or time spent outdoors, to estimate personal UVR exposure. This study aimed to differentiate the contributions of personal behaviour and ambient UVR levels to facial UVR exposure and to evaluate the impact of using UVR exposure surrogates on detecting exposure-outcome associations. Data on time-activity, holiday behaviour, and ambient UVR levels were obtained for adult (aged 25-55 years) indoor workers in six European cities: Athens (37°N), Grenoble (45°N), Milan (45°N), Prague (50°N), Oxford (52°N), and Helsinki (60°N). Annual facial UVR exposure levels were simulated for 10,000 subjects for each city, using a behavioural UVR exposure model. Within-city variation in facial UVR exposure was three times larger than the variation between cities, mainly because of time-activity patterns. In univariate models, ambient UVR levels, latitude and time spent outdoors each accounted for less than one fourth of the variation in facial exposure levels. Using these surrogates to assess long-term UVR exposure required more than four times as many participants to achieve statistical power similar to that of a study using the simulated facial exposure. Our results emphasise the importance of integrating both personal behaviour and ambient UVR levels/latitude into exposure assessment methodologies.

  5. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  6. Interdisciplinary Coordination Reviews: A Process to Reduce Construction Costs.

    ERIC Educational Resources Information Center

    Fewell, Dennis A.

    1998-01-01

    Interdisciplinary Coordination design review is instrumental in detecting coordination errors and omissions in construction documents. Cleansing construction documents of interdisciplinary coordination errors reduces time extensions, the largest source of change orders, and limits exposure to liability claims. Improving the quality of design…

  7. Effects of exposure estimation errors on estimated exposure-response relations for PM2.5.

    PubMed

Cox, Louis Anthony (Tony)

    2018-07-01

Associations between fine particulate matter (PM2.5) exposure concentrations and a wide variety of undesirable outcomes, from autism and auto theft to elderly mortality, suicide, and violent crime, have been widely reported. Influential articles have argued that reducing National Ambient Air Quality Standards for PM2.5 is desirable to reduce these outcomes. Yet, other studies have found that reducing black smoke and other particulate matter by as much as 70% and dozens of micrograms per cubic meter has not detectably affected all-cause mortality rates even after decades, despite strong, statistically significant positive exposure concentration-response (C-R) associations between them. This paper examines whether this disconnect between association and causation might be explained in part by ignored estimation errors in estimated exposure concentrations. We use EPA air quality monitor data from the Los Angeles area of California to examine the shapes of estimated C-R functions for PM2.5 when the true C-R functions are assumed to be step functions with well-defined response thresholds. The estimated C-R functions mistakenly show risk as smoothly increasing with concentrations even well below the response thresholds, thus incorrectly predicting substantial risk reductions from reductions in concentrations that do not affect health risks. We conclude that ignored estimation errors obscure the shapes of true C-R functions, including possible thresholds, possibly leading to unrealistic predictions of the changes in risk caused by changing exposures. Instead of estimating improvements in public health per unit reduction (e.g., per 10 µg/m3 decrease) in average PM2.5 concentrations, it may be essential to consider how interventions change the distributions of exposure concentrations. Copyright © 2018 Elsevier Inc. All rights reserved.
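The obscuring effect described above can be reproduced with a minimal simulation: assume a true step-function C-R relation with a sharp threshold, add classical measurement error to the exposure, and the apparent risk climbs smoothly across bins of observed exposure well below the threshold (all parameter values here are arbitrary illustrations, not the paper's EPA-monitor analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_exposure = rng.uniform(0, 40, n)            # e.g. ug/m3
response = (true_exposure > 20).astype(float)    # step C-R: risk only above 20
observed = true_exposure + rng.normal(0, 5, n)   # classical measurement error

# Mean response by bins of *observed* exposure: apparent risk rises
# smoothly even in bins lying entirely below the true threshold of 20,
# because some high true exposures are mismeasured as low ones.
bins = np.arange(0, 41, 4)
idx = np.digitize(observed, bins)
apparent = [response[idx == i].mean() for i in range(1, len(bins))]
```

Plotting `apparent` against the bin midpoints would show the smooth, thresholdless curve the abstract warns about, even though the true C-R function is flat at zero below 20.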

  8. Prompt Optical Observations of Gamma-Ray Bursts

    NASA Astrophysics Data System (ADS)

    Akerlof, Carl; Balsano, Richard; Barthelmy, Scott; Bloch, Jeff; Butterworth, Paul; Casperson, Don; Cline, Tom; Fletcher, Sandra; Frontera, Fillippo; Gisler, Galen; Heise, John; Hills, Jack; Hurley, Kevin; Kehoe, Robert; Lee, Brian; Marshall, Stuart; McKay, Tim; Pawl, Andrew; Piro, Luigi; Szymanski, John; Wren, Jim

    2000-03-01

The Robotic Optical Transient Search Experiment (ROTSE) seeks to measure simultaneous and early afterglow optical emission from gamma-ray bursts (GRBs). A search for optical counterparts to six GRBs with localization errors of 1 deg² or better produced no detections. The earliest limiting sensitivity is m_ROTSE > 13.1 at 10.85 s (5 s exposure) after the gamma-ray rise, and the best limit is m_ROTSE > 16.0 at 62 minutes (897 s exposure). These are the most stringent limits obtained for GRB optical counterpart brightness in the first hour after the burst. Consideration of the gamma-ray fluence and peak flux for these bursts and for GRB 990123 indicates that there is not a strong positive correlation between optical flux and gamma-ray emission.

  9. Low power arcjet system spacecraft impacts

    NASA Technical Reports Server (NTRS)

    Pencil, Eric J.; Sarmiento, Charles J.; Lichtin, D. A.; Palchefsky, J. W.; Bogorad, A. L.

    1993-01-01

    Potential plume contamination of spacecraft surfaces was investigated by positioning spacecraft material samples relative to an arcjet thruster. Samples in the simulated solar array region were exposed to the cold gas arcjet plume for 40 hrs to address concerns about contamination by backstreaming diffusion pump oil. Except for one sample, no significant changes were measured in absorptance and emittance within experimental error. Concerns about surface property degradation due to electrostatic discharges led to the investigation of the discharge phenomenon of charged samples during arcjet ignition. Short duration exposure of charged samples demonstrated that potential differences are consistently and completely eliminated within the first second of exposure to a weakly ionized plume. The spark discharge mechanism was not the discharge phenomenon. The results suggest that the arcjet could act as a charge control device on spacecraft.

  10. A comparative study of two hazard handling training methods for novice drivers.

    PubMed

    Wang, Y B; Zhang, W; Salvendy, G

    2010-10-01

The effectiveness of two hazard perception training methods, simulation-based error training (SET) and video-based guided error training (VGET), for novice drivers' hazard handling performance was tested, compared, and analyzed. Thirty-two novice drivers participated in the hazard perception training. Half of the participants were trained using SET by making errors and/or experiencing accidents while driving with a desktop simulator. The other half were trained using VGET by watching prerecorded video clips of errors and accidents that were made by other people. The two groups had exposure to equal numbers of errors for each training scenario. All the participants were tested and evaluated for hazard handling on a full cockpit driving simulator one week after training. Hazard handling performance and hazard response were measured in this transfer test. Both hazard handling performance scores and hazard response distances were significantly better for the SET group than the VGET group. Furthermore, the SET group had more metacognitive activities and intrinsic motivation. SET also seemed more effective in changing participants' confidence, but the result did not reach the significance level. SET exhibited a higher training effectiveness of hazard response and handling than VGET in the simulated transfer test. The superiority of SET may stem from the higher levels of metacognition and intrinsic motivation observed during training. Future research should be conducted to assess whether the advantages of error training are still effective under real road conditions.

  11. The use of heterodyne speckle photogrammetry to measure high-temperature strain distributions

    NASA Technical Reports Server (NTRS)

    Stetson, K. A.

    1983-01-01

Thermal and mechanical strains have been measured on samples of a common material used in jet engine burner liners, which were heated from room temperature to 870 C and cooled back to 220 C in a laboratory furnace. The physical geometry of the sample surface was recorded at selected temperatures by means of a set of twelve single-exposure specklegrams. Sequential pairs of specklegrams were compared in a heterodyne interferometer, which allowed high-precision measurement of differential displacements. Good speckle correlation was also observed between the first and last specklegrams, which demonstrated the durability of the surface microstructure and permitted a check on accumulated errors. Agreement with calculated thermal expansion was to within a few hundred microstrain over a range of fourteen thousand.

  12. Weathering the storm: hurricanes and birth outcomes.

    PubMed

    Currie, Janet; Rossin-Slater, Maya

    2013-05-01

A growing literature suggests that stressful events in pregnancy can have negative effects on birth outcomes. Some of the estimates in this literature may be affected by small samples, omitted variables, endogenous mobility in response to disasters, and errors in the measurement of gestation, as well as by a mechanical correlation between longer gestation and the probability of having been exposed. We use millions of individual birth records to examine the effects of exposure to hurricanes during pregnancy, and the sensitivity of the estimates to these econometric problems. We find that exposure to a hurricane during pregnancy increases the probability of abnormal conditions of the newborn such as being on a ventilator more than 30 minutes and meconium aspiration syndrome (MAS). Although we are able to reproduce previous estimates of effects on birth weight and gestation, our results suggest that measured effects of stressful events on these outcomes are sensitive to specification and it is preferable to use more sensitive indicators of newborn health. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Weathering the Storm: Hurricanes and Birth Outcomes

    PubMed Central

    Currie, Janet

    2013-01-01

    A growing literature suggests that stressful events in pregnancy can have negative effects on birth outcomes. Some of the estimates in this literature may be affected by small samples, omitted variables, endogenous mobility in response to disasters, and errors in the measurement of gestation, as well as by a mechanical correlation between longer gestation and the probability of having been exposed. We use millions of individual birth records to examine the effects of exposure to hurricanes during pregnancy, and the sensitivity of the estimates to these econometric problems. We find that exposure to a hurricane during pregnancy increases the probability of abnormal conditions of the newborn such as being on a ventilator more than 30 minutes and meconium aspiration syndrome (MAS). Although we are able to reproduce previous estimates of effects on birth weight and gestation, our results suggest that measured effects of stressful events on these outcomes are sensitive to specification and it is preferable to use more sensitive indicators of newborn health. PMID:23500506

  14. Trauma exposure and endothelial function among midlife women.

    PubMed

    Thurston, Rebecca C; Barinas-Mitchell, Emma; von Känel, Roland; Chang, Yuefang; Koenen, Karestan C; Matthews, Karen A

    2018-04-01

Trauma is a potent exposure that can have implications for health. However, little research has considered whether trauma exposure is related to endothelial function, a key process in the pathophysiology of cardiovascular disease (CVD). We tested whether exposure to traumatic experiences was related to poorer endothelial function among midlife women, independent of CVD risk factors, demographic factors, psychosocial factors, and a history of childhood abuse. In all, 272 nonsmoking perimenopausal and postmenopausal women aged 40 to 60 years without clinical CVD completed the Brief Trauma Questionnaire, the Child Trauma Questionnaire, physical measures, a blood draw, and a brachial ultrasound for assessment of brachial artery flow-mediated dilation (FMD). Relations between trauma and FMD were tested in linear regression models controlling for baseline vessel diameter, demographics, depression/anxiety, CVD risk factors, health behaviors, and, additionally, a history of childhood abuse. Over 60% of the sample had at least one traumatic exposure, and 18% had three or more exposures. A greater number of traumatic exposures was associated with lower FMD, indicating poorer endothelial function in multivariable models (beta, β [standard error, SE] -1.05 [0.40], P = 0.01). Relations between trauma exposure and FMD were particularly pronounced for three or more trauma exposures (β [SE] -1.90 [0.71], P = 0.008, relative to no exposures, multivariable). A greater number of traumatic exposures was associated with poorer endothelial function. Relations were not explained by demographics, CVD risk factors, mood/anxiety, or by a history of childhood abuse. Women with greater exposure to trauma over their lifetime may be at elevated CVD risk.

  15. Previous Gardening Experience and Gardening Enjoyment Is Related to Vegetable Preferences and Consumption Among Low-Income Elementary School Children.

    PubMed

    Evans, Alexandra; Ranjit, Nalini; Fair, Cori N; Jennings, Rose; Warren, Judith L

    2016-10-01

To examine whether gardening experience and enjoyment are associated with vegetable exposure, preferences, and consumption among low-income third-grade children. Cross-sectional study design, using baseline data from the Texas! Grow! Eat! Go! study, conducted in 28 Title I elementary schools located in different counties in Texas. Participants were third-grade students (n = 1,326, 42% Hispanic). Main outcome measures were gardening experience, gardening enjoyment, vegetable exposure, preference, and consumption. Random-effects regression models, adjusted for age, sex, ethnicity, and body mass index percentile of the child, estimated means and standard errors of vegetable consumption, exposure, and preference by levels of gardening experience and enjoyment. Wald χ2 tests evaluated the significance of differences in means of outcomes across levels of gardening experience and enjoyment. Children with more gardening experience had greater vegetable exposure and higher vegetable preference and consumed more vegetables compared with children who reported less gardening experience. Those who reported that they enjoyed gardening had the highest levels of vegetable exposure, preference, and consumption. Garden-based interventions can have an important and positive effect on children's vegetable consumption by increasing exposure to fun gardening experiences. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  16. High definition video teaching module for learning neck dissection.

    PubMed

    Mendez, Adrian; Seikaly, Hadi; Ansari, Kal; Murphy, Russell; Cote, David

    2014-03-25

Video teaching modules are proven effective tools for enhancing student competencies and technical skills in the operating room. Integration into post-graduate surgical curricula, however, continues to pose a challenge in modern surgical education. To date, video teaching modules for neck dissection have yet to be described in the literature. The aim was to develop and validate an HD video-based teaching module (HDVM) to help instruct post-graduate otolaryngology trainees in performing neck dissection. This prospective study included 6 intermediate to senior otolaryngology residents. All consented subjects first performed a control selective neck dissection and were then exposed to the video teaching module. Following a washout period, a repeat procedure was performed. Recordings of both sets of neck dissections were de-identified, reviewed by an independent evaluator, and scored using the Observational Clinical Human Reliability Assessment (OCHRA) system. In total, 91 surgical errors were made before exposure to the HDVM and 41 after, representing a 55% decrease in error occurrence; the difference between the two groups was significant. Similarly, 66 and 24 staff takeover events occurred before and after HDVM exposure, respectively, a statistically significant 64% decrease. HDVM is a useful adjunct to classical surgical training: residents made significantly fewer errors following exposure to the HD video module, and significantly fewer staff takeover events occurred.

  17. Air Pollution Exposure Model for Individuals (EMI) in Health Studies: Evaluation for Ambient PM2.5 in Central North Carolina

    EPA Science Inventory

    Air pollution health studies of fine particulate matter (diameter ≤2.5 μm, PM2.5) often use outdoor concentrations as exposure surrogates. Failure to account for variability of indoor infiltration of ambient PM2.5 and time indoors can induce exposure errors. We developed an...

  18. Genotype-Based Association Mapping of Complex Diseases: Gene-Environment Interactions with Multiple Genetic Markers and Measurement Error in Environmental Exposures

    PubMed Central

Lobach, Iryna; Fan, Ruzong; Carroll, Raymond J.

    2011-01-01

With the advent of dense single nucleotide polymorphism genotyping, population-based association studies have become the major tools for identifying human disease genes and for fine gene mapping of complex traits. We develop a genotype-based approach for association analysis of case-control studies of gene-environment interactions when environmental factors are measured with error and genotype data are available on multiple genetic markers. To directly use the observed genotype data, we propose two genotype-based models: genotype effect and additive effect models. Our approach offers several advantages. First, the proposed risk functions can directly incorporate the observed genotype data while modeling the linkage disequilibrium information in the regression coefficients, thus eliminating the need to infer haplotype phase. Compared with the haplotype-based approach, an estimating procedure based on the proposed methods can be much simpler and significantly faster. In addition, there is no potential risk due to haplotype phase estimation. Further, by fitting the proposed models, it is possible to analyze the risk alleles/variants of complex diseases, including their dominant or additive effects. To model measurement error, we adopt the pseudo-likelihood method of Lobach et al. [2008]. Performance of the proposed method is examined using simulation experiments. An application of our method is illustrated using a population-based case-control study of association between calcium intake and the risk of colorectal adenoma development. PMID:21031455

  19. Personal child and mother carbon monoxide exposures and kitchen levels: methods and results from a randomized trial of woodfired chimney cookstoves in Guatemala (RESPIRE).

    PubMed

    Smith, Kirk R; McCracken, John P; Thompson, Lisa; Edwards, Rufus; Shields, Kyra N; Canuz, Eduardo; Bruce, Nigel

    2010-07-01

    During the first randomized intervention trial (RESPIRE: Randomized Exposure Study of Pollution Indoors and Respiratory Effects) in air pollution epidemiology, we pioneered application of passive carbon monoxide (CO) diffusion tubes to measure long-term personal exposures to woodsmoke. Here we report on the protocols and validations of the method, trends in personal exposure for mothers and their young children, and the efficacy of the introduced improved chimney stove in reducing personal exposures and kitchen concentrations. Passive diffusion tubes originally developed for industrial hygiene applications were deployed on a quarterly basis to measure 48-hour integrated personal carbon monoxide exposures among 515 children 0-18 months of age and 532 mothers aged 15-55 years and area samples in a subsample of 77 kitchens, in households randomized into control and intervention groups. Instrument comparisons among types of passive diffusion tubes and against a continuous electrochemical CO monitor indicated that tubes responded nonlinearly to CO, and regression calibration was used to reduce this bias. Before stove introduction, the baseline arithmetic (geometric) mean 48-h child (n=270), mother (n=529) and kitchen (n=65) levels were, respectively, 3.4 (2.8), 3.4 (2.8) and 10.2 (8.4) p.p.m. The between-group analysis of the 3355 post-baseline measurements found CO levels to be significantly lower among the intervention group during the trial period: kitchen levels: -90%; mothers: -61%; and children: -52% in geometric means. No significant deterioration in stove effect was observed over the 18 months of surveillance. The reliability of these findings is strengthened by the large sample size made feasible by these unobtrusive and inexpensive tubes, measurement error reduction through instrument calibration, and a randomized, longitudinal study design. 
These results, from the first randomized trial of improved household energy technology in a developing country, demonstrate that a simple chimney stove can substantially reduce chronic exposures to harmful indoor air pollutants among women and infants.
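The regression-calibration step mentioned above, which corrects the tubes' nonlinear response against a reference monitor, might be sketched as follows (a generic illustration with simulated co-location data; the square-root tube response and all coefficients are invented for the example, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated co-location data: a reference electrochemical monitor gives
# the "true" 48-h CO level; the passive tube reads it through a
# hypothetical nonlinear response curve with some noise.
true_co = rng.uniform(0.5, 15, 200)                      # ppm
tube = 1.8 * np.sqrt(true_co) + rng.normal(0, 0.2, 200)  # invented response

# Regression calibration: fit true CO as a (quadratic) function of the
# tube reading on the co-located subset, then replace raw tube values
# with calibrated predictions before analysis.
coeffs = np.polyfit(tube, true_co, deg=2)
calibrated = np.polyval(coeffs, tube)

raw_bias = np.mean(tube - true_co)        # raw readings are badly biased
cal_bias = np.mean(calibrated - true_co)  # calibration removes the bias
```

In the actual study the calibrated predictions, not the raw tube readings, would then stand in for the 48-h exposures in the between-group comparisons.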

  20. Personal child and mother carbon monoxide exposures and kitchen levels: Methods and results from a randomized trial of woodfired chimney cookstoves in Guatemala (RESPIRE)

    PubMed Central

    SMITH, KIRK R.; McCRACKEN, JOHN P.; THOMPSON, LISA; EDWARDS, RUFUS; SHIELDS, KYRA N.; CANUZ, EDUARDO; BRUCE, NIGEL

    2015-01-01

    During the first randomized intervention trial (RESPIRE: Randomized Exposure Study of Pollution Indoors and Respiratory Effects) in air pollution epidemiology, we pioneered application of passive carbon monoxide (CO) diffusion tubes to measure long-term personal exposures to woodsmoke. Here we report on the protocols and validations of the method, trends in personal exposure for mothers and their young children, and the efficacy of the introduced improved chimney stove in reducing personal exposures and kitchen concentrations. Passive diffusion tubes originally developed for industrial hygiene applications were deployed on a quarterly basis to measure 48-hour integrated personal carbon monoxide exposures among 515 children 0–18 months of age and 532 mothers aged 15–55 years and area samples in a subsample of 77 kitchens, in households randomized into control and intervention groups. Instrument comparisons among types of passive diffusion tubes and against a continuous electrochemical CO monitor indicated that tubes responded nonlinearly to CO, and regression calibration was used to reduce this bias. Before stove introduction, the baseline arithmetic (geometric) mean 48-h child (n=270), mother (n=529) and kitchen (n=65) levels were, respectively, 3.4 (2.8), 3.4 (2.8) and 10.2 (8.4) p.p.m. The between-group analysis of the 3355 post-baseline measurements found CO levels to be significantly lower among the intervention group during the trial period: kitchen levels: −90%; mothers: −61%; and children: −52% in geometric means. No significant deterioration in stove effect was observed over the 18 months of surveillance. The reliability of these findings is strengthened by the large sample size made feasible by these unobtrusive and inexpensive tubes, measurement error reduction through instrument calibration, and a randomized, longitudinal study design. 
These results, from the first randomized trial of improved household energy technology in a developing country, demonstrate that a simple chimney stove can substantially reduce chronic exposures to harmful indoor air pollutants among women and infants. PMID:19536077

  1. Estimation of personal PM2.5 and BC exposure by a modeling approach - Results of a panel study in Shanghai, China.

    PubMed

    Chen, Chen; Cai, Jing; Wang, Cuicui; Shi, Jingjin; Chen, Renjie; Yang, Changyuan; Li, Huichu; Lin, Zhijing; Meng, Xia; Zhao, Ang; Liu, Cong; Niu, Yue; Xia, Yongjie; Peng, Li; Zhao, Zhuohui; Chillrud, Steven; Yan, Beizhan; Kan, Haidong

    2018-06-06

Epidemiologic studies of PM2.5 (particulate matter with aerodynamic diameter ≤2.5 μm) and black carbon (BC) typically use ambient measurements as exposure proxies, given that individual measurement is infeasible among large populations. Failure to account for variation in exposure will bias epidemiologic study results. The ability of ambient measurement to serve as a proxy of exposure in regions with heavy pollution is untested. We aimed to investigate effects of potential determinants and to estimate PM2.5 and BC exposure by a modeling approach. We collected 417 24-h personal PM2.5 and 130 72-h personal BC measurements from a panel of 36 nonsmoking college students in Shanghai, China. Each participant underwent 4 rounds of three consecutive 24-h sampling sessions from December 2014 to July 2015. We applied backwards regression to construct mixed-effect models incorporating all accessible variables of ambient pollution, climate and time-location information for exposure prediction. All models were evaluated by marginal R2 and root mean square error (RMSE) from a leave-one-out cross-validation (LOOCV) and a 10-fold cross-validation (10-fold CV). Personal PM2.5 was 47.6% lower than the ambient level, with a mean (±standard deviation, SD) of 39.9 (±32.1) μg/m3, whereas personal BC (6.1 (±2.8) μg/m3) was about one-fold higher than the corresponding ambient concentrations. Ambient levels were the most significant determinants of PM2.5 and BC exposure. Meteorological and season indicators were also important predictors. Our final models predicted 75% of the variance in 24-h personal PM2.5 and 72-h personal BC. LOOCV analysis showed an R2 (RMSE) of 0.73 (0.40) for PM2.5 and 0.66 (0.27) for BC. Ten-fold CV analysis showed an R2 (RMSE) of 0.73 (0.41) for PM2.5 and 0.68 (0.26) for BC. We used readily accessible data and established intuitive models that can predict PM2.5 and BC exposure.
This modeling approach can be a feasible solution for PM exposure estimation in epidemiological studies. Copyright © 2018 Elsevier Ltd. All rights reserved.
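The leave-one-out cross-validation used to evaluate the exposure models can be sketched with an ordinary linear model (a deliberately simplified stand-in: no random effects, and the synthetic predictors and coefficients are assumptions for illustration, not the study's fitted model):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120

# Synthetic panel: personal PM2.5 driven by ambient PM2.5 and temperature,
# with invented coefficients and noise.
ambient = rng.uniform(20, 120, n)
temp = rng.uniform(-5, 35, n)
personal = 0.5 * ambient - 0.3 * temp + rng.normal(0, 8, n)

X = np.column_stack([np.ones(n), ambient, temp])

# Leave-one-out CV: refit the regression n times, each time predicting
# the single held-out observation from the remaining n-1.
preds = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    beta, *_ = np.linalg.lstsq(X[mask], personal[mask], rcond=None)
    preds[i] = X[i] @ beta

# Summaries analogous to the abstract's LOOCV R2 and RMSE.
ss_res = np.sum((personal - preds) ** 2)
ss_tot = np.sum((personal - personal.mean()) ** 2)
loocv_r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(ss_res / n)
```

Because every prediction comes from a model that never saw the held-out point, `loocv_r2` gives an honest estimate of out-of-sample performance, which is why the study reports it alongside the in-sample fit.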

  2. Stunting, adiposity, and the individual-level "dual burden" among urban lowland and rural highland Peruvian children.

    PubMed

    Pomeroy, Emma; Stock, Jay T; Stanojevic, Sanja; Miranda, J Jaime; Cole, Tim J; Wells, Jonathan C K

    2014-01-01

    The causes of the "dual burden" of stunting and obesity remain unclear, and its existence at the individual level varies between populations. We investigate whether the individual dual burden differentially affects low socioeconomic status Peruvian children from contrasting environments (urban lowlands and rural highlands), and whether tibia length can discount the possible autocorrelation between adiposity proxies and height due to height measurement error. Stature, tibia length, weight, and waist circumference were measured in children aged 3-8.5 years (n = 201). Height and body mass index (BMI) z scores were calculated using international reference data. Age-sex-specific centile curves were also calculated for height, BMI, and tibia length. Adiposity proxies (BMI z score, waist circumference-height ratio (WCHtR)) were regressed on height and also on tibia length z scores. Regression model interaction terms between site (highland vs. lowland) and height indicate that relationships between adiposity and linear growth measures differed significantly between samples (P < 0.001). Height was positively associated with BMI among urban lowland children, and more weakly with WCHtR. Among rural highland children, height was negatively associated with WCHtR but unrelated to BMI. Similar results using tibia length rather than stature indicate that stature measurement error was not a major concern. Lowland and rural highland children differ in their patterns of stunting, BMI, and WCHtR. These contrasts likely reflect environmental differences and overall environmental stress exposure. Tibia length or knee height can be used to assess the influence of measurement error in height on the relationship between stature and BMI or WCHtR. Copyright © 2014 Wiley Periodicals, Inc.

  3. Polybrominated diphenyl ether serum concentrations in a Californian population of children, their parents, and older adults: an exposure assessment study.

    PubMed

    Wu, Xiangmei May; Bennett, Deborah H; Moran, Rebecca E; Sjödin, Andreas; Jones, Richard S; Tancredi, Daniel J; Tulve, Nicolle S; Clifton, Matthew Scott; Colón, Maribel; Weathers, Walter; Hertz-Picciotto, Irva

    2015-03-14

    Polybrominated diphenyl ethers (PBDEs) are used as flame retardants in many household items. Given concerns over their potential adverse health effects, we identified predictors and evaluated temporal changes of PBDE serum concentrations. PBDE serum concentrations were measured in young children (2-8 years old; N = 67), parents of young children (<55 years old; N = 90), and older adults (≥55 years old; N = 59) in California, with concurrent floor wipe samples collected in participants' homes in 2008-2009. We also measured serum concentrations one year later in a subset of children (N = 19) and parents (N = 42). PBDE serum concentrations in children were significantly higher than in adults. Floor wipe concentration is a significant predictor of serum BDE-47, 99, 100 and 154. Positive associations were observed between the intake frequency of canned meat and serum concentrations of BDE-47, 99 and 154, between canned meat entrees and BDE-154 and 209, as well as between tuna and white fish and BDE-153. The model with the floor wipe concentration and food intake frequencies explained up to 40% of the mean square prediction error of some congeners. Lower home values and renting (vs. owning) a home were associated with higher serum concentrations of BDE-47, 99 and 100. Serum concentrations measured one year apart were strongly correlated as expected (r = 0.70-0.97) with a slight decreasing trend. Floor wipe concentration, food intake frequency, and housing characteristics can explain 12-40% of the prediction error of PBDE serum concentrations. Decreasing temporal trends should be considered when characterizing long-term exposure.

  4. [Evaluation of an Experimental Production Wireless Dose Monitoring System for Radiation Exposure Management of Medical Staff].

    PubMed

    Fujibuchi, Toshioh; Murazaki, Hiroo; Kuramoto, Taku; Umedzu, Yoshiyuki; Ishigaki, Yung

    2015-08-01

Because of the more advanced and more complex procedures in interventional radiology, longer treatment times have become necessary. It is therefore important to determine the exposure doses received by operators and patients. The aim of our study was to evaluate an experimental production wireless dose monitoring system for pulsed radiation in diagnostic X-ray. The energy, dose rate, and pulse fluoroscopy dependence were evaluated as the basic characteristics of this system for diagnostic X-ray using a fully digital fluoroscopy system. The error in the 1 cm dose equivalent rate was less than 15% from 35.1 keV to 43.2 keV with energy correction using a metal filter. The dose rate response of the system was measured accurately and was highly linear up to 100 μSv/h. The system showed a constant response to pulse fluoroscopy. With improvements to its high-dose-rate and energy characteristics, this system will become a useful wireless dosimeter for individual exposure management.

  5. Seasonal and diurnal gas exchange differences in ozone-sensitive common milkweed (Asclepias syriaca L.) in relation to ozone uptake.

    PubMed

    Bergweiler, Chris; Manning, William J; Chevone, Boris I

    2008-03-01

    Stomatal conductance and net photosynthesis of common milkweed (Asclepias syriaca L.) plants in two different soil moisture regimes were directly quantified and subsequently modeled over an entire growing season. Direct measurements captured the dynamic response of stomatal conductance to changing environmental conditions throughout the day, as well as declining gas exchange and carbon assimilation throughout the growth period beyond an early summer maximum. This phenomenon was observed in plants grown both with and without supplemental soil moisture, the latter of which should theoretically mitigate against harmful physiological effects caused by exposure to ozone. Seasonally declining rates of stomatal conductance were found to be substantial and incorporated into models, making them less susceptible to the overestimations of effective exposure that are an inherent source of error in ozone exposure indices. The species-specific evidence presented here supports the integration of dynamic physiological processes into flux-based modeling approaches for the prediction of ozone injury in vegetation.

  6. Prenatal drug exposure and selective attention in preschoolers.

    PubMed

    Noland, Julia S; Singer, Lynn T; Short, Elizabeth J; Minnes, Sonia; Arendt, Robert E; Kirchner, H Lester; Bearer, Cynthia

    2005-01-01

    Deficits in sustained attention and impulsivity have previously been demonstrated in preschoolers prenatally exposed to cocaine. We assessed an additional component of attention, selective attention, in a large, poly-substance cocaine-exposed cohort of 4 year olds and their at-risk comparison group. Employing postpartum maternal report and biological assay, we assigned children to overlapping exposed and complementary control groups for maternal use of cocaine, alcohol, marijuana, and cigarettes. Maternal pregnancy use of cocaine and use of cigarettes were both associated with increased commission errors, indicative of inferior selective attention. Severity of maternal use of marijuana during pregnancy was positively correlated with omission errors, suggesting impaired sustained attention. Substance exposure effects were independent of maternal postpartum psychological distress, birth mother cognitive functioning, current caregiver functioning, other substance exposures and child concurrent verbal IQ.

  7. Beam localization in HIFU temperature measurements using thermocouples, with application to cooling by large blood vessels.

    PubMed

    Dasgupta, Subhashish; Banerjee, Rupak K; Hariharan, Prasanna; Myers, Matthew R

    2011-02-01

    Experimental studies of thermal effects in high-intensity focused ultrasound (HIFU) procedures are often performed with the aid of fine wire thermocouples positioned within tissue phantoms. Thermocouple measurements are subject to several types of error which must be accounted for before reliable inferences can be made on the basis of the measurements. Thermocouple artifact due to viscous heating is one source of error. A second is the uncertainty regarding the position of the beam relative to the target location or the thermocouple junction, due to the error in positioning the beam at the junction. This paper presents a method for determining the location of the beam relative to a fixed pair of thermocouples. The localization technique reduces the uncertainty introduced by positioning errors associated with very narrow HIFU beams. The technique is presented in the context of an investigation into the effect of blood flow through large vessels on the efficacy of HIFU procedures targeted near the vessel. Application of the beam localization method allowed conclusions regarding the effects of blood flow to be drawn from previously inconclusive (because of localization uncertainties) data. Comparison of the position-adjusted transient temperature profiles for flow rates of 0 and 400 ml/min showed that blood flow can reduce temperature elevations by more than 10% when the HIFU focus is within 2 mm of the vessel wall. At acoustic power levels of 17.3 and 24.8 W there is a 20- to 70-fold decrease in thermal dose due to the convective cooling effect of blood flow, implying a shrinkage in lesion size. The beam-localization technique also revealed the level of thermocouple artifact as a function of sonication time, providing investigators with an indication of the quality of thermocouple data for a given exposure time. The maximum artifact was found to be double the measured temperature rise during the initial few seconds of sonication.
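The 20- to 70-fold change in thermal dose reported above is easiest to appreciate with a worked formula. The abstract does not state which dose metric was used; the sketch below assumes the Sapareto-Dewey cumulative equivalent minutes at 43 °C (CEM43) commonly applied in HIFU work.

```python
# Hedged sketch: CEM43 thermal dose (Sapareto-Dewey formulation) -- an
# assumption, since the study's actual dose computation is not given here.

def cem43(temps_c, dt_s):
    """Cumulative equivalent minutes at 43 C for a sampled temperature
    history temps_c (degrees C) with a uniform time step dt_s (seconds)."""
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25   # standard rate breakpoint at 43 C
        dose += (dt_s / 60.0) * r ** (43.0 - t)
    return dose

# By definition, holding 43 C for 60 s yields exactly 1 equivalent minute.
print(round(cem43([43.0] * 60, 1.0), 9))  # → 1.0
```

Because the dose is exponential in temperature, even the roughly 10% temperature reduction from blood-flow cooling can translate into the order-of-magnitude dose reductions the study reports.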

  8. Variability in quartz exposure in the construction industry: implications for assessing exposure-response relations.

    PubMed

    Tjoe Nij, Evelyn; Höhr, Doris; Borm, Paul; Burstyn, Igor; Spierings, Judith; Steffens, Friso; Lumens, Mieke; Spee, Ton; Heederik, Dick

    2004-03-01

    The aims of this study were to determine the implications of inter- and intraindividual variation in exposure to respirable (quartz) dust, and of heterogeneity in dust characteristics, for epidemiologic research in construction workers. Full-shift personal measurements (n = 67) from 34 construction workers were collected. The between-worker and day-to-day variances of quartz and respirable dust exposure were estimated using mixed models. Heterogeneity in dust characteristics was evaluated by electron microscopic analysis and electron spin resonance. A grouping strategy based on job title resulted in a 2- and 3.5-fold reduction in the expected attenuation of a hypothetical exposure-response relation for respirable dust and quartz exposure, respectively, compared to an individual-based approach. The material worked on explained most of the between-worker variance in respirable dust and quartz exposure. However, for risk assessment in epidemiology, grouping workers based on the materials they work on is not practical. Microscopic characterization of dust samples showed large quantities of aluminum silicates and large quantities of smaller particles, resulting in a D50 between 1 and 2 μm. For risk analysis, job title can be used to create exposure groups, although error is introduced by the heterogeneity of dust produced by different construction workers' activities and by the nonuniformity of exposure groups. A grouping scheme based on materials worked on would be superior for both exposure and risk assessment, but is not practical when assessing past exposure. In dust from construction sites, factors are present that are capable of influencing the toxicological potency.
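The attenuation arithmetic behind the grouping comparison above can be sketched with the classical variance-components formula. This is an illustrative assumption, not the paper's mixed-model code, and the variance values below are hypothetical.

```python
# Expected multiplicative bias (attenuation) of an exposure-response slope
# under classical measurement error, when exposure is estimated as the
# mean of k repeated measurements:
#   lambda = s2_between / (s2_between + s2_within / k)

def attenuation_factor(s2_between, s2_within, k=1):
    """Classical-error attenuation factor; values near 1 mean little bias."""
    return s2_between / (s2_between + s2_within / k)

# Hypothetical components: day-to-day variance 4x the between-worker
# variance attenuates a true slope by 80% with a single measurement.
print(attenuation_factor(1.0, 4.0, k=1))  # → 0.2
print(attenuation_factor(1.0, 4.0, k=4))  # → 0.5
```

Grouping workers (e.g., by job title) plays the same role as averaging: it shrinks the effective within-group error variance and hence the expected attenuation.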

  9. AFRRI (Armed Forces Radiobiology Research Institute) Reports, July, August and September 1986

    DTIC Science & Technology

    1986-09-01

    detectors (LiF TLD 100s) on a cat phantom. The dosimetry indicated that the shoulders of the cat received an exposure of 4.6% of the total dose, while...radiographically determined outline of the precordium, and dosimetry measurements were made on one of the experimental animals. Isodose curves (Fig. 1) were...of 0.01-25 Gy/min and bi- lateral dose rates of 0.08-57 Gy/min can be ad- ministered with error bounds of ±5%. Dosimetry was done using tissue

  10. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    NASA Astrophysics Data System (ADS)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, several areas of the country have sparse monitoring, both spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ provides complete spatial coverage but is subject to systematic and random error due to model uncertainty. Because of the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is devoted to quantifying the efficacy of these models through different metrics of model performance, but evaluation is currently specific to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains; error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, leading to error quantification for each CMAQ grid cell so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross-validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data alone.
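As a toy illustration of a regionalized, non-linear performance correction in the spirit described above (the study's actual functional form and the BME step are not reproduced here; the region labels and quadratic form are assumptions):

```python
import numpy as np

def fit_regional_corrections(modeled, observed, regions, deg=2):
    """Fit, per region, a polynomial mapping CMAQ-modeled PM2.5 at monitor
    locations to the collocated observations."""
    return {r: np.polyfit(modeled[regions == r], observed[regions == r], deg)
            for r in np.unique(regions)}

def correct(modeled, regions, coeffs):
    """Apply each region's fitted correction to all grid cells in it."""
    out = np.empty_like(modeled, dtype=float)
    for r, c in coeffs.items():
        out[regions == r] = np.polyval(c, modeled[regions == r])
    return out

# Synthetic check: a region where CMAQ is biased low by 20% is corrected
# back onto the observed scale.
obs = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
cmaq = 0.8 * obs
regions = np.array(["SE"] * 5)
fixed = correct(cmaq, regions, fit_regional_corrections(cmaq, obs, regions))
```

In the full framework the corrected fields, with their per-region error estimates, would then feed the geostatistical (BME) estimation step rather than being used directly.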

  11. Predictors of occupational exposure to styrene and styrene‐7,8‐oxide in the reinforced plastics industry

    PubMed Central

    Serdar, B; Tornero‐Velez, R; Echeverria, D; Nylander‐French, L A; Kupper, L L; Rappaport, S M

    2006-01-01

    Objective To identify demographic and work-related factors that predict blood levels of styrene and styrene-7,8-oxide (SO) in the fibreglass reinforced plastics (FRP) industry. Methods Personal breathing-zone air samples and whole blood samples were collected repeatedly from 328 reinforced plastics workers in the United States between 1996 and 1999. Styrene and its major metabolite SO were measured in these samples. Multivariable linear regression analyses were applied to the subject-specific levels to explain the variation in exposure and biomarker levels. Results Exposure levels of styrene were approximately 500-fold higher than those of SO. Exposure levels of styrene and SO varied greatly among the types of products manufactured, with an 11-fold range of median air levels among categories for styrene and a 23-fold range for SO. Even after stratification by job title, median exposures of styrene and SO among laminators varied 14- and 31-fold across product categories. Furthermore, the relative proportions of exposures to styrene and SO varied among product categories. Multivariable regression analyses explained 70% and 63% of the variation in air levels of styrene and SO, respectively, and 72% and 34% of the variation in blood levels of styrene and SO, respectively. Overall, air levels of styrene and SO appear to have decreased substantially in this industry over the last 10-20 years in the US and were greatest among workers with the least seniority. Conclusions As levels of styrene and SO in air and blood varied among product categories in the FRP industry, use of job title as a surrogate for exposure can introduce unpredictable measurement errors and can confound the relation between exposure and health outcomes in epidemiology studies. 
Also, inverse relations between the intensity of exposure to styrene and SO and years on the job suggest that younger workers with little seniority are typically exposed to higher levels of styrene and SO than their coworkers. PMID:16757507

  12. Predictors of occupational exposure to styrene and styrene-7,8-oxide in the reinforced plastics industry.

    PubMed

    Serdar, B; Tornero-Velez, R; Echeverria, D; Nylander-French, L A; Kupper, L L; Rappaport, S M

    2006-10-01

    To identify demographic and work-related factors that predict blood levels of styrene and styrene-7,8-oxide (SO) in the fibreglass reinforced plastics (FRP) industry. Personal breathing-zone air samples and whole blood samples were collected repeatedly from 328 reinforced plastics workers in the United States between 1996 and 1999. Styrene and its major metabolite SO were measured in these samples. Multivariable linear regression analyses were applied to the subject-specific levels to explain the variation in exposure and biomarker levels. Exposure levels of styrene were approximately 500-fold higher than those of SO. Exposure levels of styrene and SO varied greatly among the types of products manufactured, with an 11-fold range of median air levels among categories for styrene and a 23-fold range for SO. Even after stratification by job title, median exposures of styrene and SO among laminators varied 14- and 31-fold across product categories. Furthermore, the relative proportions of exposures to styrene and SO varied among product categories. Multivariable regression analyses explained 70% and 63% of the variation in air levels of styrene and SO, respectively, and 72% and 34% of the variation in blood levels of styrene and SO, respectively. Overall, air levels of styrene and SO appear to have decreased substantially in this industry over the last 10-20 years in the US and were greatest among workers with the least seniority. As levels of styrene and SO in air and blood varied among product categories in the FRP industry, use of job title as a surrogate for exposure can introduce unpredictable measurement errors and can confound the relation between exposure and health outcomes in epidemiology studies. Also, inverse relations between the intensity of exposure to styrene and SO and years on the job suggest that younger workers with little seniority are typically exposed to higher levels of styrene and SO than their coworkers.

  13. Radiation in Yolo County

    NASA Astrophysics Data System (ADS)

    Dickie, H.; Colwell, K.

    2013-12-01

    In today's post-nuclear age, there are many man-made sources of radioactivity, in addition to the natural background we expect from cosmic and terrestrial origins. While all atoms possess unstable isotopes, there are few that are abundant enough, energetic enough, and have long enough half-lives to pose a significant risk of ionizing radiation exposure. We hypothesize a decreasing relative radiation measurement (in detected counts per minute [CPM]) at nine locations that might pose occupational or environmental hazard: 1. A supermarket produce aisle (living tissue has a high concentration of 40K) 2. A hospital (medical imaging uses X-rays and radioactive dyes) 3. The electronics section of a superstore (high voltage electronics have the potential to produce ionizing radiation) 4. An electrical transformer (similar reasons) 5. An antique store (some ceramics and glazes use radioisotopes that are now outlawed) 6. A gasoline pump (processing and terrestrial isotope contamination might leave a radioactive residue) 7. A fertilized field (phosphate rock contains uranium and thorium, in addition to potassium) 8. A house (hopefully mild background, but potential radon contamination) 9. A school (should be radiologically neutral) We tested the hypothesis by measuring 100 minutes of counts on a self-assembled MightyOhm™ Geiger counter at each location. Our results show that contrary to the hypothesized ordering, the house was the most radiologically active. We present possible explanations for the observed radiation levels, as well as possible sources of measurement error, possible consequences of prolonged exposure to the measured levels, and suggestions for decreasing exposure and environmental impact.
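The choice of 100-minute counts is what makes small CPM differences between sites meaningful. Under the usual Poisson counting model (an assumption; the abstract does not give its uncertainty analysis), the standard error of a count rate falls as the counting time grows:

```python
import math

def cpm_with_error(counts, minutes):
    """Count rate and its Poisson standard error: N/t and sqrt(N)/t."""
    return counts / minutes, math.sqrt(counts) / minutes

# Hypothetical reading: 2500 counts over a 100-minute measurement.
rate, se = cpm_with_error(2500, 100.0)
print(rate, se)  # → 25.0 0.5
```

A one-minute count at the same site would carry a standard error ten times larger, swamping the site-to-site differences the study is after.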

  14. An iOS Application for Evaluating Whole-body Vibration Within a Workplace Risk Management Process.

    PubMed

    McGlothlin, James; Burgess-Limerick, R; Lynas, D

    2015-01-01

    Workplace management of whole-body vibration exposure requires systematic collection of whole-body vibration data in conjunction with the numerous variables which influence vibration amplitudes. The cost and complexity of commercially available measurement devices is an impediment to the routine collection of such data by workplaces. An iOS application (WBV) has been developed which allows an iPod Touch to be used to measure whole-body vibration exposures. The utility of the application was demonstrated by simultaneously obtaining 98 pairs of whole-body vibration measurements from both the iPod Touch application and a commercially available whole-body vibration device during the operation of a variety of vehicles and mobile plant at a surface coal mine. The iOS application installed on a fifth-generation iPod Touch was shown to provide a 95% confidence bound of ±0.077 m/s² r.m.s. for constant error in the vertical direction. Situations in which vibration levels lay within the ISO 2631-1 health guidance caution zone were accurately identified, and the qualitative features of the frequency spectra were reproduced. The low cost and relative simplicity of the application has potential to facilitate its use as a screening tool to identify situations in which musculoskeletal disorders may arise as a consequence of exposure to whole-body vibration.
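One plausible way to obtain a constant-error figure like the ±0.077 m/s² bound above is from the paired differences between the two devices. The paper's exact statistical procedure is not stated, so the function and data below are illustrative.

```python
import statistics as st

def constant_error_ci(app, ref, z=1.96):
    """Mean difference (app - ref) across paired r.m.s. readings, with an
    approximate 95% half-width for that mean, in m/s^2."""
    diffs = [a - b for a, b in zip(app, ref)]
    half = z * st.stdev(diffs) / len(diffs) ** 0.5
    return st.mean(diffs), half

# Hypothetical paired readings (iPod app vs. reference device):
bias, half = constant_error_ci([1.00, 1.10, 0.90], [1.00, 1.00, 1.00])
```

With the study's 98 pairs, the 1/sqrt(n) factor is what drives the half-width down to the hundredths of m/s² reported.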

  15. Precision-Based Item Selection for Exposure Control in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Carroll, Ian A.

    2017-01-01

    Item exposure control is, relative to adaptive testing, a nascent concept that has emerged only in the last two to three decades on an academic basis as a practical issue in high-stakes computerized adaptive tests. This study aims to implement a new strategy in item exposure control by incorporating the standard error of the ability estimate into…

  16. High resolution multispectral photogrammetric imagery: enhancement, interpretation and evaluations

    NASA Astrophysics Data System (ADS)

    Roberts, Arthur; Haefele, Martin; Bostater, Charles; Becker, Thomas

    2007-10-01

    A variety of aerial mapping cameras were adapted and developed into simulated multiband digital photogrammetric mapping systems. Direct digital multispectral cameras, two multiband cameras (IIS 4-band and Itek 9-band), and paired mapping and reconnaissance cameras were evaluated for digital spectral performance and photogrammetric mapping accuracy in an aquatic environment. Aerial films (24 cm × 24 cm format) tested were: Agfa color negative and extended red (visible and near infrared) panchromatic; and Kodak color infrared and B&W (visible and near infrared) infrared. All films were negative processed to published standards and digitally converted at either 16 (color) or 10 (B&W) microns. Excellent precision in the digital conversions was obtained, with scanning errors of less than one micron. Radiometric data conversion was undertaken using linear density conversion and centered 8-bit histogram exposure. This resulted in multiple 8-bit spectral image bands that were unaltered (not radiometrically enhanced) "optical count" conversions of film density. This provided the best film density conversion to a digital product while retaining the original film density characteristics. Data covering water depth, water quality, surface roughness, and bottom substrate were acquired using different measurement techniques, as well as different techniques to locate sampling points on the imagery. Despite extensive efforts to obtain accurate ground truth data, location errors, measurement errors, and variations in the correlation between water depth and remotely sensed signal persisted. These errors must be considered endemic and may not be removed through even the most elaborate sampling set-up. Results indicate that multispectral photogrammetric systems offer improved feature mapping capability.

  17. Prenatal exposure to methylmercury and PCBs affects distinct stages of information processing: an event-related potential study with Inuit children.

    PubMed

    Boucher, Olivier; Bastien, Célyne H; Saint-Amour, Dave; Dewailly, Eric; Ayotte, Pierre; Jacobson, Joseph L; Jacobson, Sandra W; Muckle, Gina

    2010-08-01

    Methylmercury (MeHg) and polychlorinated biphenyls (PCBs) are seafood contaminants known for their adverse effects on neurodevelopment. This study examines the relation of developmental exposure to these contaminants to information processing assessed with event-related potentials (ERPs) in school-aged Inuit children from Nunavik (Arctic Québec). In a prospective longitudinal study of child development, exposure to contaminants was measured at birth and at 11 years of age. An auditory oddball protocol was administered at 11 years to measure the ERP components N1 and P3b. Multiple regression analyses were performed to examine the associations of contaminant levels with auditory oddball performance (mean reaction time, omission errors and false alarms) and ERP parameters (latency and amplitude) after control for potential confounding variables. A total of 118 children provided usable ERP data. Prenatal MeHg exposure was associated with slower reaction times and fewer false alarms during the oddball task. Analyses of the ERP parameters revealed that prenatal MeHg exposure was related to greater amplitude and delayed latency of the N1 wave in the target condition but not to the P3b component. MeHg effects on the N1 were stronger after control for seafood nutrients. Prenatal PCB exposure was not related to any endpoint for the sample as a whole but was associated with a decrease in P3b amplitude in the subgroup of children who had been breast-fed for less than 3 months. Body burdens of MeHg and PCBs at 11 years were not related to any of the behavioural or ERP measures. These data suggest that prenatal MeHg exposure alters attentional mechanisms modulating early processing of sensory information. By contrast, prenatal PCB exposure appears to affect information processing at later stages, when the information is being consciously evaluated. These effects seem to be mitigated in children who are breast-fed for a more extended period.

  18. Prenatal Exposure to Environmental Phenols: Concentrations in Amniotic Fluid and Variability in Urinary Concentrations during Pregnancy

    PubMed Central

    Wolff, Mary S.; Calafat, Antonia M.; Ye, Xiaoyun; Bausell, Rebecca; Meadows, Molly; Stone, Joanne; Slama, Rémy; Engel, Stephanie M.

    2013-01-01

    Background: Maternal urinary biomarkers are often used to assess fetal exposure to phenols and their precursors. Their effectiveness as a measure of exposure in epidemiological studies depends on their variability during pregnancy and their ability to accurately predict fetal exposure. Objectives: We assessed the relationship between urinary and amniotic fluid concentrations of nine environmental phenols, and the reproducibility of urinary concentrations, among pregnant women. Methods: Seventy-one women referred for amniocentesis were included. Maternal urine was collected at the time of the amniocentesis appointment and on two subsequent occasions. Urine and amniotic fluid were analyzed for 2,4- and 2,5-dichlorophenols, bisphenol A, benzophenone-3, triclosan, and methyl-, ethyl-, propyl-, and butylparabens using online solid phase extraction–high performance liquid chromatography–isotope dilution tandem mass spectrometry. Results: Only benzophenone-3 and propylparaben were detectable in more than half of the amniotic fluid samples; for these phenols, concentrations in amniotic fluid and maternal urine collected on the same day were positively correlated (ρ = 0.53 and 0.32, respectively). Other phenols were detected infrequently in amniotic fluid (e.g., bisphenol A was detected in only two samples). The intraclass correlation coefficients (ICCs) of urinary concentrations in samples from individual women ranged from 0.48 to 0.62 for all phenols except bisphenol A (ICC = 0.11). Conclusion: Amniotic fluid detection frequencies for most phenols were low. The reproducibility of urine measures was poor for bisphenol A, but good for the other phenols. Although a single sample may provide a reasonable estimate of exposure for some phenols, collecting multiple urine samples during pregnancy is an option to reduce exposure measurement error in studies of the effects of prenatal phenol exposure on health. 
Citation: Philippat C, Wolff MS, Calafat AM, Ye X, Bausell R, Meadows M, Stone J, Slama R, Engel SM. 2013. Prenatal exposure to environmental phenols: concentrations in amniotic fluid and variability in urinary concentrations during pregnancy. Environ Health Perspect 121:1225–1231; http://dx.doi.org/10.1289/ehp.1206335 PMID:23942273
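The ICC values quoted above summarize how reproducible repeated urine samples are. A common estimator is the one-way random-effects ICC from ANOVA mean squares; this is an assumption, since the paper does not state which ICC variant it used.

```python
import numpy as np

def icc_oneway(x):
    """ICC(1,1) for x of shape (n_subjects, k_repeats): the share of total
    variance attributable to between-subject differences."""
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)       # between
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfectly reproducible subjects give ICC = 1.
print(icc_oneway(np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])))  # → 1.0
```

An ICC near 0.1 (as for bisphenol A) means a single spot sample is a noisy proxy for a woman's pregnancy-average exposure, which is why the authors suggest collecting multiple samples.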

  19. Traffic-related air pollution and spectacles use in schoolchildren

    PubMed Central

    Nieuwenhuijsen, Mark J.; Basagaña, Xavier; Alvarez-Pedrerol, Mar; Dalmau-Bueno, Albert; Cirach, Marta; Rivas, Ioar; Brunekreef, Bert; Querol, Xavier; Morgan, Ian G.; Sunyer, Jordi

    2017-01-01

    Purpose To investigate the association between exposure to traffic-related air pollution and use of spectacles (as a surrogate measure for myopia) in schoolchildren. Methods We analyzed the impact of exposure to NO2 and PM2.5 light absorbance at home (predicted by land-use regression models) and exposure to NO2 and black carbon (BC) at school (measured by monitoring campaigns) on the use of spectacles in a cohort of 2727 schoolchildren (7–10 years old) in Barcelona (2012–2015). We conducted cross-sectional analyses based on lifelong exposure to air pollution and prevalent cases of spectacles use at the baseline data collection campaign, as well as longitudinal analyses based on incident cases of spectacles use and exposure to air pollution during the three-year period between the baseline and last data collection campaigns. Logistic regression models were developed to quantify the association between spectacles use and each of the air pollutants, adjusted for relevant covariates. Results An interquartile range increase in exposure to NO2 and PM2.5 absorbance at home was respectively associated with odds ratios (95% confidence intervals (CIs)) for spectacles use of 1.16 (1.03, 1.29) and 1.13 (0.99, 1.28) in cross-sectional analyses and 1.15 (1.00, 1.33) and 1.23 (1.03, 1.46) in longitudinal analyses. Similarly, the odds ratios (95% CIs) of spectacles use associated with an interquartile range increase in exposure to NO2 and black carbon at school were, respectively, 1.32 (1.09, 1.59) and 1.13 (0.97, 1.32) in cross-sectional analyses and 1.12 (0.84, 1.50) and 1.27 (1.03, 1.56) in longitudinal analyses. These findings were robust to a range of sensitivity analyses that we conducted. Conclusion We observed an increased risk of spectacles use associated with exposure to traffic-related air pollution. These findings require further confirmation by future studies applying more refined outcome measures, such as quantified visual acuity, and separating different types of refractive errors. 
PMID:28369072
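The odds ratios above are expressed per interquartile-range (IQR) increase in exposure. Converting a fitted logistic coefficient to that scale is a one-liner; the coefficient and IQR below are hypothetical, not the study's estimates.

```python
import math

def or_per_iqr(beta_per_unit, iqr):
    """Odds ratio per IQR increase, from a per-unit logistic coefficient."""
    return math.exp(beta_per_unit * iqr)

# e.g. an assumed beta of 0.0148 per ug/m3 of NO2 with an IQR of 10 ug/m3:
print(round(or_per_iqr(0.0148, 10.0), 2))  # → 1.16
```

Reporting per IQR rather than per unit makes odds ratios comparable across pollutants whose measurement scales differ.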

  20. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
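The report's modified least squares method itself is not reproduced here, but a classical estimator built on the same ingredient, the ratio of response-error variance to measurement-error variance, is Deming regression, sketched below for the simple straight-line case.

```python
import statistics as st

def deming_slope(x, y, delta=1.0):
    """Slope of y on x when both variables carry error; delta is the ratio
    of response-error variance to measurement-error variance."""
    n = len(x)
    mx, my = st.mean(x), st.mean(y)
    sxx = sum((a - mx) ** 2 for a in x) / (n - 1)
    syy = sum((b - my) ** 2 for b in y) / (n - 1)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    d = syy - delta * sxx
    return (d + (d * d + 4 * delta * sxy * sxy) ** 0.5) / (2 * sxy)

# On error-free collinear data, any delta recovers the true slope:
print(round(deming_slope([1, 2, 3, 4], [2, 4, 6, 8], delta=0.5), 6))  # → 2.0
```

Ordinary least squares is the large-delta limit (all error in the response); a finite variance ratio pulls the slope estimate up to compensate for attenuation caused by error in the factors.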

  1. Hypertension and Exposure to Noise near Airports (HYENA): study design and noise exposure assessment.

    PubMed

    Jarup, Lars; Dudley, Marie-Louise; Babisch, Wolfgang; Houthuijs, Danny; Swart, Wim; Pershagen, Göran; Bluhm, Gösta; Katsouyanni, Klea; Velonakis, Manolis; Cadum, Ennio; Vigna-Taglianti, Federica

    2005-11-01

    An increasing number of people live near airports with considerable noise and air pollution. The Hypertension and Exposure to Noise near Airports (HYENA) project aims to assess the impact of airport-related noise exposure on blood pressure (BP) and cardiovascular disease using a cross-sectional study design. We selected 6,000 persons (45-70 years of age) who had lived at least 5 years near one of six major European airports. We used modeled aircraft noise contours, aiming to maximize exposure contrast. Automated BP instruments are used to reduce observer error. We designed a standardized questionnaire to collect data on annoyance, noise disturbance, and major confounders. Cortisol in saliva was collected in a subsample of the study population (n = 500) stratified by noise exposure level. To investigate short-term noise effects on BP and possible effects on nighttime BP dipping, we measured 24-hr BP and assessed continuous night noise in another subsample (n = 200). To ensure comparability between countries, we used common noise models to assess individual noise exposure, with a resolution of 1 dB(A). Modifiers of individual exposure, such as the orientation of living rooms and bedrooms toward roads, window-opening habits, and sound insulation, were assessed by the questionnaire. For four airports, we estimated exposure to air pollution to explore modifying effects of air pollution on cardiovascular disease. The project assesses exposure to traffic-related air pollutants, primarily using data from another project funded by the European Union (APMoSPHERE, Air Pollution Modelling for Support to Policy on Health and Environmental Risks in Europe).

  2. Positional error and time-activity patterns in near-highway proximity studies: an exposure misclassification analysis

    PubMed Central

    2013-01-01

    Background: The growing interest in research on the health effects of near-highway air pollutants requires an assessment of potential sources of error in exposure assignment techniques that rely on residential proximity to roadways. Methods: We compared the amount of positional error in the geocoding process for three different data sources (parcels, TIGER and StreetMap USA) to a “gold standard” residential geocoding process that used ortho-photos, large multi-building parcel layouts or large multi-unit building floor plans. The potential effect of positional error for each geocoding method was assessed as part of a proximity to highway epidemiological study in the Boston area, using all participants with complete address information (N = 703). Hourly time-activity data for the most recent workday/weekday and non-workday/weekend were collected to examine time spent in five different micro-environments (inside of home, outside of home, school/work, travel on highway, and other). Analysis included examination of whether time-activity patterns were differentially distributed either by proximity to highway or across demographic groups. Results: Median positional error was significantly higher in street network geocoding (StreetMap USA = 23 m; TIGER = 22 m) than parcel geocoding (8 m). When restricted to multi-building parcels and large multi-unit building parcels, all three geocoding methods had substantial positional error (parcels = 24 m; StreetMap USA = 28 m; TIGER = 37 m). Street network geocoding also differentially introduced greater amounts of positional error in the proximity to highway study in the 0–50 m proximity category. Time spent inside home on workdays/weekdays differed significantly by demographic variables (age, employment status, educational attainment, income and race). Time-activity patterns were also significantly different when stratified by proximity to highway, with those participants residing in the 0–50 m proximity category reporting significantly more time in the school/work micro-environment on workdays/weekdays than all other distance groups. Conclusions: These findings indicate the potential for both differential and non-differential exposure misclassification due to geocoding error and time-activity patterns in studies of highway proximity. We also propose a multi-stage manual correction process to minimize positional error. Additional research is needed in other populations and geographic settings. PMID:24010639
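    The core positional-error comparison in this record reduces to computing the great-circle distance between each method's geocoded point and the gold-standard location, then summarizing per method (e.g., the median). The sketch below shows this under stated assumptions: the coordinates, helper names, and resulting distances are illustrative, not data from the study.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Hypothetical pairs: (gold-standard lat/lon, geocoded lat/lon) for one method
records = [
    ((42.3601, -71.0589), (42.3603, -71.0591)),
    ((42.3755, -71.0995), (42.3757, -71.0990)),
    ((42.3499, -71.0782), (42.3495, -71.0782)),
]
errors = [haversine_m(g[0], g[1], e[0], e[1]) for g, e in records]
median_error_m = median(errors)  # summary statistic reported per geocoding method
```

    Repeating this per data source (parcels, TIGER, StreetMap USA) and per proximity category yields the kind of method-by-method comparison the record reports.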

  3. Positional error and time-activity patterns in near-highway proximity studies: an exposure misclassification analysis.

    PubMed

    Lane, Kevin J; Kangsen Scammell, Madeleine; Levy, Jonathan I; Fuller, Christina H; Parambi, Ron; Zamore, Wig; Mwamburi, Mkaya; Brugge, Doug

    2013-09-08

    The growing interest in research on the health effects of near-highway air pollutants requires an assessment of potential sources of error in exposure assignment techniques that rely on residential proximity to roadways. We compared the amount of positional error in the geocoding process for three different data sources (parcels, TIGER and StreetMap USA) to a "gold standard" residential geocoding process that used ortho-photos, large multi-building parcel layouts or large multi-unit building floor plans. The potential effect of positional error for each geocoding method was assessed as part of a proximity to highway epidemiological study in the Boston area, using all participants with complete address information (N = 703). Hourly time-activity data for the most recent workday/weekday and non-workday/weekend were collected to examine time spent in five different micro-environments (inside of home, outside of home, school/work, travel on highway, and other). Analysis included examination of whether time-activity patterns were differentially distributed either by proximity to highway or across demographic groups. Median positional error was significantly higher in street network geocoding (StreetMap USA = 23 m; TIGER = 22 m) than parcel geocoding (8 m). When restricted to multi-building parcels and large multi-unit building parcels, all three geocoding methods had substantial positional error (parcels = 24 m; StreetMap USA = 28 m; TIGER = 37 m). Street network geocoding also differentially introduced greater amounts of positional error in the proximity to highway study in the 0-50 m proximity category. Time spent inside home on workdays/weekdays differed significantly by demographic variables (age, employment status, educational attainment, income and race). 
Time-activity patterns were also significantly different when stratified by proximity to highway, with those participants residing in the 0-50 m proximity category reporting significantly more time in the school/work micro-environment on workdays/weekdays than all other distance groups. These findings indicate the potential for both differential and non-differential exposure misclassification due to geocoding error and time-activity patterns in studies of highway proximity. We also propose a multi-stage manual correction process to minimize positional error. Additional research is needed in other populations and geographic settings.

  4. How Old is Cone Crater at the Apollo 14 Landing Site?

    NASA Astrophysics Data System (ADS)

    Hiesinger, Harald; Simon, Ina; van der Bogert, Carolyn H.; Robinson, Mark S.; Plescia, Jeff B.

    2015-04-01

    The Lunar Reconnaissance Orbiter (LRO) Narrow Angle Cameras (NAC) provide new opportunities to investigate crater size-frequency distributions (CSFDs) on individual geological units at key lunar impact craters. We performed new CSFD measurements for the Copernican-aged Cone crater at the Apollo 14 landing site because it is an anchor point for the lunar cratering chronology at young ages [1-4]. Cone crater (340 m diameter) is located about 1100 m NE of the Apollo 14 landing site on a 90 m high ridge of the Fra Mauro Formation, and exhibits a sharp rim [e.g., 5,6,7]. Samples from Cone crater were collected from four stations (Dg, C1, C2, C') during the Apollo 14 mission [7]. Exposure ages of those samples were used to date the formation of Cone crater. Although there is a considerable range of exposure ages (~12 Ma [8] to ~661 Ma [9]), several studies of Cone crater samples indicate an age of ~25-26 Ma [e.g., 2,10,11]. On the basis of our CSFD measurements we determined an absolute model age (AMA) for Cone crater of ~39 Ma, which is in the range of model ages derived by previous CSFD measurements that vary between ~24 Ma [12] and ~73 Ma [13]. However, we found a wide spread of model ages ranging from ~16 to ~82 Ma for individual areas on the crater ejecta blanket. Like [13], we find that the CSFD measurements on LROC images yield older AMAs than previous CSFDs [e.g., 12]. However, our results are closer to the older CSFDs than to those of [13] and are just within the error bars of [14]. Our derived N(1) = 3.26 x 10^-5 km^-2 is almost identical to the N(1) = 3.36 x 10^-5 km^-2 of [15]. Comparing the CSFD results to exposure ages of the returned samples, we find somewhat older ages. However, at least two of our count areas produce AMAs that are within the error bars of the exposure ages [e.g., 10]. Six other areas show ages that are within two standard deviations of the exposure ages [e.g., 10]. 
For two count areas that were directly sampled, we obtained ages that are 10 and 23 Ma older than the exposure ages [e.g., 10]. We find that CSFD measurements performed on the ejecta blanket of Cone crater yield AMAs that agree well with the exposure ages, considering the relatively small count areas and the hummocky nature of the ejecta blanket. However, the AMAs are generally older than the exposure ages, which may be due to the small count area sizes [16], a possibly higher recent impact rate [17], some unidentified secondary craters [13], poor calibration of the production function, or inaccurate exposure ages. [1] Hiesinger et al. (2012) J. Geophys. Res. 117. [2] Stöffler and Ryder (2001) Chronology and Evolution of Mars. [3] Neukum (1983) Habil. thesis, U. of Munich. [4] Neukum et al. (2001) Space Sci. Rev. 96. [5] Swann et al. (1971) Apollo 14 Prelim. Sci. Rep. [6] Carlson (1978) NASA STI/Recon Technical Report. [7] Swann (1977) Washington US Govt. Print. Off. [8] Bhandari et al. (1972) Proc. Lunar Planet. Sci. Conf. 3. [9] Crozaz et al. (1972) Proc. Lunar Planet. Sci. Conf. 3. [10] Arvidson et al. (1975) Moon 13. [11] Stadermann et al. (1991) Geochim. Cosmochim. Acta 55. [12] Moore et al. (1980) Moon and Planets 23. [13] Plescia and Robinson (2011) LPSC 42. [14] Williams et al. (2014) Icarus 235. [15] Robbins (2014) Earth Planet. Sci. Lett. 403. [16] van der Bogert et al. (2015) LPSC 46. [17] McEwen et al. (2015) LPSC 46.

  5. Cleveland Clinic intelligent mouthguard: a new technology to accurately measure head impact in athletes and soldiers

    NASA Astrophysics Data System (ADS)

    Bartsch, Adam; Samorezov, Sergey

    2013-05-01

    Nearly 2 million Traumatic Brain Injuries (TBI) occur in the U.S. each year, with societal costs approaching $60 billion. Including mild TBI and concussion, TBIs are prevalent in soldiers returning from Iraq and Afghanistan as well as in domestic athletes. Long-term risks of single and cumulative head impact dosage may present in the form of post-traumatic stress disorder (PTSD), depression, suicide, Chronic Traumatic Encephalopathy (CTE), dementia, and Alzheimer's and Parkinson's diseases. Quantifying head impact dosage and understanding associated risk factors for the development of long-term sequelae are critical toward developing guidelines for TBI exposure and post-exposure management. The current knowledge gap between head impact exposure and clinical outcomes limits the understanding of underlying TBI mechanisms, including effective treatment protocols and prevention methods for soldiers and athletes. In order to begin addressing this knowledge gap, Cleveland Clinic is developing the "Intelligent Mouthguard" head impact dosimeter. Current testing indicates the Intelligent Mouthguard can quantify linear acceleration with 3% error and angular acceleration with 17% error during impacts ranging from 10 g to 174 g and 850 rad/s2 to 10,000 rad/s2, respectively. Correlation was high (R2 > 0.99 and R2 = 0.98, respectively). Near-term development will be geared toward quantifying head impact dosages in vitro and longitudinally in athletes, and toward testing new sensors for possible improved accuracy and reduced bias. In the long term, the IMG may be paired with neurocognitive clinical data to quantify resultant TBI functional deficits in soldiers.
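    As a rough illustration of how accuracy figures of this kind are derived, the sketch below compares hypothetical paired peak accelerations from a reference rig and a wearable sensor, computing a mean absolute percent error and an R² for agreement with the identity line. All numbers are invented for illustration and are not the study's data.

```python
# Hypothetical paired impacts: reference rig vs. sensor peak linear acceleration (g)
reference = [10.0, 25.0, 50.0, 75.0, 100.0, 150.0, 174.0]
measured = [10.2, 24.5, 51.6, 73.1, 103.0, 146.0, 178.9]

# Mean absolute percent error across impacts
mape = 100 * sum(abs(m - r) / r for r, m in zip(reference, measured)) / len(reference)

# R^2 for agreement of measured values with the identity line (measured == reference)
mean_ref = sum(reference) / len(reference)
ss_res = sum((m - r) ** 2 for r, m in zip(reference, measured))
ss_tot = sum((r - mean_ref) ** 2 for r in reference)
r_squared = 1 - ss_res / ss_tot
```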

  6. Determining optimum wavelength of ultraviolet rays to pre-exposure of non-uniformity error correction in Gafchromic EBT2 films

    NASA Astrophysics Data System (ADS)

    Katsuda, Toshizo; Gotanda, Rumi; Gotanda, Tatsuhiro; Akagawa, Takuya; Tanki, Nobuyoshi; Kuwano, Tadao; Noguchi, Atsushi; Yabunaka, Kouichi

    2018-03-01

    Gafchromic films have been used to measure x-ray doses in diagnostic radiology, such as in computed tomography. The double-exposure technique is used to correct the non-uniformity error of Gafchromic EBT2 films. Because of the heel effect of diagnostic x-ray tubes, ultraviolet A (UV-A) is intended as a substitute for x-rays. When using a UV-A light-emitting diode (LED), it is necessary to determine the optimal UV wavelength for the active layer of Gafchromic EBT2 films. This study evaluated the relationship between the increase in color density of Gafchromic EBT2 films and the UV wavelength. First, to correct non-uniformity, a Gafchromic EBT2 film was pre-irradiated using uniform UV-A radiation for 60 min from a 72-cm distance. Second, the film was irradiated using a UV-LED with a wavelength of 353-410 nm for 60 min from a 5.3-cm distance. The maximum, minimum, and mean ± standard deviation (SD) of the pixel values of the subtraction images were evaluated using a 0.5-inch circular region of interest (ROI). The highest mean ± SD (8915.25 ± 608.86) of the pixel value was obtained at a wavelength of 375 nm. The results indicated that 375 nm is the most effective and sensitive wavelength of UV-A for Gafchromic EBT2 films and that UV-A can be used as a substitute for x-rays in the double-exposure technique.
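    The ROI statistics reported here (maximum, minimum, and mean ± SD of subtraction-image pixel values) can be sketched as below. The image arrays, pixel values, and ROI geometry in pixels are stand-ins, not the study's data.

```python
import statistics

def circular_roi_stats(image, cx, cy, radius):
    """Mean, SD, min, and max of pixel values inside a circular ROI."""
    vals = [
        image[y][x]
        for y in range(len(image))
        for x in range(len(image[0]))
        if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
    ]
    return statistics.mean(vals), statistics.stdev(vals), min(vals), max(vals)

# Hypothetical pre- and post-irradiation scans (21 x 21 pixel arrays)
pre = [[1000 + (x + y) % 7 for x in range(21)] for y in range(21)]
post = [[9900 + (x * y) % 11 for x in range(21)] for y in range(21)]
diff = [[b - a for a, b in zip(r1, r2)] for r1, r2 in zip(pre, post)]
mean, sd, lo, hi = circular_roi_stats(diff, cx=10, cy=10, radius=6)
```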

  7. Cross Section Sensitivity and Propagated Errors in HZE Exposures

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.

    2005-01-01

    It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials, resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials results not so much from stopping such particles as from changing their physical character in interaction with shielding material nuclei, forming, hopefully, less dangerous species. Clearly, the fidelity of the nuclear cross-sections is essential to correct specification of shield design, and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and databases. We examine the Boltzmann transport equation, which is used to calculate dose equivalent during solar minimum, with units (cSv/yr), associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions and heavy ions. We investigate the sensitivity of dose equivalent calculations to errors in nuclear fragmentation cross-sections. We do this error analysis for all possible projectile-fragment combinations (14,365 such combinations) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross-sections. Numerical differentiation with respect to the cross-sections will be evaluated in a broad class of materials including polyethylene, aluminum and copper. We will identify the most important cross-sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.
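    The numerical-differentiation step can be illustrated with a central finite difference over a toy dose function. The function form, cross-section values, and step size below are hypothetical stand-ins for the actual Boltzmann transport calculation; only the differentiation-and-ranking pattern is the point.

```python
# Toy stand-in for the transport code: dose equivalent (cSv/yr) as a function of a
# vector of fragmentation cross-sections. Function form and values are hypothetical.
def dose_equivalent(sigma):
    return 10.0 + 2.0 * sigma[0] ** 0.5 + 0.3 * sigma[1] + 0.01 * sigma[2] ** 2

def sensitivities(f, x, rel_step=1e-4):
    """Central finite-difference derivative of f with respect to each component of x."""
    grads = []
    for i in range(len(x)):
        h = rel_step * abs(x[i]) or rel_step  # fall back to an absolute step at zero
        up, dn = list(x), list(x)
        up[i] += h
        dn[i] -= h
        grads.append((f(up) - f(dn)) / (2 * h))
    return grads

sigma0 = [4.0, 1.5, 2.0]  # hypothetical cross-section values
grads = sensitivities(dose_equivalent, sigma0)
# Rank cross-sections by |dDose/dsigma| to flag candidates for experimental study
ranked = sorted(range(len(grads)), key=lambda i: -abs(grads[i]))
```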

  8. Design of an Air Pollution Monitoring Campaign in Beijing for Application to Cohort Health Studies.

    PubMed

    Vedal, Sverre; Han, Bin; Xu, Jia; Szpiro, Adam; Bai, Zhipeng

    2017-12-15

    No cohort studies in China on the health effects of long-term air pollution exposure have employed exposure estimates at the fine spatial scales desirable for cohort studies with individual-level health outcome data. Here we assess an array of modern air pollution exposure estimation approaches for assigning within-city exposure estimates in Beijing for individual pollutants and pollutant sources to individual members of a cohort. Issues considered in selecting specific monitoring data or new monitoring campaigns include: needed spatial resolution, exposure measurement error and its impact on health effect estimates, spatial alignment and compatibility with the cohort, and feasibility and expense. Sources of existing data largely include administrative monitoring data, predictions from air dispersion or chemical transport models and remote sensing (specifically satellite) data. New air monitoring campaigns include additional fixed site monitoring, snapshot monitoring, passive badge or micro-sensor saturation monitoring and mobile monitoring, as well as combinations of these. Each of these has relative advantages and disadvantages. It is concluded that a campaign in Beijing that at least includes a mobile monitoring component, when coupled with currently available spatio-temporal modeling methods, should be strongly considered. Such a campaign is economical and capable of providing the desired fine-scale spatial resolution for pollutants and sources.

  9. Design of an Air Pollution Monitoring Campaign in Beijing for Application to Cohort Health Studies

    PubMed Central

    Vedal, Sverre; Han, Bin; Szpiro, Adam; Bai, Zhipeng

    2017-01-01

    No cohort studies in China on the health effects of long-term air pollution exposure have employed exposure estimates at the fine spatial scales desirable for cohort studies with individual-level health outcome data. Here we assess an array of modern air pollution exposure estimation approaches for assigning within-city exposure estimates in Beijing for individual pollutants and pollutant sources to individual members of a cohort. Issues considered in selecting specific monitoring data or new monitoring campaigns include: needed spatial resolution, exposure measurement error and its impact on health effect estimates, spatial alignment and compatibility with the cohort, and feasibility and expense. Sources of existing data largely include administrative monitoring data, predictions from air dispersion or chemical transport models and remote sensing (specifically satellite) data. New air monitoring campaigns include additional fixed site monitoring, snapshot monitoring, passive badge or micro-sensor saturation monitoring and mobile monitoring, as well as combinations of these. Each of these has relative advantages and disadvantages. It is concluded that a campaign in Beijing that at least includes a mobile monitoring component, when coupled with currently available spatio-temporal modeling methods, should be strongly considered. Such a campaign is economical and capable of providing the desired fine-scale spatial resolution for pollutants and sources. PMID:29244738

  10. Exposed and Embedded Corrections in Aphasia Therapy: Issues of Voice and Identity

    ERIC Educational Resources Information Center

    Simmons-Mackie, Nina; Damico, Jack S.

    2008-01-01

    Background: Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially…

  11. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform serves as the air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may help provide relatively accurate air temperature measurements.
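    The MAE and RMSE comparison between corrected and reference temperatures reduces to a few lines. The paired readings below are invented for illustration; only the two error formulas are the point.

```python
import math

# Hypothetical paired readings: corrected sensor output vs. reference temperature (deg C)
reference = [20.00, 22.50, 25.10, 27.80, 30.40]
corrected = [20.01, 22.49, 25.12, 27.79, 30.41]

residuals = [c - r for c, r in zip(corrected, reference)]
mae = sum(abs(e) for e in residuals) / len(residuals)             # mean absolute error
rmse = math.sqrt(sum(e * e for e in residuals) / len(residuals))  # root mean square error
```

    RMSE weights large residuals more heavily than MAE, so RMSE >= MAE always holds; reporting both, as this record does, indicates whether a correction leaves any large outlying errors.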

  12. Screen Time and Sleep among School-Aged Children and Adolescents: A Systematic Literature Review

    PubMed Central

    Hale, Lauren; Guan, Stanford

    2015-01-01

    Summary We systematically examined and updated the scientific literature on the association between screen time (e.g., television, computers, video games, and mobile devices) and sleep outcomes among school-aged children and adolescents. We reviewed 67 studies published from 1999 to early 2014. We found that screen time is adversely associated with sleep outcomes (primarily shortened duration and delayed timing) in 90% of studies. Some of the results varied by type of screen exposure, age of participant, gender, and day of the week. While the evidence regarding the association between screen time and sleep is consistent, we discuss limitations of the current studies: 1.) causal association not confirmed; 2.) measurement error (of both screen time exposure and sleep measures); 3.) limited data on simultaneous use of multiple screens, characteristics and content of screens used. Youth should be advised to limit or reduce screen time exposure, especially before or during bedtime hours to minimize any harmful effects of screen time on sleep and well-being. Future research should better account for the methodological limitations of the extant studies, and seek to better understand the magnitude and mechanisms of the association. These steps will help the development and implementation of policies or interventions related to screen time among youth. PMID:25193149

  13. Calibration of GafChromic XR-RV3 radiochromic film for skin dose measurement using standardized x-ray spectra and a commercial flatbed scanner.

    PubMed

    McCabe, Bradley P; Speidel, Michael A; Pike, Tina L; Van Lysel, Michael S

    2011-04-01

    In this study, newly formulated XR-RV3 GafChromic film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. 
The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter with an error less than the uncertainty of the calibration in most cases.

  14. Calibration of visually guided reaching is driven by error-corrective learning and internal dynamics.

    PubMed

    Cheng, Sen; Sabes, Philip N

    2007-04-01

    The sensorimotor calibration of visually guided reaching changes on a trial-to-trial basis in response to random shifts in the visual feedback of the hand. We show that a simple linear dynamical system is sufficient to model the dynamics of this adaptive process. In this model, an internal variable represents the current state of sensorimotor calibration. Changes in this state are driven by error feedback signals, which consist of the visually perceived reach error, the artificial shift in visual feedback, or both. Subjects correct for ≥20% of the error observed on each movement, despite being unaware of the visual shift. The state of adaptation is also driven by internal dynamics, consisting of a decay back to a baseline state and a "state noise" process. State noise includes any source of variability that directly affects the state of adaptation, such as variability in sensory feedback processing, the computations that drive learning, or the maintenance of the state. This noise is accumulated in the state across trials, creating temporal correlations in the sequence of reach errors. These correlations allow us to distinguish state noise from sensorimotor performance noise, which arises independently on each trial from random fluctuations in the sensorimotor pathway. We show that these two noise sources contribute comparably to the overall magnitude of movement variability. Finally, the dynamics of adaptation measured with random feedback shifts generalizes to the case of constant feedback shifts, allowing for a direct comparison of our results with more traditional blocked-exposure experiments.
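    The linear dynamical system described here can be simulated directly: a hidden calibration state decays toward baseline, is driven by error feedback, and accumulates state noise, while performance noise perturbs each reach independently. The parameter values below (retention, learning rate, noise SDs) are illustrative assumptions, not fitted values from the study.

```python
import random

random.seed(0)
a, b = 0.98, -0.25        # state retention (decay toward baseline) and learning rate
state_noise_sd = 0.3      # noise that accumulates in the state across trials
perf_noise_sd = 1.0       # independent sensorimotor performance noise each trial

x = 0.0                   # current state of sensorimotor calibration (mm)
reach_errors = []
for t in range(200):
    shift = random.gauss(0, 5)  # random shift in visual feedback of the hand (mm)
    reach_error = x + shift + random.gauss(0, perf_noise_sd)
    reach_errors.append(reach_error)
    # error feedback drives the state update; state noise accumulates across trials
    x = a * x + b * reach_error + random.gauss(0, state_noise_sd)
```

    Because state noise is carried forward through x while performance noise is not, the simulated reach-error sequence shows the temporal correlations that the record uses to separate the two noise sources.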

  15. Associations between personal exposures and ambient concentrations of nitrogen dioxide: A quantitative research synthesis

    NASA Astrophysics Data System (ADS)

    Meng, Q. Y.; Svendsgaard, D.; Kotchmar, D. J.; Pinto, J. P.

    2012-09-01

    Although positive associations between ambient NO2 concentrations and personal exposures have generally been found by exposure studies, the strength of the associations varied among studies. Differences in results could be related to differences in study design and in exposure factors. However, the effects of study design, exposure factors, and sampling and measurement errors on the strength of the personal-ambient associations have not been evaluated quantitatively in a systematic manner. A quantitative research synthesis was conducted to examine these issues based on peer-reviewed publications in the past 30 years. Factors affecting the strength of the personal-ambient associations across the studies were also examined with meta-regression. Ambient NO2 was found to be significantly associated with personal NO2 exposures, with estimates of 0.42, 0.16, and 0.72 for overall pooled, longitudinal and daily average correlation coefficients based on random-effects meta-analysis. This conclusion was robust after correction for publication bias with correlation coefficients of 0.37, 0.16 and 0.45. We found that season and some population characteristics, such as pre-existing disease, were significant factors affecting the strength of the personal-ambient associations. More meaningful and rigorous comparisons would be possible if greater detail were published on the study design (e.g. local and indoor sources, housing characteristics, etc.) and data quality (e.g., detection limits and percent of data above detection limits).
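    A random-effects pooling of correlation coefficients of the kind reported here is typically done on Fisher z-transformed values with a DerSimonian-Laird estimate of between-study variance. The per-study correlations and sample sizes below are invented for illustration; only the pooling procedure is the point.

```python
import math

# Hypothetical per-study (correlation, sample size) pairs for personal vs. ambient NO2
studies = [(0.55, 40), (0.30, 120), (0.48, 60), (0.25, 200)]

# Fisher z-transform each correlation; within-study variance is approximately 1/(n-3)
zs = [0.5 * math.log((1 + r) / (1 - r)) for r, n in studies]
vs = [1.0 / (n - 3) for r, n in studies]

# DerSimonian-Laird estimate of the between-study variance tau^2
w = [1.0 / v for v in vs]
z_fe = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
q = sum(wi * (zi - z_fe) ** 2 for wi, zi in zip(w, zs))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(zs) - 1)) / c)

# Random-effects pooled z, back-transformed to a correlation coefficient
w_re = [1.0 / (v + tau2) for v in vs]
z_re = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
r_pooled = math.tanh(z_re)
```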

  16. Empirical constrained Bayes predictors accounting for non-detects among repeated measures.

    PubMed

    Moore, Reneé H; Lyles, Robert H; Manatunga, Amita K

    2010-11-10

    When the prediction of subject-specific random effects is of interest, constrained Bayes predictors (CB) have been shown to reduce the shrinkage of the widely accepted Bayes predictor while still maintaining desirable properties, such as optimizing mean-square error subsequent to matching the first two moments of the random effects of interest. However, occupational exposure and other epidemiologic (e.g. HIV) studies often present a further challenge because data may fall below the measuring instrument's limit of detection. Although methodology exists in the literature to compute Bayes estimates in the presence of non-detects (Bayes(ND)), CB methodology has not been proposed in this setting. By combining methodologies for computing CBs and Bayes(ND), we introduce two novel CBs that accommodate an arbitrary number of observable and non-detectable measurements per subject. Based on application to real data sets (e.g. occupational exposure, HIV RNA) and simulation studies, these CB predictors are markedly superior to the Bayes predictor and to alternative predictors computed using ad hoc methods in terms of meeting the goal of matching the first two moments of the true random effects distribution. Copyright © 2010 John Wiley & Sons, Ltd.

  17. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    NASA Astrophysics Data System (ADS)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.

  18. The measurement of radiation dose profiles for electron-beam computed tomography using film dosimetry.

    PubMed

    Zink, F E; McCollough, C H

    1994-08-01

    The unique geometry of electron-beam CT (EBCT) scanners produces radiation dose profiles with widths which can be considerably different from the corresponding nominal scan width. Additionally, EBCT scanners produce both complex (multiple-slice) and narrow (3 mm) radiation profiles. This work describes the measurement of the axial dose distribution from EBCT within a scattering phantom using film dosimetry methods, which offer increased convenience and spatial resolution compared to thermoluminescent dosimetry (TLD) techniques. Therapy localization film was cut into 8 x 220 mm strips and placed within specially constructed light-tight holders for placement within the cavities of a CT Dose Index (CTDI) phantom. The film was calibrated using a conventional overhead x-ray tube with spectral characteristics matched to the EBCT scanner (130 kVp, 10 mm A1 HVL). The films were digitized at five samples per mm and calibrated dose profiles plotted as a function of z-axis position. Errors due to angle-of-incidence and beam hardening were estimated to be less than 5% and 10%, respectively. The integral exposure under film dose profiles agreed with ion-chamber measurements to within 15%. Exposures measured along the radiation profile differed from TLD measurements by an average of 5%. The film technique provided acceptable accuracy and convenience in comparison to conventional TLD methods, and allowed high spatial-resolution measurement of EBCT radiation dose profiles.

  19. An Interlaboratory Comparison of Dosimetry for a Multi-institutional Radiobiological

    PubMed Central

    Seed, TM; Xiao, S; Manley, N; Nikolich-Zugich, J; Pugh, J; van den Brink, M; Hirabayashi, Y; Yasutomo, K; Iwama, A; Koyasu, S; Shterev, I; Sempowski, G; Macchiarini, F; Nakachi, K; Kunugi, KC; Hammer, CG; DeWerd, LA

    2016-01-01

    Purpose: An interlaboratory comparison of radiation dosimetry was conducted to determine the accuracy of doses being used experimentally for animal exposures within a large multi-institutional research project. The background and approach to this effort are described and discussed in terms of basic findings, problems and solutions. Methods: Dosimetry tests were carried out utilizing optically stimulated luminescence (OSL) dosimeters embedded midline into mouse carcasses and thermal luminescence dosimeters (TLD) embedded midline into acrylic phantoms. Results: The effort demonstrated that the majority (4/7) of the laboratories were able to deliver sufficiently accurate exposures, with maximum dosing errors of ≤5%. Comparable rates of ‘dosimetric compliance’ were noted between OSL- and TLD-based tests. Data analysis showed a highly linear relationship between ‘measured’ and ‘target’ doses, with errors falling largely between 0–20%. Outliers were most notable for OSL-based tests, while multiple tests by ‘non-compliant’ laboratories using orthovoltage x-rays contributed heavily to the wide variation in dosing errors. Conclusions: For the dosimetrically non-compliant laboratories, the relatively high rates of dosing errors were problematic, potentially compromising the quality of ongoing radiobiological research. This dosimetry effort proved to be instructive in establishing rigorous reviews of basic dosimetry protocols ensuring that dosing errors were minimized. PMID:26857121

  20. Occurrence of refractive errors among students who before the age of two grew up under the influence of light emitted by incandescent or fluorescent lamps.

    PubMed

    Czepita, Damian; Gosławski, Wojciech; Mojsa, Artur

    2005-01-01

    The aim of the study was to determine whether the development of refractive errors could be associated with exposure to light emitted by incandescent or fluorescent lamps. 3636 students were examined (1638 boys and 1998 girls, aged 6-18 years, mean age 12.1, SD 3.4). The examination included skiascopy with cycloplegia. Myopia was defined as refractive error ≤ -0.5 D, hyperopia as refractive error ≥ +1.5 D, astigmatism as refractive error > 0.5 DC. Anisometropia was diagnosed when the difference in the refraction of the two eyes was > 1.0 D. The parents of all the students examined completed a questionnaire on the child's light exposure before the age of two. Data were analyzed statistically with the chi-squared test. P values of less than 0.05 were considered statistically significant. It was observed that sleeping until the age of two in a room with a light turned on is associated with an increase in the occurrence of anisometropia (p < 0.02) as well as with a reduction in the prevalence of emmetropia (p < 0.05). It was also found that light emitted by fluorescent lamps leads to more frequent occurrence of astigmatism (p < 0.01).
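The study's diagnostic cut-offs and chi-squared analysis can be sketched directly. The spherical classification below uses the thresholds as stated (astigmatism and anisometropia are omitted for brevity), while the 2x2 contingency counts are invented for illustration.

```python
# Sketch of the refraction classification rules and a Pearson chi-squared
# test for a 2x2 table (counts are hypothetical, not the study's data).

def classify(sph_eq):
    """Classify a spherical-equivalent refraction in dioptres."""
    if sph_eq <= -0.5:
        return "myopia"
    if sph_eq >= 1.5:
        return "hyperopia"
    return "emmetropia"

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]].
    Assumes no empty row or column (otherwise the denominator is zero)."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

print(classify(-2.25))  # myopia
# hypothetical counts: light-on vs light-off sleepers by anisometropia yes/no
stat = chi2_2x2(30, 470, 15, 485)
print(stat > 3.84)  # 3.84 is the 0.05 critical value at 1 df
```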

  1. A Comparison Study of Item Exposure Control Strategies in MCAT

    ERIC Educational Resources Information Center

    Mao, Xiuzhen; Ozdemir, Burhanettin; Wang, Yating; Xiu, Tao

    2016-01-01

    Four item selection indexes with and without exposure control are evaluated and compared in multidimensional computerized adaptive testing (CAT). The four item selection indices are D-optimality, Posterior expectation Kullback-Leibler information (KLP), the minimized error variance of the linear combination score with equal weight (V1), and the…

  2. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  3. Accuracy of digital and analogue cephalometric measurements assessed with the sandwich technique.

    PubMed

    Santoro, Margherita; Jarjoura, Karim; Cangialosi, Thomas J

    2006-03-01

    The purpose of the study was to evaluate the accuracy of cephalometric measurements obtained with digital tracing software compared with equivalent hand-traced measurements. In the sandwich technique, a storage phosphor plate and a conventional radiographic film are placed in the same cassette and exposed simultaneously. The method eliminates positioning errors and potential differences associated with multiple radiographic exposures that affected previous studies. It was used to ensure the equivalence of the digital images to the hard copy radiographs. Cephalometric measurements instead of landmarks were the focus of this investigation in order to acquire data with direct clinical applications. The sample consisted of digital and analog radiographic images from 47 patients after orthodontic treatment. Nine cephalometric landmarks were identified and 13 measurements calculated by 1 operator, both manually and with digital tracing software. Measurement error was assessed for each method by duplicating measurements of 25 randomly selected radiographs and by using Pearson's correlation coefficient. A paired t test was used to detect differences between the manual and digital methods. An overall greater variability in the digital cephalometric measurements was found. Differences between the 2 methods for SNA, ANB, S-Go:N-Me, U1/L1, L1-GoGn, and N-ANS:ANS-Me were statistically significant (P < .05). However, only the U1/L1 and S-Go:N-Me measurements showed differences greater than 2 SE (P < .0001). The 2 tracing methods provide similar clinical results; therefore, efficient digital cephalometric software can be reliably chosen as a routine diagnostic tool. The user-friendly sandwich technique was effective as an option for interoffice communications.

  4. "Bad Luck Mutations": DNA Mutations Are not the Whole Answer to Understanding Cancer Risk.

    PubMed

    Trosko, James E; Carruba, Giuseppe

    2017-01-01

    It has been proposed that many human cancers are generated by intrinsic mechanisms that produce "Bad Luck" mutations during the proliferation of organ-specific adult stem cells. There have been serious challenges to this interpretation, including multiple extrinsic factors thought to be correlated with mutations found in cancers associated with these exposures. While each interpretation has some support, both ignore several concepts of the multistage, multimechanism process of carcinogenesis, namely, (1) mutations can be generated by both "errors of DNA repair" and "errors of DNA replication" during the "initiation" process of carcinogenesis; (2) "initiated" stem cells must be clonally amplified by nonmutagenic, intrinsic or extrinsic epigenetic mechanisms; (3) organ-specific stem cell numbers can be modified during in utero development, thereby altering the risk of cancer later in life; and (4) epigenetic tumor promoters are characterized by species-, individual genetic-, gender-, and developmental state-specificities and by threshold levels required for activity, sustained long-term exposures, and exposures in the absence of antioxidant "antipromoters." Because some stem cells will inevitably acquire "initiating" mutations through either "errors of DNA repair" or "errors of DNA replication," whether a tumor forms depends on the promotion phase of carcinogenesis. While it is possible to reduce the frequency of mutagenic "initiated" cells, one can never reduce it to zero. Because of the extended period of the promotion phase of carcinogenesis, strategies to reduce the appearance of cancers must involve the interruption of the promotion of these initiated cells.

  5. Analysis of workplace compliance measurements of asbestos by the U.S. Occupational Safety and Health Administration (1984-2011).

    PubMed

    Cowan, Dallas M; Cheng, Thales J; Ground, Matthew; Sahmel, Jennifer; Varughese, Allysha; Madl, Amy K

    2015-08-01

    The United States Occupational Safety and Health Administration (OSHA) maintains the Chemical Exposure Health Data (CEHD) and the Integrated Management Information System (IMIS) databases, which contain quantitative and qualitative data resulting from compliance inspections conducted from 1984 to 2011. This analysis aimed to evaluate trends in workplace asbestos concentrations over time and across industries by combining the samples from these two databases. From 1984 to 2011, personal air samples ranged from 0.001 to 175 f/cc. Asbestos compliance sampling data associated with the construction, automotive repair, manufacturing, and chemical/petroleum/rubber industries included measurements in excess of 10 f/cc, and were above the permissible exposure limit from 2001 to 2011. The utility of combining the databases was limited by the completeness and accuracy of the data recorded. In this analysis, 40% of the data overlapped between the two databases. Other limitations included sampling bias associated with compliance sampling and errors occurring from user-entered data. A clear decreasing trend in both airborne fiber concentrations and the numbers of asbestos samples collected parallels historically decreasing trends in the consumption of asbestos, and declining mesothelioma incidence rates. Although air sampling data indicated that airborne fiber exposure potential was high (>10 f/cc for short and long-term samples) in some industries (e.g., construction, manufacturing), airborne concentrations have significantly declined over the past 30 years. Recommendations for improving the existing exposure OSHA databases are provided. Copyright © 2015. Published by Elsevier Inc.

  6. Mismeasurement and the resonance of strong confounders: correlated errors.

    PubMed

    Marshall, J R; Hastrup, J L; Ross, J S

    1999-07-01

    Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
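The abstract's central point, that correlated measurement errors can make truly unrelated risk factors appear correlated, can be demonstrated with a small simulation. The distributions and error magnitudes below are arbitrary choices for illustration, not values from the paper.

```python
# Minimal simulation: two risk factors that are truly independent are each
# measured with errors sharing a common component, and the observed
# correlation between the mismeasured versions becomes clearly positive.
import math
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

random.seed(1)
n = 20000
true_x = [random.gauss(0, 1) for _ in range(n)]
true_z = [random.gauss(0, 1) for _ in range(n)]  # independent of true_x
shared = [random.gauss(0, 1) for _ in range(n)]  # common error component
obs_x = [x + s + random.gauss(0, 0.5) for x, s in zip(true_x, shared)]
obs_z = [z + s + random.gauss(0, 0.5) for z, s in zip(true_z, shared)]

print(abs(pearson(true_x, true_z)) < 0.05)  # essentially uncorrelated
print(pearson(obs_x, obs_z) > 0.3)          # spurious correlation from shared error
```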

  7. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    NASA Astrophysics Data System (ADS)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement techniques are in strong demand in equipment manufacturing. Noted for its high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in large-component joining, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks focuses on static measurement capacity, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system that can, in theory, realize dynamic measurement. In this paper we investigate the sources of dynamic error in depth and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, a simulation of the dynamic error is carried out; the dynamic error is quantified, and its volatility and periodicity are characterized in detail. These results lay the foundation for further accuracy improvement.

  8. Information technology-based approaches to reducing repeat drug exposure in patients with known drug allergies.

    PubMed

    Cresswell, Kathrin M; Sheikh, Aziz

    2008-05-01

    There is increasing interest internationally in ways of reducing the high disease burden resulting from errors in medicine management. Repeat exposure to drugs to which patients have a known allergy has been a repeatedly identified error, often with disastrous consequences. Drug allergies are immunologically mediated reactions that are characterized by specificity and recurrence on reexposure. These repeat reactions should therefore be preventable. We argue that there is insufficient attention being paid to studying and implementing system-based approaches to reducing the risk of such accidental reexposure. Drawing on recent and ongoing research, we discuss a number of information technology-based interventions that can be used to reduce the risk of recurrent exposure. Proven to be effective in this respect are interventions that provide real-time clinical decision support; also promising are interventions aiming to enhance patient recognition, such as bar coding, radiofrequency identification, and biometric technologies.

  9. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    ERIC Educational Resources Information Center

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  10. The correspondence of surface climate parameters with satellite and terrain data

    NASA Technical Reports Server (NTRS)

    Dozier, Jeff; Davis, Frank

    1987-01-01

    One of the goals of the research was to develop a ground sampling strategy for calibrating remotely sensed measurements of surface climate parameters. The initial sampling strategy involved the stratification of the terrain based on important ancillary surface variables such as slope, exposure, insolation, geology, drainage, fire history, etc. For a spatially heterogeneous population, sampling error is reduced and efficiency increased by stratification of the landscape into more homogeneous sub-areas and by employing periodic random spacing of samples. These concepts were applied in the initial stratification of the study site for the purpose of locating and allocating instrumentation.

  11. Sustained attention, selective attention and cognitive control in deaf and hearing children

    PubMed Central

    Dye, Matthew W. G.; Hauser, Peter C.

    2014-01-01

    Deaf children have been characterized as being impulsive, distractible, and unable to sustain attention. However, past research has tested deaf children born to hearing parents who are likely to have experienced language delays. The purpose of this study was to determine whether an absence of auditory input modulates attentional problems in deaf children with no delayed exposure to language. Two versions of a continuous performance test were administered to 37 deaf children born to Deaf parents and 60 hearing children, all aged 6–13 years. A vigilance task was used to measure sustained attention over the course of several minutes, and a distractibility test provided a measure of the ability to ignore task irrelevant information – selective attention. Both tasks provided assessments of cognitive control through analysis of commission errors. The deaf and hearing children did not differ on measures of sustained attention. However, younger deaf children were more distracted by task-irrelevant information in their peripheral visual field, and deaf children produced a higher number of commission errors in the selective attention task. It is argued that this is not likely to be an effect of audition on cognitive processing, but may rather reflect difficulty in endogenous control of reallocated visual attention resources stemming from early profound deafness. PMID:24355653

  12. Sustained attention, selective attention and cognitive control in deaf and hearing children.

    PubMed

    Dye, Matthew W G; Hauser, Peter C

    2014-03-01

    Deaf children have been characterized as being impulsive, distractible, and unable to sustain attention. However, past research has tested deaf children born to hearing parents who are likely to have experienced language delays. The purpose of this study was to determine whether an absence of auditory input modulates attentional problems in deaf children with no delayed exposure to language. Two versions of a continuous performance test were administered to 37 deaf children born to Deaf parents and 60 hearing children, all aged 6-13 years. A vigilance task was used to measure sustained attention over the course of several minutes, and a distractibility test provided a measure of the ability to ignore task irrelevant information - selective attention. Both tasks provided assessments of cognitive control through analysis of commission errors. The deaf and hearing children did not differ on measures of sustained attention. However, younger deaf children were more distracted by task-irrelevant information in their peripheral visual field, and deaf children produced a higher number of commission errors in the selective attention task. It is argued that this is not likely to be an effect of audition on cognitive processing, but may rather reflect difficulty in endogenous control of reallocated visual attention resources stemming from early profound deafness. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    NASA Astrophysics Data System (ADS)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

    Due to the health impacts caused by exposures to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performances of the models. The errors decrease after dimension reduction, and the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
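The comparison metrics named in the abstract have standard definitions that can be sketched directly. The observed and predicted values below are made up for illustration, not the Tehran CO data.

```python
# Standard error metrics used to compare the SVM and hybrid PLS-SVM models.
import math

def rmse(obs, pred):
    """Root mean squared error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mean_absolute_relative_error(obs, pred):
    """Mean of |obs - pred| / obs; assumes all observed values are nonzero."""
    return sum(abs(o - p) / o for o, p in zip(obs, pred)) / len(obs)

obs = [4.0, 5.0, 8.0, 6.0]   # e.g. measured CO, ppm (hypothetical)
pred = [4.4, 4.5, 7.6, 6.6]  # e.g. model output (hypothetical)
print(round(rmse(obs, pred), 3))
print(round(mean_absolute_relative_error(obs, pred), 3))
```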

  14. Characterizing exposure to household air pollution within the Prospective Urban Rural Epidemiology (PURE) study.

    PubMed

    Arku, Raphael E; Birch, Aaron; Shupler, Matthew; Yusuf, Salim; Hystad, Perry; Brauer, Michael

    2018-05-01

    Household air pollution (HAP) from combustion of solid fuels is an important contributor to disease burden in low- and middle-income countries (LIC, and MIC). However, current HAP disease burden estimates are based on integrated exposure response curves that are not currently informed by quantitative HAP studies in LIC and MIC. While there is adequate evidence supporting causal relationships between HAP and respiratory disease, large cohort studies specifically examining relationships between quantitative measures of HAP exposure with cardiovascular disease are lacking. We aim to improve upon exposure proxies based on fuel type, and to reduce exposure misclassification by quantitatively measuring exposure across varying cooking fuel types and conditions in diverse geographies and socioeconomic settings. We leverage technology advancements to estimate household and personal PM 2.5 (particles below 2.5 μm in aerodynamic diameter) exposure within the large (N~250,000) multi-country (N~26) Prospective Urban and Rural Epidemiological (PURE) cohort study. Here, we detail the study protocol and the innovative methodologies being used to characterize HAP exposures, and their application in epidemiologic analyses. This study characterizes HAP PM 2.5 exposures for participants in rural communities in ten PURE countries with >10% solid fuel use at baseline (Bangladesh, Brazil, Chile, China, Colombia, India, Pakistan, South Africa, Tanzania, and Zimbabwe). PM 2.5 monitoring includes 48-h cooking area measurements in 4500 households and simultaneous personal monitoring of male and female pairs from 20% of the selected households. Repeat measurements occur in 20% of households to assess impacts of seasonality. Monitoring began in 2017, and will continue through 2019. The Ultrasonic Personal Aerosol Sampler (UPAS), a novel, robust, and inexpensive filter based monitor that is programmable through a dedicated mobile phone application is used for sampling. 
Pilot study field evaluation of cooking area measurements indicated high correlation between the UPAS and reference Harvard Impactors (r = 0.91; 95% CI: 0.84, 0.95; slope = 0.95). To facilitate tracking and to minimize contamination and analytical error, the samplers utilize barcoded filters and filter cartridges that are weighed pre- and post-sampling using a fully automated weighing system. Pump flow and pressure measurements, temperature and RH, GPS coordinates and semi-quantitative continuous particle mass concentrations based on filter differential pressure are uploaded to a central server automatically whenever the mobile phone is connected to the internet, with sampled data automatically screened for quality control parameters. A short survey is administered during the 48-h monitoring period. Post-weighed filters are further analyzed to estimate black carbon concentrations through a semi-automated, rapid, cost-effective image analysis approach. The measured PM 2.5 data will then be combined with PURE survey information on household characteristics and behaviours collected at baseline and during follow-up to develop quantitative HAP models for PM 2.5 exposures for all rural PURE participants (~50,000) and across different cooking fuel types within the 10 index countries. Both the measured (in the subset) and the modelled exposures will be used in separate longitudinal epidemiologic analyses to assess associations with cardiopulmonary mortality, and disease incidence. The collected data and resulting characterization of cooking area and personal PM 2.5 exposures in multiple rural communities from 10 countries will better inform exposure assessment as well as future epidemiologic analyses assessing the relationships between quantitative estimates of chronic HAP exposure with adult mortality and incident cardiovascular and respiratory disease. This will provide refined and more accurate exposure estimates in global CVD related exposure-response analyses. 
Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Co-optimization of lithographic and patterning processes for improved EPE performance

    NASA Astrophysics Data System (ADS)

    Maslow, Mark J.; Timoshkov, Vadim; Kiers, Ton; Jee, Tae Kwon; de Loijer, Peter; Morikita, Shinya; Demand, Marc; Metz, Andrew W.; Okada, Soichiro; Kumar, Kaushik A.; Biesemans, Serge; Yaegashi, Hidetami; Di Lorenzo, Paolo; Bekaert, Joost P.; Mao, Ming; Beral, Christophe; Larivière, Stephane

    2017-03-01

    Complementary lithography is already being used for advanced logic patterns. The tight pitches for 1D Metal layers are expected to be created using spacer-based multiple-patterning ArF-i exposures and the more complex cut/block patterns are made using EUV exposures. At the same time, control requirements of CDU, pattern shift and pitch-walk are approaching sub-nanometer levels to meet edge placement error (EPE) requirements. Local variability, such as Line Edge Roughness (LER), Local CDU, and Local Placement Error (LPE), are dominant factors in the total Edge Placement error budget. In the lithography process, improving the imaging contrast when printing the core pattern has been shown to improve the local variability. In the etch process, it has been shown that the fusion of atomic level etching and deposition can also improve these local variations. Co-optimization of lithography and etch processing is expected to further improve the performance over individual optimizations alone. To meet the scaling requirements and keep process complexity to a minimum, EUV is increasingly seen as the platform for delivering the exposures for both the grating and the cut/block patterns beyond N7. In this work, we evaluated the overlay and pattern fidelity of an EUV block printed in a negative tone resist on an ArF-i SAQP grating. High-order overlay modeling and corrections during the exposure can reduce overlay error after development, a significant component of the total EPE. During etch, additional degrees of freedom are available to improve the pattern placement error in single layer processes. Process control of advanced pitch nanoscale-multi-patterning techniques as described above is exceedingly complicated in a high volume manufacturing environment. Incorporating potential patterning optimizations into both design and HVM controls for the lithography process is expected to bring a combined benefit over individual optimizations. 
In this work we will show the EPE performance improvement for a 32nm pitch SAQP + block patterned Metal 2 layer by co-optimizing the lithography and etch processes. Recommendations for further improvements and alternative processes will be given.

  16. Noise exposure during ambulance flights and repatriation operations.

    PubMed

    Küpper, Thomas E; Zimmer, Bernd; Conrad, Gerson; Jansing, Paul; Hardt, Aline

    2010-01-01

    Although ambulance flights are routine work and thousands of employees work in repatriation organizations, there is no data on noise exposure which may be used for preventive advice. We investigated the noise exposure of crews working in ambulance flight organizations for international patient repatriation to get the data for specific guidelines concerning noise protection. Noise levels inside Learjet 35A, the aircraft type which is most often used for repatriation operations, were collected from locations where flight crews typically spend their time. A sound level meter class 1 meeting the DIN IEC 651 requirements was used for noise measurements, but several factors during the real flight situations caused a measurement error of ~3%. Therefore, the results fulfill the specifications for class 2. The data was collected during several real repatriation operations and was combined with the flight data (hours per day) regarding the personnel to evaluate the occupationally encountered equivalent noise level according to DIN 45645-2. The measured noise levels were safely just below the 85 dB(A) threshold and should not induce permanent threshold shifts, provided that additional high noise exposure by non-occupational or private activities was avoided. As the levels of the noise produced by the engines outside the cabin are significantly above the 85 dB(A) threshold, the doors of the aircraft must be kept closed while the engines are running, and any activity performed outside the aircraft - or with the doors opened while the engines are running - must be done with adequate noise protection. The new EU noise directive (2003/10/EG) states that protective equipment must be made available to the aircrew to protect their hearing, though its use is not mandatory.

  17. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
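The core NAPSO idea, particle swarm optimization with a simulated-annealing-style acceptance step, can be gestured at with a toy minimizer. This is a generic sketch of the technique, not the authors' algorithm: the inertia, acceleration and cooling coefficients are illustrative, and natural selection is omitted for brevity.

```python
# Toy PSO with simulated-annealing acceptance (a sketch of the NAPSO idea),
# used here to minimize a simple 1-D test function.
import math
import random

def napso_min(f, lo, hi, n_particles=20, iters=200, seed=0):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = list(pos)                 # each particle's personal best
    gbest = min(pos, key=f)          # swarm's global best
    temp = 1.0                       # annealing temperature
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i] + 1.4 * r1 * (best[i] - pos[i])
                      + 1.4 * r2 * (gbest - pos[i]))
            cand = min(max(pos[i] + vel[i], lo), hi)
            delta = f(cand) - f(pos[i])
            # accept improvements always; accept worse moves with a
            # Metropolis probability that shrinks as the temperature cools
            if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
                pos[i] = cand
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i]
            if f(pos[i]) < f(gbest):
                gbest = pos[i]
        temp *= 0.97
    return gbest

x_star = napso_min(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
print(abs(x_star - 2.0) < 0.1)
```

In the paper's setting, the objective would be the SVM's cross-validated prediction error as a function of its hyperparameters rather than a test function.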

  18. Warmth of familiarity and chill of error: affective consequences of recognition decisions.

    PubMed

    Chetverikov, Andrey

    2014-04-01

    The present research aimed to assess the effect of recognition decision on subsequent affective evaluations of recognised and non-recognised objects. Consistent with the proposed account of post-decisional preferences, results showed that the effect of recognition on preferences depends upon objective familiarity. If stimuli are recognised, liking ratings are positively associated with exposure frequency; if stimuli are not recognised, this link is either absent (Experiment 1) or negative (Experiments 2 and 3). This interaction between familiarity and recognition exists even when recognition accuracy is at chance level and the "mere exposure" effect is absent. Finally, data obtained from repeated measurements of preferences and using manipulations of task order confirm that recognition decisions have a causal influence on preferences. The findings suggest that affective evaluation can provide fine-grained access to the efficacy of cognitive processing even in simple cognitive tasks.

  19. Mechanical evaluation of a ruptured Swedish adjustable gastric band.

    PubMed

    Reijnen, Michael M P J; Naus, J H; Janssen, Ignace M C

    2004-02-01

    Leakage of a laparoscopically placed Swedish adjustable gastric band (SAGB) was observed 2 1/2 years after placement. The band was evaluated for mechanical inaccuracies by a laboratory. The ruptured SAGB was investigated microscopically and wall thicknesses were measured. An unused SAGB was tested, both empty and filled, for mechanical deformity after exposure to saline solution. A permanent transformation of the silicone rubber was found, caused by bowing of the device. 2 tears were present at the end of a kink. The mean wall thickness was within acceptable limits. Exposure of the gastric band to saline solution did not cause any sign of permanent deformity of the silicone rubber. The rupture of the gastric band did not seem to be caused by a production error. Long-term deformity, in combination with a continuous dynamic load, may increase the risk of tearing. Long-term follow up is recommended for patients treated with this device.

  20. Mediation Analysis: A Practitioner's Guide.

    PubMed

    VanderWeele, Tyler J

    2016-01-01

    This article provides an overview of recent developments in mediation analysis, that is, analyses used to assess the relative magnitude of different pathways and mechanisms by which an exposure may affect an outcome. Traditional approaches to mediation in the biomedical and social sciences are described. Attention is given to the confounding assumptions required for a causal interpretation of direct and indirect effect estimates. Methods from the causal inference literature to conduct mediation in the presence of exposure-mediator interactions, binary outcomes, binary mediators, and case-control study designs are presented. Sensitivity analysis techniques for unmeasured confounding and measurement error are introduced. Discussion is given to extensions to time-to-event outcomes and multiple mediators. Further flexible modeling strategies arising from the precise counterfactual definitions of direct and indirect effects are also described. The focus throughout is on methodology that is easily implementable in practice across a broad range of potential applications.

  1. Regression Discontinuity for Causal Effect Estimation in Epidemiology.

    PubMed

    Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till

    Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of exposure itself utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules - such as treatments for low birth weight, hypertension or diabetes.
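The local comparison at the heart of the RD-ITT estimate can be sketched as a naive difference in mean outcomes just above versus just below the threshold, within a bandwidth (a stand-in for the local linear regression typically used in practice; the data, threshold and bandwidth below are hypothetical):

```python
def rd_itt(assignment, outcome, threshold, bandwidth):
    """Naive local RD-ITT estimate: difference in mean outcome for
    units just above vs just below the threshold (within bandwidth).
    A real analysis would fit local linear regressions on each side."""
    above = [y for x, y in zip(assignment, outcome)
             if threshold <= x < threshold + bandwidth]
    below = [y for x, y in zip(assignment, outcome)
             if threshold - bandwidth <= x < threshold]
    return sum(above) / len(above) - sum(below) / len(below)

# hypothetical data: treatment assigned when x >= 140 (e.g. a blood
# pressure threshold rule); y is the outcome risk
x = [135, 137, 139, 141, 143, 145]
y = [0.30, 0.28, 0.31, 0.22, 0.20, 0.21]
print(round(rd_itt(x, y, threshold=140, bandwidth=6), 3))
```

The instrumental-variable step mentioned above would then rescale this ITT contrast by the jump in actual exposure at the threshold.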

  2. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    ERIC Educational Resources Information Center

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within-subject standard deviation.…
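The repeatability coefficient described here (2.77 times the within-subject standard deviation, i.e. approximately 1.96 × √2 × Sw) can be sketched from paired repeat measurements; the measurement pairs below are hypothetical:

```python
import math

def within_subject_sd(pairs):
    """Within-subject SD from duplicate measurements on each subject:
    sqrt(mean squared difference / 2)."""
    sq_diffs = [(a - b) ** 2 for a, b in pairs]
    return math.sqrt(sum(sq_diffs) / (2 * len(pairs)))

def repeatability(pairs):
    """Repeatability coefficient = 2.77 * within-subject SD; the
    difference between two measurements on the same person is expected
    to be below this value for ~95% of observations."""
    return 2.77 * within_subject_sd(pairs)

# hypothetical duplicate measurements on four subjects
pairs = [(10.1, 10.4), (12.0, 11.6), (9.8, 10.0), (11.2, 11.5)]
print(round(repeatability(pairs), 3))
```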

  3. Error measuring system of rotary Inductosyn

    NASA Astrophysics Data System (ADS)

    Liu, Chengjun; Zou, Jibin; Fu, Xinghe

    2008-10-01

    The inductosyn is a high-precision angle-position sensor with important applications in servo tables, precision machine tools and other products. The precision of an inductosyn is characterised by its error, so error measurement is an important problem in the production and application of the inductosyn. At present, the error is mainly obtained by manual measurement, whose disadvantages cannot be ignored: high labour intensity for the operator, errors that are easily introduced, and poor repeatability. To solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. The error signal is obtained by precisely processing the output signals of the inductosyn and the optical dividing head. While the inductosyn rotates continuously, its zero-position error can be measured dynamically and zero-error curves can be output automatically. This method overcomes the measuring and calculating errors caused by human factors and makes the measuring process quicker, more exact and more reliable. Experiment proves that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak value).

  4. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2016-11-01

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
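A minimal sketch of the two error forms the study considers, assuming the observed response follows Y = μ(1 + e_mult) + e_add with Gaussian errors (the error variances and true response below are hypothetical), and showing how averaging repeat measurements recovers the true response:

```python
import random

random.seed(0)

def simulate_response(true_response, sd_add=0.0, sd_mult=0.0):
    """Observed response under additive and/or multiplicative
    measurement error: Y = mu * (1 + e_mult) + e_add."""
    e_mult = random.gauss(0.0, sd_mult)
    e_add = random.gauss(0.0, sd_add)
    return true_response * (1.0 + e_mult) + e_add

# averaging many repeat measurements shrinks the error variance,
# so the mean estimate lands close to the true response
mu = 50.0
repeats = [simulate_response(mu, sd_add=2.0, sd_mult=0.05) for _ in range(1000)]
mean_est = sum(repeats) / len(repeats)
print(abs(mean_est - mu) < 1.0)
```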

  6. Error-compensation model for simultaneous measurement of five degrees of freedom motion errors of a rotary axis

    NASA Astrophysics Data System (ADS)

    Bao, Chuanchen; Li, Jiakun; Feng, Qibo; Zhang, Bin

    2018-07-01

    This paper introduces an error-compensation model for our measurement method to measure five motion errors of a rotary axis based on fibre laser collimation. The error-compensation model is established in a matrix form using the homogeneous coordinate transformation theory. The influences of the installation errors, error crosstalk, and manufacturing errors are analysed. The model is verified by both ZEMAX simulation and measurement experiments. The repeatability values of the radial and axial motion errors are significantly suppressed by more than 50% after compensation. The repeatability experiments of five degrees of freedom motion errors and the comparison experiments of two degrees of freedom motion errors of an indexing table were performed by our measuring device and a standard instrument. The results show that the repeatability values of the angular positioning error εz and tilt motion error around the Y axis εy are 1.2″ and 4.4″, and the comparison deviations of the two motion errors are 4.0″ and 4.4″, respectively. The repeatability values of the radial and axial motion errors, δy and δz, are 1.3 and 0.6 µm, respectively. The repeatability value of the tilt motion error around the X axis εx is 3.8″.
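The homogeneous-coordinate error model can be illustrated by composing 4×4 transforms for two of the motion errors; the numerical values below reuse the reported repeatability magnitudes purely as hypothetical inputs, and the point is hypothetical:

```python
import math

def homogeneous(rot_z=0.0, dx=0.0, dy=0.0, dz=0.0):
    """4x4 homogeneous transform: small rotation about Z plus translation."""
    c, s = math.cos(rot_z), math.sin(rot_z)
    return [[c,  -s,  0.0, dx],
            [s,   c,  0.0, dy],
            [0.0, 0.0, 1.0, dz],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(a, b):
    """Compose two 4x4 transforms (errors chain by multiplication)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# hypothetical motion errors: 1.2 arc-second angular positioning error
# about Z composed with 1.3 um radial and 0.6 um axial run-out (in mm)
eps_z = 1.2 / 3600.0 * math.pi / 180.0
T = matmul(homogeneous(rot_z=eps_z), homogeneous(dy=1.3e-3, dz=0.6e-3))
point = [100.0, 0.0, 0.0, 1.0]  # nominal point on the table (mm)
moved = [sum(T[i][j] * point[j] for j in range(4)) for i in range(4)]
print(round(moved[1] - point[1], 4))  # Y deviation in mm
```

Both error sources contribute to the Y deviation at this radius, which is why the model must account for error crosstalk.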

  7. Exposure to emotionally arousing, contamination-relevant pictorial stimuli interferes with response inhibition: Implication for obsessive–compulsive disorder

    PubMed Central

    Adams, Thomas G.

    2016-01-01

    Multiple emotional processes are implicated in the pathogenesis of obsessions and compulsions, and individuals diagnosed with obsessive-compulsive disorder (OCD) have reliably shown deficits in response inhibition. Little research has tested how emotional processes might interact with cognitive control in the context of OCD. High contamination-OC and low contamination-OC (obsessive-compulsive) participants completed an emotional go/no-go task to measure the interfering effects of contamination-threat images, relative to neutral images, on action restraint (errors of commission). Results revealed that high contamination-OC participants committed marginally more commission errors (11.04%) than low contamination-OC participants (10.30%) on neutral no-go trials, but this effect was not significant (p > .05). All participants committed significantly more errors of commission on contamination-threat trials relative to neutral no-go trials, p < .01, but the interfering effect of contamination-threat images was significantly larger (p = .05) for high contamination-OC participants. Errors of commission almost doubled for high contamination-OC participants on contamination-threat no-go trials (20.78%), compared to a more modest increase for low contamination-OC participants (14.80%). These findings suggest that individuals with elevated symptoms of OCD may have significantly more difficulty inhibiting their actions when processing disorder-relevant or emotionally arousing information. This observation has implications for the pathogenesis of obsessions and compulsions. PMID:28090434

  8. Assessing effects of military aircraft noise on residential property values near airbases

    NASA Astrophysics Data System (ADS)

    Fidell, Sanford; Tabachnick, Barbara; Silvati, Laura; Cook, Brenda

    The question, 'Does military aircraft noise exposure affect residential property values in the vicinity of Air Force bases?', can be asked and answered with varying degrees of generality and tolerable errors of inference. Definitive answers are difficult to develop because the question itself may not be meaningful in some circumstances: property values are affected by many factors other than aircraft noise which can fluctuate greatly in different areas and during different time periods; credible attribution of causality for changes in property values uniquely to aircraft noise requires many costly study design measures; and prior findings suggest that if a relationship exists, it is not a large or especially strong one. Thus, evidence of a simple geographic association between aircraft noise exposure and residential property values does not provide a conclusive answer to the question. In an effort to develop more compelling evidence, the US Air Force is planning to compare historical records of sale prices of properties in areas of differential aircraft noise exposure during specific time periods with predictions of sale prices derived from a validated statistical model of residential property values.

  9. Impact of Measurement Error on Synchrophasor Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  10. Neurophysiological assessment of auditory, peripheral nerve, somatosensory, and visual system function after developmental exposure to gasoline, E15, and E85 vapors.

    PubMed

    Herr, David W; Freeborn, Danielle L; Degn, Laura; Martin, Sheppard A; Ortenzio, Jayna; Pantlin, Lara; Hamm, Charles W; Boyes, William K

    2016-01-01

    The use of gasolines blended with a range of ethanol concentrations may result in inhalation of vapors containing a variable combination of ethanol with other volatile gasoline constituents. The possibility of exposure and potential interactions between vapor constituents suggests the need to evaluate the possible risks of this complex mixture. Previously we evaluated the effects of developmental exposure to ethanol vapors on neurophysiological measures of sensory function as a component of a larger project evaluating developmental ethanol toxicity. Here we report an evaluation using the same battery of sensory function testing in offspring of pregnant dams exposed during gestation to condensed vapors of gasoline (E0), gasoline blended with 15% ethanol (E15) or gasoline blended with 85% ethanol (E85). Pregnant Long-Evans rats were exposed to target concentrations of 0, 3000, 6000, or 9000 ppm total hydrocarbon vapors for 6.5 h/day over GD9-GD20. Sensory evaluations of male offspring began as adults. The electrophysiological testing battery included tests of: peripheral nerve (compound action potentials, nerve conduction velocity [NCV]), somatosensory (cortical and cerebellar evoked potentials), auditory (brainstem auditory evoked responses), and visual functions. Visual function assessment included pattern elicited visual evoked potentials (VEP), VEP contrast sensitivity, dark-adapted (scotopic) electroretinograms (ERGs), light-adapted (photopic) ERGs, and green flicker ERGs. The results included sporadic statistically significant effects, but the observations were not consistently concentration-related and appeared to be statistical Type 1 errors related to the multiple dependent measures evaluated. The exposure concentrations were much higher than can be reasonably expected from typical exposures to the general population during refueling or other common exposure situations. Overall, the results indicate that gestational exposure of male rats to ethanol/gasoline vapor combinations did not cause detectable changes in peripheral nerve, somatosensory, auditory, or visual function when the offspring were assessed as adults. Published by Elsevier Inc.

  11. Multi-exposure speckle imaging of cerebral blood flow: a pilot clinical study (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Richards, Lisa M.; Kazmi, S. M. S.; Olin, Katherine E.; Waldron, James S.; Fox, Douglas J.; Dunn, Andrew K.

    2017-03-01

    Monitoring cerebral blood flow (CBF) during neurosurgery is essential for detecting ischemia in a timely manner for a wide range of procedures. Multiple clinical studies have demonstrated that laser speckle contrast imaging (LSCI) has high potential to be a valuable, label-free CBF monitoring technique during neurosurgery. LSCI is an optical imaging method that provides blood flow maps with high spatiotemporal resolution requiring only a coherent light source, a lens system, and a camera. However, the quantitative accuracy and sensitivity of LSCI is limited and highly dependent on the exposure time. An extension to LSCI called multi-exposure speckle imaging (MESI) overcomes these limitations, and was evaluated intraoperatively in patients undergoing brain tumor resection. This clinical study (n = 7) recorded multiple exposure times from the same cortical tissue area, and demonstrates that shorter exposure times (≤1 ms) provide the highest dynamic range and sensitivity for sampling flow rates in human neurovasculature. This study also combined exposure times using the MESI model, demonstrating high correlation with proper image calibration and acquisition. The physiological accuracy of speckle-estimated flow was validated using conservation of flow analysis on vascular bifurcations. Flow estimates were highly conserved in MESI and 1 ms exposure LSCI, with percent errors at 6.4% ± 5.3% and 7.2% ± 7.2%, respectively, while 5 ms exposure LSCI had higher errors at 21% ± 10% (n = 14 bifurcations). Results from this study demonstrate the importance of exposure time selection for LSCI, and that intraoperative MESI can be performed with high quantitative accuracy.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCabe, Bradley P.; Speidel, Michael A.; Pike, Tina L.

    Purpose: In this study, newly formulated XR-RV3 GafChromic film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. Methods: The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. Results: The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. Conclusions: XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter, with an error less than the uncertainty of the calibration in most cases.

  13. C(m)-History Method, a Novel Approach to Simultaneously Measure Source and Sink Parameters Important for Estimating Indoor Exposures to Phthalates.

    PubMed

    Cao, Jianping; Weschler, Charles J; Luo, Jiajun; Zhang, Yinping

    2016-01-19

    The concentration of a gas-phase semivolatile organic compound (SVOC) in equilibrium with its mass-fraction in the source material, y0, and the coefficient for partitioning of an SVOC between clothing and air, K, are key parameters for estimating emission and subsequent dermal exposure to SVOCs. Most of the available methods for their determination depend on achieving steady-state in ventilated chambers. This can be time-consuming and of variable accuracy. Additionally, no existing method simultaneously determines y0 and K in a single experiment. In this paper, we present a sealed-chamber method, using early-stage concentration measurements, to simultaneously determine y0 and K. The measurement error for the method is analyzed, and the optimization of experimental parameters is explored. Using this method, y0 for phthalates (DiBP, DnBP, and DEHP) emitted by two types of PVC flooring, coupled with K values for these phthalates partitioning between a cotton T-shirt and air, were measured at 25 and 32 °C (room and skin temperatures, respectively). The measured y0 values agree well with results obtained by alternate methods. The changes of y0 and K with temperature were used to approximate the changes in enthalpy, ΔH, associated with the relevant phase changes. We conclude with suggestions for further related research.

  14. TH-AB-202-04: Auto-Adaptive Margin Generation for MLC-Tracked Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glitzner, M; Lagendijk, J; Raaymakers, B

    Purpose: To develop an auto-adaptive margin generator for MLC tracking. The generator is able to estimate errors arising in image guided radiotherapy, particularly on an MR-Linac, which depend on the latencies of machine and image processing, as well as on patient motion characteristics. From the estimated error distribution, a segment margin is generated, able to compensate errors up to a user-defined confidence. Method: In every tracking control cycle (TCC, 40ms), the desired aperture D(t) is compared to the actual aperture A(t), a delayed and imperfect representation of D(t). Thus an error e(t)=A(t)-D(t) is measured every TCC. Applying kernel-density-estimation (KDE), the cumulative distribution (CDF) of e(t) is estimated. From CDF-confidence limits, upper and lower error limits are extracted for motion axes along and perpendicular to leaf-travel direction and applied as margins. To test the dosimetric impact, two representative motion traces were extracted from fast liver-MRI (10Hz). The traces were applied onto a 4D-motion platform and continuously tracked by an Elekta Agility 160 MLC using an artificially imposed tracking delay. Gafchromic film was used to detect dose exposure for static, tracked, and error-compensated tracking cases. The margin generator was parameterized to cover 90% of all tracking errors. Dosimetric impact was rated by calculating the ratio of underexposed points (>5% underdosage) to the total number of points inside the FWHM of the static exposure. Results: Without imposing adaptive margins, tracking experiments showed a ratio of underexposed points of 17.5% and 14.3% for two motion cases with imaging delays of 200ms and 300ms, respectively. Activating the margin generator yielded total suppression (<1%) of underdosed points. Conclusion: We showed that auto-adaptive error compensation using machine error statistics is possible for MLC tracking. The error compensation margins are calculated on-line, without the need of assuming motion or machine models. Further strategies to reduce consequential overdosages are currently under investigation. This work was funded by the SoRTS consortium, which includes the industry partners Elekta, Philips and Technolution.
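The margin-generation step can be sketched with an empirical-CDF simplification of the kernel-density-estimated CDF described above (the per-cycle error samples below are hypothetical):

```python
def margin_limits(errors, confidence=0.90):
    """Lower/upper margin limits covering `confidence` of observed
    tracking errors, via the empirical CDF (a simplification of the
    KDE-based CDF used by the margin generator)."""
    s = sorted(errors)
    tail = (1.0 - confidence) / 2.0
    lo = s[int(tail * (len(s) - 1))]
    hi = s[int((1.0 - tail) * (len(s) - 1))]
    return lo, hi

# hypothetical per-cycle aperture errors e(t) = A(t) - D(t), in mm
errors = [-2.1, -1.4, -0.9, -0.3, 0.0, 0.2, 0.5, 0.8, 1.1, 1.9, 2.4]
print(margin_limits(errors, confidence=0.90))
```

Re-estimating these limits every control cycle is what makes the margins auto-adaptive: they widen when latency or motion degrades tracking and shrink when tracking is accurate.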

  15. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
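In the spirit of the averaged weighted-residual-variance formulation described above, a minimal one-parameter sketch (the residuals, weights and theoretical variance below are hypothetical illustrations, not the paper's actual algorithm):

```python
def empirical_covariance(residuals, weights, theoretical_cov):
    """Scale the assumed (theoretical) covariance by the average
    weighted residual variance, so that the actual observed residuals,
    which carry all error sources, inflate or deflate the reported
    state uncertainty accordingly."""
    n = len(residuals)
    avg_wrv = sum(w * r * r for w, r in zip(weights, residuals)) / n
    return avg_wrv * theoretical_cov

residuals = [0.8, -1.2, 0.5, 1.0, -0.6]  # observed-minus-computed ranges
weights = [1.0] * 5                      # inverse assumed variances
print(round(empirical_covariance(residuals, weights, 0.04), 4))
```

When the residuals are larger than the assumed observation errors predict, the scale factor exceeds one and the empirical covariance grows, capturing unmodeled error sources.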

  16. Childhood myopia and parental smoking.

    PubMed

    Saw, S-M; Chia, K-S; Lindstrom, J M; Tan, D T H; Stone, R A

    2004-07-01

    To examine the relation between exposure to passive parental smoke and myopia in Chinese children in Singapore. 1334 Chinese children from three schools in Singapore were recruited, all of whom were participants in the Singapore Cohort study Of the Risk factors for Myopia (SCORM). Information on whether the father or mother smoked, number of years smoked, and the number of cigarettes smoked per day during the child's lifetime were derived. These data were correlated with contemporaneously obtained data available in SCORM. The children's cycloplegic autorefraction, corneal curvature radius, and biometry measures were compared with reported parental smoking history. There were 434 fathers (33.3%) and 23 mothers (1.7%) who smoked during their child's lifetime. There were no significant trends observed between paternal smoking and refractive error or axial length. After controlling for age, sex, school, mother's education, and mother's myopia, children with mothers who had ever smoked during their lifetime had more "positive" refractions (adjusted mean -0.28 D v -1.38 D) compared with children whose mother did not smoke (p = 0.012). The study found no consistent evidence of association between parental smoking and refractive error. There was a suggestion that children whose mothers smoked cigarettes had more hyperopic refractions, but the absence of a relation with paternal smoking and the small number of mothers who smoked in this sample preclude definite conclusions about a link between passive smoking exposure and myopia.

  17. Assessment of outdoor radiofrequency electromagnetic field exposure through hotspot localization using kriging-based sequential sampling.

    PubMed

    Aerts, Sam; Deschrijver, Dirk; Verloock, Leen; Dhaene, Tom; Martens, Luc; Joseph, Wout

    2013-10-01

    In this study, a novel methodology is proposed to create heat maps that accurately pinpoint the outdoor locations with elevated exposure to radiofrequency electromagnetic fields (RF-EMF) in an extensive urban region (or, hotspots), and that would allow local authorities and epidemiologists to efficiently assess the locations and spectral composition of these hotspots, while at the same time developing a global picture of the exposure in the area. Moreover, no prior knowledge about the presence of radiofrequency radiation sources (e.g., base station parameters) is required. After building a surrogate model from the available data using kriging, the proposed method makes use of an iterative sampling strategy that selects new measurement locations at spots which are deemed to contain the most valuable information (inside hotspots or in search of them) based on the prediction uncertainty of the model. The method was tested and validated in an urban subarea of Ghent, Belgium with a size of approximately 1 km². In total, 600 input and 50 validation measurements were performed using a broadband probe. Five hotspots were discovered and assessed, with maximum total electric-field strengths ranging from 1.3 to 3.1 V/m, satisfying the reference levels issued by the International Commission on Non-Ionizing Radiation Protection for exposure of the general public to RF-EMF. Spectrum analyzer measurements in these hotspots revealed five radiofrequency signals with a relevant contribution to the exposure. The radiofrequency radiation emitted by 900 MHz Global System for Mobile Communications (GSM) base stations was always dominant, with contributions ranging from 45% to 100%. Finally, validation of the subsequent surrogate models shows high prediction accuracy, with the final model featuring an average relative error of less than 2 dB (factor 1.26 in electric-field strength), a correlation coefficient of 0.7, and a specificity of 0.96. Copyright © 2013 Elsevier Inc. All rights reserved.
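The uncertainty-driven sampling step can be sketched with a crude distance-to-nearest-sample proxy in place of the kriging prediction variance (all locations below are hypothetical coordinates):

```python
def next_location(measured, candidates):
    """Pick the candidate whose distance to the nearest measured point
    is largest -- a crude stand-in for the kriging prediction
    uncertainty that drives the sequential sampling strategy."""
    def nearest_dist(p):
        return min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                   for q in measured) ** 0.5
    return max(candidates, key=nearest_dist)

# hypothetical measured locations and candidate grid points (km)
measured = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
candidates = [(0.5, 0.5), (1.0, 1.0), (0.2, 0.1)]
print(next_location(measured, candidates))
```

A real kriging model would also weight candidates near suspected hotspots; the purely geometric proxy above captures only the exploratory half of that trade-off.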

  18. Error Analysis and Validation for Insar Height Measurement Induced by Slant Range

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Li, T.; Fan, W.; Geng, X.

    2018-04-01

    The InSAR technique is an important method for large-area DEM extraction. Several factors have a significant influence on the accuracy of height measurement. In this research, the effect of slant range measurement error on InSAR height measurement was analysed and discussed. Based on the theory of InSAR height measurement, an error propagation model was derived assuming no coupling among different factors, which directly characterises the relationship between slant range error and height measurement error. A theory-based analysis in combination with TanDEM-X parameters was then implemented to quantitatively evaluate the influence of slant range error on height measurement. In addition, a simulation validation of the InSAR error model induced by slant range was performed on the basis of SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.

  19. Error disclosure: a new domain for safety culture assessment.

    PubMed

    Etchegaray, Jason M; Gallagher, Thomas H; Bell, Sigall K; Dunlap, Ben; Thomas, Eric J

    2012-07-01

    To (1) develop and test survey items that measure error disclosure culture, (2) examine relationships among error disclosure culture, teamwork culture and safety culture and (3) establish predictive validity for survey items measuring error disclosure culture. All clinical faculty from six health institutions (four medical schools, one cancer centre and one health science centre) in The University of Texas System were invited to anonymously complete an electronic survey containing questions about safety culture and error disclosure. The authors identified two factors measuring error disclosure culture: one focused on the general culture of error disclosure and a second focused on trust. Both error disclosure culture factors were distinct from safety culture and teamwork culture (correlations were less than r=0.85). Also, error disclosure general culture and error disclosure trust culture predicted intent to disclose a hypothetical error to a patient (r=0.25, p<0.001 and r=0.16, p<0.001, respectively), while teamwork and safety culture did not predict such an intent (r=0.09, p=NS and r=0.12, p=NS). Those who received prior error disclosure training reported significantly higher levels of error disclosure general culture (t=3.7, p<0.05) and error disclosure trust culture (t=2.9, p<0.05). The authors created and validated a new measure of error disclosure culture that predicts intent to disclose an error better than other measures of healthcare culture. This measure fills an existing gap in organisational assessments by assessing transparent communication after medical error, an important aspect of culture.

  20. Increased Depression and Anxiety Symptoms are Associated with More Breakdowns in Cognitive Control to Cocaine Cues in Veterans with Cocaine Use Disorder.

    PubMed

    DiGirolamo, Gregory J; Gonzalez, Gerardo; Smelson, David; Guevremont, Nathan; Andre, Michael I; Patnaik, Pooja O; Zaniewski, Zachary R

    2017-01-01

    Cue-elicited craving is a clinically important aspect of cocaine addiction directly linked to cognitive control breakdowns and relapse to cocaine-taking behavior. However, whether craving drives breakdowns in cognitive control toward cocaine cues in veterans, who experience significantly more co-occurring mood disorders, is unknown. The present study tests whether veterans have breakdowns in cognitive control because of cue-elicited craving or current anxiety or depression symptoms. Twenty-four veterans with cocaine use disorder were cue-exposed, then tested on an antisaccade task in which participants were asked to control their eye movements toward cocaine or neutral cues by looking away from the cue. The relationship among cognitive control breakdowns (as measured by eye errors), cue-induced craving (changes in self-reported craving following cocaine cue exposure), and mood measures (depression and anxiety) was investigated. Veterans made significantly more errors toward cocaine cues than neutral cues. Depression and anxiety scores, but not cue-elicited craving, were significantly associated with increased subsequent errors toward cocaine cues for veterans. Increased depression and anxiety are specifically related to more cognitive control breakdowns toward cocaine cues in veterans. Depression and anxiety must be considered further in the etiology and treatment of cocaine use disorder in veterans. Furthermore, treating depression and anxiety as well, rather than solely alleviating craving levels, may prove a more effective combined treatment option in veterans with cocaine use disorder.

  1. Impact of geocoding methods on associations between long-term exposure to urban air pollution and lung function.

    PubMed

    Jacquemin, Bénédicte; Lepeule, Johanna; Boudier, Anne; Arnould, Caroline; Benmerad, Meriem; Chappaz, Claire; Ferran, Joane; Kauffmann, Francine; Morelli, Xavier; Pin, Isabelle; Pison, Christophe; Rios, Isabelle; Temam, Sofia; Künzli, Nino; Slama, Rémy; Siroux, Valérie

    2013-09-01

    Errors in address geocodes may affect estimates of the effects of air pollution on health. We investigated the impact of four geocoding techniques on the association between urban air pollution estimated with a fine-scale (10 m × 10 m) dispersion model and lung function in adults. We measured forced expiratory volume in 1 sec (FEV1) and forced vital capacity (FVC) in 354 adult residents of Grenoble, France, who were participants in two well-characterized studies, the Epidemiological Study on the Genetics and Environment on Asthma (EGEA) and the European Community Respiratory Health Survey (ECRHS). Home addresses were geocoded using individual building matching as the reference approach and three spatial interpolation approaches. We used a dispersion model to estimate mean PM10 and nitrogen dioxide concentrations at each participant's address during the 12 months preceding their lung function measurements. Associations between exposures and lung function parameters were adjusted for individual confounders and same-day exposure to air pollutants. The geocoding techniques were compared with regard to geographical distances between coordinates, exposure estimates, and associations between the estimated exposures and health effects. Median distances between coordinates estimated using the building matching and the three interpolation techniques were 26.4, 27.9, and 35.6 m. Compared with exposure estimates based on building matching, PM10 concentrations based on the three interpolation techniques tended to be overestimated. When building matching was used to estimate exposures, a one-interquartile range increase in PM10 (3.0 μg/m3) was associated with a 3.72-point decrease in FVC% predicted (95% CI: -0.56, -6.88) and a 3.86-point decrease in FEV1% predicted (95% CI: -0.14, -3.24). 
The magnitude of associations decreased when other geocoding approaches were used [e.g., for FVC% predicted -2.81 (95% CI: -0.26, -5.35) using NavTEQ, or 2.08 (95% CI -4.63, 0.47, p = 0.11) using Google Maps]. Our findings suggest that the choice of geocoding technique may influence estimated health effects when air pollution exposures are estimated using a fine-scale exposure model.
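    The geocode-comparison step reported in this record (median distances between building-matched and interpolated coordinates) reduces to computing great-circle distances between coordinate pairs. A minimal sketch, with invented coordinates near Grenoble purely for illustration:

    ```python
    import math

    # Hedged sketch: distance between a reference (building-matched) geocode
    # and an interpolated geocode for each address, then the median offset.
    # All coordinate pairs below are invented for illustration.

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two WGS84 points."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = p2 - p1
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # (reference, interpolated) pairs for three invented addresses
    pairs = [
        ((45.1885, 5.7245), (45.1888, 5.7248)),
        ((45.1710, 5.7220), (45.1706, 5.7215)),
        ((45.1930, 5.7360), (45.1933, 5.7366)),
    ]
    dists = sorted(haversine_m(*a, *b) for a, b in pairs)
    median = dists[len(dists) // 2]
    print(f"median offset: {median:.1f} m")
    ```

    Offsets of a few tens of meters, as in the study, matter here because the dispersion model has a 10 m resolution, so geocoding error translates directly into exposure error.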

  2. Correction to Dittmar, Halliwell, and Ive (2006)

    ERIC Educational Resources Information Center

    Dittmar, Helga; Halliwell, Emma; Ive, Suzanne

    2006-01-01

    Reports an error in "Does Barbie make girls want to be thin? The effect of experimental exposure to images of dolls on the body image of 5- to 8-year-old girls" by Helga Dittmar, Emma Halliwell and Suzanne Ive ("Developmental Psychology," 2006 Mar, Vol 42[2], 283-292). A substantive error occurs in the Body shape dissatisfaction section on page…

  3. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM

    PubMed Central

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei

    2018-01-01

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot guarantee the model’s performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are used to optimize the SVM’s parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models’ performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method achieves better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
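    A minimal sketch of the NAPSO idea described above may help: standard particle swarm optimization augmented with a simulated-annealing acceptance rule and a natural-selection step that restarts the worst particles near the best one. The quadratic objective stands in for cross-validated SVM error over hypothetical (log C, log gamma) hyperparameters; all constants are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    # Illustrative sketch: PSO plus (a) a simulated-annealing acceptance rule
    # for personal bests and (b) natural selection that restarts the worst
    # particles near the current global best. The objective is a stand-in for
    # SVM validation error over assumed (log C, log gamma) hyperparameters.

    def objective(p):
        return (p[0] - 1.0) ** 2 + 0.5 * (p[1] + 2.0) ** 2  # assumed error surface

    rng = np.random.default_rng(1)
    n, dim, iters = 20, 2, 60
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest = pos.copy()
    pbest_f = np.array([objective(p) for p in pos])
    T = 1.0                                   # annealing temperature

    for _ in range(iters):
        g = pbest[np.argmin(pbest_f)]         # global best position
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = pos + vel
        f = np.array([objective(p) for p in pos])
        # Accept improvements always; accept worse personal bests with a
        # temperature-dependent probability (simulated annealing).
        prob = np.exp(np.clip(-(f - pbest_f) / T, -50.0, 0.0))
        accept = (f < pbest_f) | (rng.random(n) < prob)
        pbest[accept] = pos[accept]
        pbest_f[accept] = f[accept]
        T *= 0.95
        # Natural selection: restart the worst quarter near the best.
        worst = np.argsort(pbest_f)[-n // 4:]
        pos[worst] = g + 0.1 * rng.standard_normal((len(worst), dim))

    best = pbest[np.argmin(pbest_f)]
    print("best hyperparameters ~", np.round(best, 2))
    ```

    The annealing acceptance occasionally keeps a worse personal best early on, which is what gives the hybrid its extra ability to escape local optima relative to plain PSO.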

  4. Errors in laboratory medicine: practical lessons to improve patient safety.

    PubMed

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. 
Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Each of the 8 performance measures has proven practical, useful, and important for patient care; taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.

  5. Research on Measurement Accuracy of Laser Tracking System Based on Spherical Mirror with Rotation Errors of Gimbal Mount Axes

    NASA Astrophysics Data System (ADS)

    Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang

    2018-02-01

    This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influences of rotation errors of gimbal mount axes on the measurement accuracy. By simplifying the optical system model of a laser tracking system based on a spherical mirror, we can easily extract the laser ranging measurement error caused by rotation errors of gimbal mount axes from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and biconvex lens along the optical axis and the direction perpendicular to the optical axis are driven by error motions of the gimbal mount axes. In order to simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of gimbal mount axes could be recorded in the readings of the laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm if radial error motion and axial error motion were within ±10 μm. The experimental method simplified the experimental procedure, and the spherical mirror could reduce the influences of rotation errors of gimbal mount axes on the measurement accuracy of the laser tracking system.

  6. Prenatal Particulate Air Pollution and Neurodevelopment in Urban Children: Examining Sensitive Windows and Sex-specific Associations

    PubMed Central

    Chiu, Yueh-Hsiu Mathilda; Hsu, Hsiao-Hsien Leon; Coull, Brent A.; Bellinger, David C.; Kloog, Itai; Schwartz, Joel; Wright, Robert O.; Wright, Rosalind J.

    2015-01-01

    Background Brain growth and structural organization occurs in stages beginning prenatally. Toxicants may impact neurodevelopment differently dependent upon exposure timing and fetal sex. Objectives We implemented innovative methodology to identify sensitive windows for the associations between prenatal particulate matter with diameter ≤2.5 μm (PM2.5) and children’s neurodevelopment. Methods We assessed 267 full-term urban children’s prenatal daily PM2.5 exposure using a validated satellite-based spatio-temporally resolved prediction model. Outcomes included IQ (WISC-IV), attention (omission errors [OEs], commission errors [CEs], hit reaction time [HRT], and HRT standard error [HRT-SE] on the Conners’ CPT-II), and memory (general memory [GM] index and its components - verbal [VEM] and visual [VIM] memory, and attention-concentration [AC] indices on the WRAML-2) assessed at age 6.5±0.98 years. To identify the role of exposure timing, we used distributed lag models to examine associations between weekly prenatal PM2.5 exposure and neurodevelopment. Sex-specific associations were also examined. Results Mothers were primarily minorities (60% Hispanic, 25% black); 69% had ≤12 years of education. Adjusting for maternal age, education, race, and smoking, we found associations between higher PM2.5 levels at 31–38 weeks with lower IQ, at 20–26 weeks gestation with increased OEs, at 32–36 weeks with slower HRT, and at 22–40 weeks with increased HRT-SE among boys, while significant associations were found in memory domains in girls (higher PM2.5 exposure at 18–26 weeks with reduced VIM, at 12–20 weeks with reduced GM). Conclusions Increased PM2.5 exposure in specific prenatal windows was associated with poorer function across memory and attention domains with variable associations based on sex. Refined determination of time window- and sex-specific associations may enhance insight into underlying mechanisms and identification of vulnerable subgroups. PMID:26641520

  7. Prenatal particulate air pollution and neurodevelopment in urban children: Examining sensitive windows and sex-specific associations.

    PubMed

    Chiu, Yueh-Hsiu Mathilda; Hsu, Hsiao-Hsien Leon; Coull, Brent A; Bellinger, David C; Kloog, Itai; Schwartz, Joel; Wright, Robert O; Wright, Rosalind J

    2016-02-01

    Brain growth and structural organization occurs in stages beginning prenatally. Toxicants may impact neurodevelopment differently dependent upon exposure timing and fetal sex. We implemented innovative methodology to identify sensitive windows for the associations between prenatal particulate matter with diameter ≤ 2.5 μm (PM2.5) and children's neurodevelopment. We assessed 267 full-term urban children's prenatal daily PM2.5 exposure using a validated satellite-based spatio-temporally resolved prediction model. Outcomes included IQ (WISC-IV), attention (omission errors [OEs], commission errors [CEs], hit reaction time [HRT], and HRT standard error [HRT-SE] on the Conners' CPT-II), and memory (general memory [GM] index and its components - verbal [VEM] and visual [VIM] memory, and attention-concentration [AC] indices on the WRAML-2) assessed at age 6.5±0.98 years. To identify the role of exposure timing, we used distributed lag models to examine associations between weekly prenatal PM2.5 exposure and neurodevelopment. Sex-specific associations were also examined. Mothers were primarily minorities (60% Hispanic, 25% black); 69% had ≤12 years of education. Adjusting for maternal age, education, race, and smoking, we found associations between higher PM2.5 levels at 31-38 weeks with lower IQ, at 20-26 weeks gestation with increased OEs, at 32-36 weeks with slower HRT, and at 22-40 weeks with increased HRT-SE among boys, while significant associations were found in memory domains in girls (higher PM2.5 exposure at 18-26 weeks with reduced VIM, at 12-20 weeks with reduced GM). Increased PM2.5 exposure in specific prenatal windows may be associated with poorer function across memory and attention domains with variable associations based on sex. Refined determination of time window- and sex-specific associations may enhance insight into underlying mechanisms and identification of vulnerable subgroups. Copyright © 2015 Elsevier Ltd. All rights reserved.
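    The distributed lag modelling used in the two records above can be sketched with simulated data: regress the outcome on weekly exposures while constraining the lag coefficients to vary smoothly, here via a low-order polynomial (Almon-type) basis. This is a hedged illustration of the general technique, not the authors' model, and every number in it is invented.

    ```python
    import numpy as np

    # Hedged sketch of a distributed lag model: lag coefficients are expressed
    # in a low-order polynomial basis so they vary smoothly across gestational
    # weeks. Exposures, outcome, and the "sensitive window" are all simulated.

    rng = np.random.default_rng(2)
    n, lags = 300, 40                          # children, gestational weeks
    X = rng.normal(10, 3, (n, lags))           # weekly PM2.5 exposures (simulated)
    true_beta = -np.exp(-0.5 * ((np.arange(lags) - 33) / 3.0) ** 2)  # late window
    y = X @ true_beta + rng.normal(0, 1, n)    # simulated outcome

    deg = 4                                    # polynomial (Almon-type) constraint
    B = np.vander(np.arange(lags) / lags, deg + 1, increasing=True)
    Z = X @ B                                  # transformed design matrix
    theta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    beta_hat = B @ theta                       # smooth estimated lag curve

    peak = int(np.argmax(-beta_hat))
    print("estimated most sensitive gestational week:", peak)
    ```

    The smoothness constraint is what makes 40 weekly lag coefficients estimable from modest samples; the fitted curve's extremum then points at the candidate sensitive window.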

  8. Predicting Directly Measured Trunk and Upper Arm Postures in Paper Mill Work From Administrative Data, Workers' Ratings and Posture Observations.

    PubMed

    Heiden, Marina; Garza, Jennifer; Trask, Catherine; Mathiassen, Svend Erik

    2017-03-01

    A cost-efficient approach for assessing working postures could be to build statistical models for predicting results of direct measurements from cheaper data, and apply these models to samples in which only the latter data are available. The present study aimed to build and assess the performance of statistical models predicting inclinometer-assessed trunk and arm posture among paper mill workers. Separate models were built using administrative data, workers' ratings of their exposure, and observations of the work from video recordings as predictors. Trunk and upper arm postures were measured using inclinometry on 28 paper mill workers during three work shifts each. Simultaneously, the workers were video filmed, and their postures were assessed by observation of the videos afterwards. Workers' ratings of exposure, and administrative data on staff and production during the shifts were also collected. Linear mixed models were fitted for predicting inclinometer-assessed exposure variables (median trunk and upper arm angle, proportion of time with neutral trunk and upper arm posture, and frequency of periods in neutral trunk and upper arm inclination) from administrative data, workers' ratings, and observations, respectively. Performance was evaluated in terms of Akaike information criterion, proportion of variance explained (R2), and standard error (SE) of the model estimate. For models performing well, validity was assessed by bootstrap resampling. Models based on administrative data performed poorly (R2 ≤ 15%) and would not be useful for assessing posture in this population. Models using workers' ratings of exposure performed slightly better (8% ≤ R2 ≤ 27% for trunk posture; 14% ≤ R2 ≤ 36% for arm posture). The best model was obtained when using observational data for predicting frequency of periods with neutral arm inclination. It explained 56% of the variance in the postural exposure, and its SE was 5.6. 
Bootstrap validation of this model showed similar expected performance in other samples (5th-95th percentile: R2 = 45-63%; SE = 5.1-6.2). Observational data had a better ability to predict inclinometer-assessed upper arm exposures than workers' ratings or administrative data. However, observational measurements are typically more expensive to obtain. The results encourage analyses of the cost-efficiency of modeling based on administrative data, workers' ratings, and observation, compared to the performance and cost of measuring exposure directly. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
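    The bootstrap validation step mentioned in this record can be sketched as follows: resample workers with replacement, refit the predictor, and summarize the 5th-95th percentile range of R2. The simple linear predictor and simulated data below are stand-in assumptions for illustration, not the study's mixed model.

    ```python
    import numpy as np

    # Hedged sketch: bootstrap validation of a simple linear predictor,
    # reporting the 5th-95th percentile range of R^2. Data are simulated
    # stand-ins for observation-based predictors of inclinometer-measured
    # arm posture.

    rng = np.random.default_rng(3)
    n = 28                                    # workers, as in the record above
    x = rng.normal(0, 1, n)                   # observational predictor (stand-in)
    y = 2.0 * x + rng.normal(0, 1.5, n)       # directly measured exposure (stand-in)

    def r_squared(x, y):
        b, a = np.polyfit(x, y, 1)            # slope, intercept
        resid = y - (b * x + a)
        return 1.0 - resid.var() / y.var()

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)           # resample workers with replacement
        boot.append(r_squared(x[idx], y[idx]))

    lo, hi = np.percentile(boot, [5, 95])
    print(f"R^2 = {r_squared(x, y):.2f}, bootstrap 5th-95th: {lo:.2f}-{hi:.2f}")
    ```

    The percentile spread gives a sense of how the model's explanatory power might vary in other samples of the same size, which is how the study reports its validation.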

  9. Evaluation of single-point sampling strategies for the estimation of moclobemide exposure in depressive patients.

    PubMed

    Ignjatovic, Anita Rakic; Miljkovic, Branislava; Todorovic, Dejan; Timotijevic, Ivana; Pokrajac, Milena

    2011-05-01

    Because moclobemide pharmacokinetics vary considerably among individuals, monitoring of plasma concentrations lends insight into its pharmacokinetic behavior and enhances its rational use in clinical practice. The aim of this study was to evaluate whether single concentration-time points could adequately predict moclobemide systemic exposure. Pharmacokinetic data (full 7-point pharmacokinetic profiles), obtained from 21 depressive inpatients receiving moclobemide (150 mg 3 times daily), were randomly split into development (n = 18) and validation (n = 16) sets. Correlations between the single concentration-time points and the area under the concentration-time curve within a 6-hour dosing interval at steady-state (AUC(0-6)) were assessed by linear regression analyses. The predictive performance of single-point sampling strategies was evaluated in the validation set by mean prediction error, mean absolute error, and root mean square error. Plasma concentrations in the absorption phase yielded unsatisfactory predictions of moclobemide AUC(0-6). The best estimation of AUC(0-6) was achieved from concentrations at 4 and 6 hours following dosing. As the most reliable surrogate for moclobemide systemic exposure, concentrations at 4 and 6 hours should be used instead of predose trough concentrations as an indicator of between-patient variability and a guide for dose adjustments in specific clinical situations.
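    The single-point sampling evaluation described in this record amounts to fitting a regression of AUC(0-6) on one concentration-time point in a development set and scoring predictions in a validation set with mean prediction error, mean absolute error, and root mean square error. A hedged sketch with simulated concentrations (not patient data; the pharmacokinetic constants are invented):

    ```python
    import numpy as np

    # Hedged sketch: regress AUC(0-6) on the 4-h concentration in a
    # development set, then score validation-set predictions with MPE, MAE,
    # and RMSE. All concentrations and coefficients are simulated.

    rng = np.random.default_rng(4)
    n_dev, n_val = 18, 16
    c4_dev = rng.lognormal(0.5, 0.3, n_dev)        # 4-h concentration (stand-in)
    auc_dev = 5.0 * c4_dev + rng.normal(0, 0.8, n_dev)
    c4_val = rng.lognormal(0.5, 0.3, n_val)
    auc_val = 5.0 * c4_val + rng.normal(0, 0.8, n_val)

    slope, intercept = np.polyfit(c4_dev, auc_dev, 1)  # development-set fit
    pred = slope * c4_val + intercept                  # validation predictions

    err = pred - auc_val
    mpe = err.mean()                     # mean prediction error (bias)
    mae = np.abs(err).mean()             # mean absolute error
    rmse = np.sqrt((err ** 2).mean())    # root mean square error
    print(f"MPE={mpe:+.2f}  MAE={mae:.2f}  RMSE={rmse:.2f}")
    ```

    Repeating this for each candidate time point is how one would rank sampling times; the study found the 4- and 6-h points most predictive.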

  10. Identifying EGRET Sources

    NASA Technical Reports Server (NTRS)

    Schlegel, E.; Norris, Jay P. (Technical Monitor)

    2002-01-01

    This project was awarded funding from the CGRO program to support ROSAT and ground-based observations of unidentified sources from data obtained by the EGRET instrument on the Compton Gamma-Ray Observatory. The critical items in the project are the individual ROSAT observations that are used to cover the 99% error circle of the unidentified EGRET source. Each error circle is a degree or larger in diameter. Each ROSAT field is about 30 arcmin in diameter. Hence, a number (>4) of ROSAT pointings must be obtained for each EGRET source to cover the field. The scheduling of ROSAT observations is carried out to maximize the efficiency of the total schedule. As a result, each pointing is broken into one or more sub-pointings of various exposure times. This project was awarded ROSAT observing time for four unidentified EGRET sources, summarized in the table. The column headings are defined as follows: 'Coverings' = number of observations to cover the error circle; 'SubPtg' = total number of sub-pointings to observe all of the coverings; 'Rec'd' = number of individual sub-pointings received to date; 'CompFlds' = number of individual coverings for which the requested complete exposure has been received. Processing of the data cannot occur until a complete exposure has been accumulated for each covering.


  11. Beyond alpha: an empirical examination of the effects of different sources of measurement error on reliability estimates for measures of individual differences constructs.

    PubMed

    Schmidt, Frank L; Le, Huy; Ilies, Remus

    2003-06-01

    On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.
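    The distinction drawn in this record can be made concrete with simulated scores: the coefficient of equivalence and stability (CES) is the correlation between parallel forms given on different occasions, so it is attenuated by transient error, whereas a same-occasion parallel-form correlation is not. The variance components below are invented purely to show the direction of the effect.

    ```python
    import numpy as np

    # Hedged simulation: CES (parallel forms across occasions) reflects
    # random-response, transient, and specific-factor error, while a
    # same-occasion parallel-form correlation misses transient error and so
    # comes out higher. All variance components are invented.

    rng = np.random.default_rng(5)
    n = 1000
    trait = rng.normal(0, 1, n)                # stable true score
    state1 = rng.normal(0, 0.4, n)             # transient state, occasion 1
    state2 = rng.normal(0, 0.4, n)             # transient state, occasion 2

    def form(state, specific_sd=0.3, noise_sd=0.5):
        """One parallel form: trait + occasion state + form-specific factor + noise."""
        specific = rng.normal(0, specific_sd, n)
        noise = rng.normal(0, noise_sd, n)
        return trait + state + specific + noise

    a1 = form(state1)                          # form A, occasion 1
    b1 = form(state1)                          # form B, occasion 1 (same occasion)
    b2 = form(state2)                          # form B, occasion 2

    ce = np.corrcoef(a1, b1)[0, 1]             # equivalence only
    ces = np.corrcoef(a1, b2)[0, 1]            # equivalence and stability
    print(f"same-occasion r = {ce:.2f}, CES = {ces:.2f}")
    ```

    The gap between the two correlations is the transient-error component that coefficient alpha, computed within a single occasion, cannot see.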

  12. Evaluation of Acoustic Doppler Current Profiler measurements of river discharge

    USGS Publications Warehouse

    Morlock, S.E.

    1996-01-01

    The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.

  13. Performance of Vascular Exposure and Fasciotomy Among Surgical Residents Before and After Training Compared With Experts.

    PubMed

    Mackenzie, Colin F; Garofalo, Evan; Puche, Adam; Chen, Hegang; Pugh, Kristy; Shackelford, Stacy; Tisherman, Samuel; Henry, Sharon; Bowyer, Mark W

    2017-06-01

    Surgical patient outcomes are related to surgeon skills. To measure resident surgeon technical and nontechnical skills for trauma core competencies before and after training and up to 18 months later and to compare resident performance with the performance of expert traumatologists. This longitudinal study performed from May 1, 2013, through February 29, 2016, at Maryland State Anatomy Board cadaver laboratories included 40 surgical residents and 10 expert traumatologists. Performance was measured during extremity vascular exposures and lower extremity fasciotomy in fresh cadavers before and after taking the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. The primary outcome variable was individual procedure score (IPS), with secondary outcomes of IPSs on 5 components of technical and nontechnical skills, Global Rating Scale scores, errors, and time to complete the procedure. Two trained evaluators located in the same laboratory evaluated performance with a standardized script and mobile touch-screen data collection. Thirty-eight (95%) of 40 surgical residents (mean [SD] age, 31 [2.9] years) who were evaluated before and within 4 weeks of ASSET training completed follow-up evaluations 12 to 18 months later (mean [SD], 14 [2.7] months). The experts (mean [SD] age, 52 [10.0] years) were significantly older and had a longer (mean [SD], 46 [16.3] months) interval since taking the ASSET course (both P < .001). Overall resident cohort performance improved with increased anatomy knowledge, correct procedural steps, and decreased errors from 60% to 19% after the ASSET course regardless of clinical year of training (P < .001). For 21 of 40 residents (52%), correct vascular procedural steps plotted against anatomy knowledge (the 2 IPS components most improved with training) indicates the resident's performance was within 1 nearest-neighbor classifier of experts after ASSET training. Five residents had no improvement with training. 
The Trauma Readiness Index for experts (mean [SD], 74 [4]) was significantly different compared with the trained residents (mean [SD], 48 [7] before training vs 63 [7] after training [P = .004] and vs 64 [6] 14 months later [P = .002]). Critical errors that might lead to patient death were identified by pretraining IPS decile of less than 0.5. At follow-up, frequency of resident critical errors was no different from experts. The IPSs ranged from 31.6% to 76.9% among residents for core trauma competency procedures. Modeling revealed that interval experience, rather than time since training, affected skill retention up to 18 months later. Only 4 experts and 16 residents (40%) adequately decompressed and confirmed entry into all 4 lower extremity compartments. This study found that ASSET training improved resident procedural skills for up to 18 months. Performance was highly variable. Interval experience after training affected performance. Pretraining skill identified competency of residents vs experts. Extremity vascular and fasciotomy performance evaluations suggest the need for specific anatomical training interventions in residents with IPS deciles less than 0.5.

  14. Secondhand tobacco smoke exposure and heart rate variability and inflammation among non-smoking construction workers: a repeated measures study.

    PubMed

    Zhang, Jinming; Fang, Shona C; Mittleman, Murray A; Christiani, David C; Cavallari, Jennifer M

    2013-10-02

    Although it has been well recognized that exposure to secondhand tobacco smoke (SHS) is associated with cardiovascular mortality, the mechanisms and time course by which SHS exposure may lead to cardiovascular effects are still being explored. Non-smoking workers were recruited from a local union and monitored inside a union hall while exposed to SHS over approximately 6 hours. Participants were fitted with a continuous electrocardiographic monitor upon enrollment which was removed at the end of a 24-hr monitoring period. A repeated measures study design was used where resting ECGs and blood samples were taken from individuals before SHS exposure (baseline), immediately following SHS exposure (post) and the morning following SHS exposure (next-morning). Inflammatory markers, including high sensitivity C-reactive protein (CRP) and white blood cell count (WBC) were analyzed. Heart rate variability (HRV) was analyzed from the ECG recordings in time (SDNN, rMSSD) and frequency (LF, HF) domain parameters over 5-minute periods. SHS exposure was quantified using a personal fine particulate matter (PM2.5) monitor. Linear mixed effects regression models were used to examine within-person changes in inflammatory and HRV parameters across the 3 time periods. Exposure-response relationships with PM2.5 were examined using mixed effects models. All models were adjusted for age, BMI and circadian variation. A total of 32 male non-smokers were monitored between June 2010 and June 2012. The mean PM2.5 from SHS exposure was 132 μg/m3. Immediately following SHS exposure, a 100 μg/m3 increase in PM2.5 was associated with declines in HRV (7.8% [standard error (SE) =3%] SDNN, 8.0% (SE = 3.9%) rMSSD, 17.2% (SE = 6.3%) LF, 29.0% (SE = 10.1%) HF) and increases in WBC count 0.42 (SE = 0.14) k/μl. Eighteen hours following SHS exposure, a 100 μg/m3 increase in PM2.5 was associated with 24.2% higher CRP levels. 
Our study suggests that short-term SHS exposure is associated with significantly lower HRV and higher levels of inflammatory markers. Exposure-associated declines in HRV were observed immediately following exposure while higher levels of CRP were not observed until 18 hours following exposure. Cardiovascular autonomic and inflammation responses may contribute to the pathophysiologic pathways that link SHS exposure with adverse cardiovascular outcomes.

  15. Probing-error compensation using 5 degree of freedom force/moment sensor for coordinate measuring machine

    NASA Astrophysics Data System (ADS)

    Lee, Minho; Cho, Nahm-Gyoo

    2013-09-01

    A new probing and compensation method is proposed to improve the three-dimensional (3D) measuring accuracy of 3D shapes, including irregular surfaces. A new tactile coordinate measuring machine (CMM) probe with a five-degree-of-freedom (5-DOF) force/moment sensor using carbon fiber plates was developed. The proposed method efficiently removes the anisotropic sensitivity error and decreases the stylus deformation and actual contact point estimation errors that are major error components of shape measurement using touch probes. The relationships between the measuring force and the estimation accuracy of the actual contact point and stylus deformation errors are examined for practical use of the proposed method. The appropriate measuring force condition is presented for precision measurement.

  16. Lifetime use of cannabis from longitudinal assessments, cannabinoid receptor (CNR1) variation, and reduced volume of the right anterior cingulate

    PubMed Central

    Hill, Shirley Y.; Sharma, Vinod; Jones, Bobby L.

    2016-01-01

    Lifetime measures of cannabis use and co-occurring exposures were obtained from a longitudinal cohort followed an average of 13 years at the time they received a structural MRI scan. MRI scans were analyzed for 88 participants (mean age=25.9 years), 34 of whom were regular users of cannabis. Whole brain voxel based morphometry analyses (SPM8) were conducted using 50 voxel clusters at p=0.005. Controlling for age, familial risk, and gender, we found reduced volume in Regular Users compared to Non-Users, in the lingual gyrus, anterior cingulum (right and left), and the rolandic operculum (right). The right anterior cingulum reached family-wise error statistical significance at p=0.001, controlling for personal lifetime use of alcohol and cigarettes and any prenatal exposures. CNR1 haplotypes were formed from four CNR1 SNPs (rs806368, rs1049353, rs2023239, and rs6454674) and tested with level of cannabis exposure to assess their interactive effects on the lingual gyrus, cingulum (right and left) and rolandic operculum, regions showing cannabis exposure effects in the SPM8 analyses. These analyses used mixed model analyses (SPSS) to control for multiple potentially confounding variables. Level of cannabis exposure was associated with decreased volume of the right anterior cingulum and showed interaction effects with haplotype variation. PMID:27500453

  17. Digital radiography can reduce scoliosis x-ray exposure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kling, T.F. Jr.; Cohen, M.J.; Lindseth, R.E.

    1990-09-01

    Digital radiology is a new computerized system of acquiring x-rays in a digital (electronic) format. It possesses a greatly expanded dose response curve that allows a very broad range of x-ray dose to produce a diagnostic image. Potential advantages include significantly reduced radiation exposure without loss of image quality, acquisition of images of constant density irrespective of under- or over-exposure, and reduced repeat rates for unsatisfactory films. The authors prospectively studied 30 adolescents with scoliosis who had both conventional (full dose) and digital (full, one-half, or one-third dose) x-rays. They found that digital produced AP and lateral images with all anatomic areas clearly depicted at full and one-half dose. Digital laterals were better at full dose and equal to conventional at one-half dose. Cobb angles were easily measured on all one-third dose AP films and on 8 of 10 one-third dose digital laterals. Digital clearly depicted the Risser sign at one-half and one-third dose, and the repeat rate was nil in this study, indicating digital compensates well for exposure errors. The study indicates that digital does allow radiation dose to be reduced by at least one-half in scoliosis patients and that it does have improved image quality with good contrast over a wide range of x-ray exposure.

  18. Problems in evaluating radiation dose via terrestrial and aquatic pathways.

    PubMed Central

    Vaughan, B E; Soldat, J K; Schreckhise, R G; Watson, E C; McKenzie, D H

    1981-01-01

    This review is concerned with exposure risk and the environmental pathways models used for predictive assessment of radiation dose. Exposure factors, the adequacy of available data, and the model subcomponents are critically reviewed from the standpoint of absolute error propagation. Although the models are inherently capable of better absolute accuracy, a calculated dose is usually overestimated by two to six orders of magnitude in practice. The principal reason for so large an error lies in using "generic" concentration ratios in situations where site specific data are needed. Major opinion of the model makers suggests a number midway between these extremes, with only a small likelihood of ever underestimating the radiation dose. Detailed evaluations are made of source considerations influencing dose (i.e., physical and chemical status of released material); dispersal mechanisms (atmospheric, hydrologic and biotic vector transport); mobilization and uptake mechanisms (i.e., chemical and other factors affecting the biological availability of radioelements); and critical pathways. Examples are shown of confounding in food-chain pathways due to uncritical application of concentration ratios. Current thoughts of replacing the critical pathways approach to calculating dose with comprehensive model calculations are also shown to be ill-advised, given present limitations in the comprehensive data base. The pathways models may also require improved parametrization, as they are not at present structured adequately to lend themselves to validation. The extremely wide errors associated with predicting exposure stand in striking contrast to the error range associated with the extrapolation of animal effects data to the human being. PMID:7037381

  19. Effects of infrared-A irradiation on skin: discrepancies in published data highlight the need for an exact consideration of physical and photobiological laws and appropriate experimental settings.

    PubMed

    Piazena, Helmut; Kelleher, Debra K

    2010-01-01

    Skin exposure to infrared (IR) radiation should be limited in terms of irradiance, exposure time and frequency in order to avoid acute or chronic damage. Recommendations aimed at protecting humans from the risks of skin exposure to IR (e.g. ICNIRP, ACGIH) are only defined in terms of acute effects (e.g. heat pain and cardiovascular collapse), whereas the actual exposure conditions (e.g. spectral distribution, exposure geometry, frequency and number of exposures, thermal exchange with the environment, metabolic energy production and regulatory responses) are not taken into consideration. Since the IR component of solar radiation reaching the Earth's surface is mainly IR-A, and considering the increased use of devices emitting artificially generated IR-A radiation, this radiation band is of special interest. A number of in vitro and/or in vivo investigations assessing cellular or tissue damage caused by IR-A radiation have been undertaken. While such studies are necessary for the development of safety recommendations, the results of measurements undertaken to examine the interaction between skin and IR radiation emitted from different sources presented in this study, together with a detailed examination of the literature, reveal a wide spectrum of contradictory findings, which in some instances may be related to methodological shortcomings or fundamental errors in the application of physical and photobiological laws, thus highlighting the need for physically and photobiologically appropriate experiments.

  20. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
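The attenuation described in this abstract can be reproduced in a few lines. The sketch below is not the authors' Bayesian/frequentist comparison; it simulates an AR(1) state observed with added white measurement noise (the AR+WN structure) and shows that the naive lag-1 estimate is biased toward zero, while a simple method-of-moments ratio of autocovariances, in which the measurement noise cancels, recovers the true autoregressive parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi, noise_sd = 20000, 0.6, 1.0

# Latent AR(1) state with unit innovation variance
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal()
# Observed series = state + white measurement noise (the AR+WN structure)
y = x + rng.normal(0.0, noise_sd, T)

def autocov(z, lag):
    z = z - z.mean()
    return np.mean(z[lag:] * z[:-lag]) if lag else np.mean(z * z)

# Naive lag-1 estimate: attenuated by var(x) / (var(x) + var(noise)) ~ 0.61 here
phi_naive = autocov(y, 1) / autocov(y, 0)
# Moment-based correction: the noise term cancels in gamma(2) / gamma(1)
phi_corrected = autocov(y, 2) / autocov(y, 1)
print(round(phi_naive, 2), round(phi_corrected, 2))
```

With a noise variance equal to the innovation-driven state variance, roughly 40% of the observed variance is measurement error, comparable to the 30-50% the authors report for mood data, and the naive estimate is attenuated by a similar fraction.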

  1. Accuracy Assessments and Validation of an Expanded UV Irradiance Database from Satellite Total Ozone Mapping Spectrometer (TOMS)

    NASA Technical Reports Server (NTRS)

    Krotkov, N. A.; Herman, J.; Fioletov, V.; Seftor, C.; Larko, D.; Vasilkov, A.

    2004-01-01

    The TOMS UV irradiance database (1978 to 2000) has been expanded to include 5 new products (noon irradiance at 305, 310, 324, and 380 nm, and noon erythemal-weighted irradiance), in addition to the existing erythemal daily exposure, which permit direct comparisons with ground-based measurements from UV spectrometers. Sensitivity studies are conducted to estimate uncertainties of the new TOMS UV irradiance data due to algorithm a priori assumptions. Comparisons with Brewer spectrometers as well as filter radiometers are used to review the sources of known errors. Inability to distinguish between snow and cloud cover using only TOMS data results in large errors in estimating surface UV using snow climatology. A correction is suggested for the case when the regional snow albedo is known from an independent source. The summer-time positive bias between TOMS UV estimations and Brewer measurements can be seen at all wavelengths, which suggests the difference is not related to ozone absorption effects. We emphasize that uncertainty of boundary-layer UV aerosol absorption properties remains a major source of error in modeling UV irradiance in clear sky conditions. Neglecting aerosol absorption in the present TOMS algorithm results in a positive summertime bias in clear-sky UV estimations over many locations. Due to high aerosol variability the bias is strongly site dependent. Data from a UV shadow-band radiometer and a well-calibrated CIMEL sun-sky radiometer are used to quantify the bias at the NASA/GSFC site in Greenbelt, MD. Recommendations are given to enable potential users to better account for local conditions by combining standard TOMS UV data with ancillary ground measurements.

  2. High performance Si immersion gratings patterned with electron beam lithography

    NASA Astrophysics Data System (ADS)

    Gully-Santiago, Michael A.; Jaffe, Daniel T.; Brooks, Cynthia B.; Wilson, Daniel W.; Muller, Richard E.

    2014-07-01

    Infrared spectrographs employing silicon immersion gratings can be significantly more compact than spectrographs using front-surface gratings. The Si gratings can also offer continuous wavelength coverage at high spectral resolution. The grooves in Si gratings are made with semiconductor lithography techniques, to date almost entirely using contact mask photolithography. Planned near-infrared astronomical spectrographs require either finer groove pitches or higher positional accuracy than standard UV contact mask photolithography can reach. A collaboration between the University of Texas at Austin Silicon Diffractive Optics Group and the Jet Propulsion Laboratory Microdevices Laboratory has experimented with direct writing silicon immersion grating grooves with electron beam lithography. The patterning process involves depositing positive e-beam resist on 1 to 30 mm thick, 100 mm diameter monolithic crystalline silicon substrates. We then use the facility JEOL 9300FS e-beam writer at JPL to produce the linear pattern that defines the gratings. There are three key challenges in producing high-performance e-beam written silicon immersion gratings. (1) E-beam field and subfield stitching boundaries cause periodic cross-hatch structures along the grating grooves. The structures manifest themselves as spectral and spatial dimension ghosts in the diffraction limited point spread function (PSF) of the diffraction grating. In this paper, we show that the effects of e-beam field boundaries must be mitigated. We have significantly reduced ghost power with only minor increases in write time by using four or more field sizes of less than 500 μm. (2) The finite e-beam stage drift and run-out error cause large-scale structure in the wavefront error. We deal with this problem by applying a mark detection loop to check for and correct minuscule stage drifts. 
We measure the level and direction of stage drift and show that mark detection reduces peak-to-valley wavefront error by a factor of 5. (3) The serial write process for typical gratings yields write times of about 24 hours, which makes prototyping costly. We discuss work with negative e-beam resist to reduce the fill factor of exposure, and therefore limit the exposure time. We also discuss the tradeoffs of long write-time serial write processes like e-beam versus UV photomask lithography. We show the results of experiments on small pattern size prototypes on silicon wafers. Current prototypes now exceed 30 dB of suppression of spectral and spatial dimension ghosts in monochromatic spectral purity measurements of the backside of Si echelle gratings in reflection at 632 nm. We perform interferometry at 632 nm in reflection with a 25 mm circular beam on a grating with a blaze angle of 71.6°. The measured wavefront error is 0.09 waves peak to valley.

  3. Mitigation measures of electromagnetic field exposure in the vicinity of high frequency welders.

    PubMed

    Zubrzak, Bartłomiej; Bieńkowski, Paweł; Cała, Pawel

    2017-10-17

    This paper presents information about the welding process and equipment, focusing on the emission of electromagnetic fields (EMF) at levels significant in terms of the labor safety regulations in force in Poland - the ordinances of the Minister of Family, Labour and Social Policy that came into force on June 27, 2016 and June 29, 2016, introduced to harmonize with directive 2013/35/EU of the European Parliament and the Council of 26 June 2013. Methods for determining the EMF distribution in the surroundings of welding machines are presented, and the background knowledge from the available literature is analyzed. The analysis covers popular high frequency welders widely used in industry. Electromagnetic field measurements were performed at the welder operator's workplace (in situ) during normal machine operation, using measurement methods compliant with the labor safety regulations in force in Poland, and the EMF distributions and parameters were described according to the same guidelines. Various real examples of excessive exposure to EMF in the surroundings of dielectric welders are presented, together with solutions ranging from simple and costless measures to dedicated electromagnetic shielding systems, which in some cases reduced EMF exposure (protection zone ranges) by more than 80% or eliminated the dangerous zone entirely. It is shown that significant EMF levels in the surroundings of dielectric welders may result from errors or omissions that often occur during the development, installation, operation or modification of welding machines, and measures that may significantly reduce workers' EMF exposure in the welder surroundings are presented. The role of accredited laboratories in assisting in such cases is underlined. Med Pr 2017;68(6):693-703. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  4. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    To address the influence of round grating dividing error, rolling-wheel eccentricity and surface shape errors, the paper provides an amendment method based on the rolling wheel to obtain a composite error model that includes all of the influence factors above, and then corrects the non-circular angle measurement error of the rolling wheel. We performed software simulation and experiments; the results indicate that the composite error amendment method can improve the diameter measurement accuracy achievable with the rolling-wheel method. It has wide application prospects where measurement accuracy better than 5 μm/m is required.

  5. Observation of Burial and Migration of Instrumented Surrogate Munitions Deployed in the Swash Zone

    NASA Astrophysics Data System (ADS)

    Cristaudo, D.; Puleo, J. A.; Bruder, B. L.

    2017-12-01

    Munitions (also known as unexploded ordnance; UXO) left in the nearshore environment by past military activities may be found on the beach, constituting a risk for beach users. Munitions may be transported from offshore to shallower water and/or migrate along the coast. In addition, munitions may bury in place or be exhumed due to hydrodynamic forcing. Observations of munitions mobility have generally been collected offshore, while observations in the swash zone are scarce. The swash zone is the region of the beach alternately covered by wave runup, where hydrodynamic processes may be intense. Studies of munitions mobility require the use of realistic surrogates to quantify mobility/burial and hydrodynamic forcing conditions. Four surrogates (BLU-61 Cluster Bomb, 81 mm Mortar, M151-70 Hydra Rocket and M107 155 mm High Explosive Howitzer) were developed and tested during large-scale laboratory and field studies. Surrogates house sensors that measure different components of motion. Differences between real-munition and surrogate parameters (mass, center of gravity and axial moment of inertia) are all within 20%. Internal munitions sensors consist of inertial motion units (for acceleration and angular velocity in and around the three directions and orientation), pressure transducers (for water depth above the surrogate), shock recorders (for high frequency acceleration to detect wave impact on the surrogate), and an in-house designed array of optical sensors (for burial/exposure and rolling). An in situ array of sensors to measure hydrodynamics, bed morphology and sediment concentrations was deployed in the swash zone, aligned with the surrogate deployment. Data collected during the studies will be shown highlighting surrogate sensor capabilities. Sensor responses will be compared with GPS measurements and with imagery of surrogate position as a function of time from cameras overlooking the study sites. 
Examples of burial/exposure and migration of surrogates will be discussed. Relationships between burial/migration and incoming forcing conditions, bed slope and munitions characteristics (such as specific density, length/diameter) will all be shown.

  6. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
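The heteroscedasticity argument in this abstract can be illustrated with synthetic data. The sketch below is an assumption-laden toy example, not the letter's satellite analysis: errors are generated multiplicatively, and the residuals of an additive error model then show intensity-dependent spread, while log-scale (multiplicative) residuals are near homoscedastic.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.gamma(shape=0.5, scale=10.0, size=2000) + 0.1   # daily precip, mm
# A sensor whose error scales with intensity (multiplicative by construction)
obs = truth * np.exp(rng.normal(0.1, 0.3, truth.size))

# Additive model Y = X + e: residual spread grows with intensity
add_resid = obs - truth
# Multiplicative model ln Y = ln X + e: residuals roughly homoscedastic
mult_resid = np.log(obs) - np.log(truth)

light, heavy = truth < np.median(truth), truth >= np.median(truth)
add_ratio = add_resid[heavy].std() / add_resid[light].std()
mult_ratio = mult_resid[heavy].std() / mult_resid[light].std()
print(add_ratio, mult_ratio)   # spread blows up vs. stays near constant
```

Under the additive model the systematic part of the error leaks into the apparent random error for heavy-rain days, which is exactly the "non-constant variance" weakness the letter reports.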

  7. Single Event Effect Testing of the Analog Devices ADV212

    NASA Technical Reports Server (NTRS)

    Wilcox, Ted; Campola, Michael; Kadari, Madhu; Nadendla, Seshagiri R.

    2017-01-01

    The Analog Devices ADV212 was initially tested for single event effects (SEE) at the Texas A&M University Cyclotron Facility (TAMU) in July of 2013. Testing revealed a sensitivity to device hang-ups classified as single event functional interrupts (SEFI), soft data errors classified as single event upsets (SEU), and, of particular concern, single event latch-ups (SEL). All error types occurred so frequently as to make accurate measurements of the exposure time, and thus total particle fluence, challenging. To mitigate some of the risk posed by single event latch-ups, circuitry was added to the electrical design to detect a high current event and automatically recycle power and reboot the device. An additional heavy-ion test was scheduled to validate the operation of the recovery circuitry and the continuing functionality of the ADV212 after a substantial number of latch-up events. As a secondary goal, more precise data would be gathered by an improved test method, described in this test report.

  8. A colorimetric sensor array for identification of toxic gases below permissible exposure limits

    PubMed Central

    Feng, Liang; Musto, Christopher J.; Kemling, Jonathan W.; Lim, Sung H.; Suslick, Kenneth S.

    2010-01-01

    A colorimetric sensor array has been developed for the rapid and sensitive detection of 20 toxic industrial chemicals (TICs) at their PELs (permissible exposure limits). The color changes in an array of chemically responsive nanoporous pigments provide facile identification of the TICs with an error rate below 0.7%. PMID:20221484

  9. Prenatal Exposure to Alcohol, Caffeine, Tobacco, and Aspirin: Effects on Fine and Gross Motor Performance in 4-Year-Old Children.

    ERIC Educational Resources Information Center

    Barr, Helen M.; And Others

    1990-01-01

    Multiple regression analyses of data from 449 children indicated statistically significant relationships between moderate levels of prenatal alcohol exposure and increased errors, increased latency, and increased total time on the Wisconsin Fine Motor Steadiness Battery and poorer balance on the Gross Motor Scale. (RH)

  10. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

  11. Modeling spatial and temporal variability of residential air exchange rates for the Near-Road Exposures and Effects of Urban Air Pollutants Study (NEXUS).

    PubMed

    Breen, Michael S; Burke, Janet M; Batterman, Stuart A; Vette, Alan F; Godwin, Christopher; Croghan, Carry W; Schultz, Bradley D; Long, Thomas C

    2014-11-07

    Air pollution health studies often use outdoor concentrations as exposure surrogates. Failure to account for variability of residential infiltration of outdoor pollutants can induce exposure errors and lead to bias and incorrect confidence intervals in health effect estimates. The residential air exchange rate (AER), which is the rate of exchange of indoor air with outdoor air, is an important determinant for house-to-house (spatial) and temporal variations of air pollution infiltration. Our goal was to evaluate and apply mechanistic models to predict AERs for 213 homes in the Near-Road Exposures and Effects of Urban Air Pollutants Study (NEXUS), a cohort study of traffic-related air pollution exposures and respiratory effects in asthmatic children living near major roads in Detroit, Michigan. We used a previously developed model (LBL), which predicts AER from meteorology and questionnaire data on building characteristics related to air leakage, and an extended version of this model (LBLX) that includes natural ventilation from open windows. As a critical and novel aspect of our AER modeling approach, we performed a cross validation, which included both parameter estimation (i.e., model calibration) and model evaluation, based on daily AER measurements from a subset of 24 study homes on five consecutive days during two seasons. The measured AER varied between 0.09 and 3.48 h(-1) with a median of 0.64 h(-1). For the individual model-predicted and measured AER, the median absolute difference was 29% (0.19 h(-1)) for both the LBL and LBLX models. The LBL and LBLX models predicted 59% and 61% of the variance in the AER, respectively. Daily AER predictions for all 213 homes during the three-year study (2010-2012) showed considerable house-to-house variations from building leakage differences, and temporal variations from outdoor temperature and wind speed fluctuations. 
Using this novel approach, NEXUS will be one of the first epidemiology studies to apply calibrated and home-specific AER models, and to include the spatial and temporal variations of AER for over 200 individual homes across multiple years into an exposure assessment in support of improving risk estimates.
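The LBL model referenced here is the Sherman-Grimsrud infiltration model, which combines stack-driven (temperature difference) and wind-driven pressures. The sketch below shows only its general functional form; the leakage area, stack/wind coefficients and units are illustrative placeholders, not the calibrated NEXUS parameters, and the LBLX open-window extension is omitted.

```python
import math

def lbl_aer(ela_m2, volume_m3, delta_t_k, wind_m_s,
            c_stack=0.000145, c_wind=0.000104):
    """Air exchange rate (1/h) from the Sherman-Grimsrud (LBL) functional
    form: infiltration flow = leakage area * sqrt(stack term + wind term).
    The default coefficients are illustrative assumptions only."""
    q = ela_m2 * math.sqrt(c_stack * abs(delta_t_k) + c_wind * wind_m_s ** 2)
    return q * 3600.0 / volume_m3   # convert m3/s flow to air changes per hour

# A leakier envelope, a larger indoor-outdoor temperature difference, or a
# windier day all raise the predicted AER
print(lbl_aer(0.05, 400.0, 20.0, 2.0))
print(lbl_aer(0.05, 400.0, 20.0, 6.0))
```

The model's structure explains the abstract's finding directly: house-to-house AER variation comes from the leakage term, while day-to-day variation comes from the temperature and wind terms.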

  12. Neurobehavioral effects of transportation noise in primary schoolchildren: a cross-sectional study

    PubMed Central

    2010-01-01

    Background Due to shortcomings in the design, no source-specific exposure-effect relations are as yet available describing the effects of noise on children's cognitive performance. This paper reports on a study investigating the effects of aircraft and road traffic noise exposure on the cognitive performance of primary schoolchildren in both the home and the school setting. Methods Participants were 553 children (age 9-11 years) attending 24 primary schools around Schiphol Amsterdam Airport. Cognitive performance was measured by the Neurobehavioral Evaluation System (NES), and a set of paper-and-pencil tests. Multilevel regression analyses were applied to estimate the association between noise exposure and cognitive performance, accounting for demographic and school related confounders. Results Effects of school noise exposure were observed in the more difficult parts of the Switching Attention Test (SAT): children attending schools with higher road or aircraft noise levels made significantly more errors. The correlational pattern and factor structure of the data indicate that the coherence between the neurobehavioral tests and paper-and-pencil tests is high. Conclusions Based on this study and previous scientific literature it can be concluded that performance on simple tasks is less susceptible to the effects of noise than performance on more complex tasks. PMID:20515466

  13. Test-retest of self-reported exposure to artificial tanning devices, self-tanning creams, and sun sensitivity showed consistency.

    PubMed

    Beane Freeman, Laura E; Dennis, Leslie K; Lynch, Charles F; Lowe, John B; Clarke, William R

    2005-04-01

    Exposure to ultraviolet radiation has consistently been linked to an increased risk of melanoma. Epidemiologic studies are susceptible to measurement error, which can distort the magnitude of observed effects. Although the reliability of self-report of many sun exposure factors has been previously described in several studies, self-report of use of artificial tanning devices and self-tanning creams has been less well characterized. A mailed survey was re-administered 2-4 weeks after completion of the initial survey to 76 randomly selected participants in a case-control study of melanoma. Cases and controls were individuals diagnosed in 1999 and 2000 who were ascertained from the Iowa Cancer Registry in 2002. We assessed the consistency of self-reported use of sunlamps and self-tanning creams, sun sensitivity, and history of sunburns. There was substantial reliability in reporting the use of sunlamps or self-tanning creams (cases: kappa = 1.0 for both exposures; controls: kappa = 0.71 and 0.87, respectively). Kappa estimates of 0.62-0.78 were found for overall reliability of several sun sensitivity factors. Overall, the survey instrument demonstrated substantial reproducibility for factors related to the use of sunlamps or tanning beds, self-tanning creams, and sun sensitivity factors.
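The kappa statistic used above to quantify test-retest reliability can be computed directly from paired responses. A minimal self-contained implementation, applied to invented yes/no answers (not the study's data):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement between two paired categorical ratings,
    corrected for the agreement expected by chance."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2        # chance agreement
    return (po - pe) / (1 - pe)

# Test-retest of a yes/no sunlamp-use question (illustrative responses)
first  = ["yes", "no", "no", "yes", "no", "no", "yes", "no"]
second = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(first, second), 2))   # prints 0.75
```

Here 7 of 8 answers agree (0.875 observed), but half that agreement is expected by chance given the yes/no marginals (0.5), giving kappa = 0.75, in the "substantial" range the abstract reports.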

  14. Overcoming ecologic bias using the two-phase study design.

    PubMed

    Wakefield, Jon; Haneuse, Sebastien J-P A

    2008-04-15

    Ecologic (aggregate) data are widely available and widely utilized in epidemiologic studies. However, ecologic bias, which arises because aggregate data cannot characterize within-group variability in exposure and confounder variables, can only be removed by supplementing ecologic data with individual-level data. Here the authors describe the two-phase study design as a framework for achieving this objective. In phase 1, outcomes are stratified by any combination of area, confounders, and error-prone (or discretized) versions of exposures of interest. Phase 2 data, sampled within each phase 1 stratum, provide accurate measures of exposure and possibly of additional confounders. The phase 1 aggregate-level data provide a high level of statistical power and a cross-classification by which individuals may be efficiently sampled in phase 2. The phase 2 individual-level data then provide a control for ecologic bias by characterizing the within-area variability in exposures and confounders. In this paper, the authors illustrate the two-phase study design by estimating the association between infant mortality and birth weight in several regions of North Carolina for 2000-2004, controlling for gender and race. This example shows that the two-phase design removes ecologic bias and produces gains in efficiency over the use of case-control data alone. The authors discuss the advantages and disadvantages of the approach.

  15. Daily morning light therapy is associated with an increase in choroidal thickness in healthy young adults.

    PubMed

    Read, Scott A; Pieterse, Emily C; Alonso-Caneiro, David; Bormann, Rebekah; Hong, Seentinie; Lo, Chai-Hoon; Richer, Rhiannon; Syed, Atif; Tran, Linda

    2018-05-29

    Ambient light exposure is one environmental factor thought to play a role in the regulation of eye growth and refractive error development, and choroidal thickness changes have also been linked to longer term changes in eye growth. Therefore in this study we aimed to examine the influence of a 1-week period of morning light therapy upon choroidal thickness. Twenty-two healthy young adult subjects had a series of macular choroidal thickness measurements collected with spectral domain optical coherence tomography before, and then following, a 7-day period of increased daily light exposure. Increased light exposure was delivered through the use of commercially available light therapy glasses, worn for 30 minutes in the morning each day. A significant increase in subfoveal choroidal thickness (mean increase of +5.4 ± 10.3 µm) was found following 7 days of increased daily light exposure (p = 0.02). An increase in choroidal thickness associated with light therapy was also observed across the central 5 mm macular region. This study provides the first evidence in the human eye that daily morning light therapy results in small magnitude but statistically significant increases in choroidal thickness. These changes may have implications for our understanding of the impact of environmental factors upon eye growth.

  16. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.7, 8.1, and 13.5% error, respectively, into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
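
    The dominant direct-beam term of this tilt error can be sketched with simple geometry: a sensor tilted by an angle β toward the sun over-measures direct irradiance by roughly cos(θz − β)/cos(θz) − 1. A minimal Python sketch of this simplified model (direct beam only, tilt toward the sun; it ignores the diffuse component and spectral weighting the study simulates, so its numbers run slightly higher than those reported):

```python
import math

def direct_tilt_error(zenith_deg: float, tilt_deg: float) -> float:
    """Fractional error in measured direct irradiance for a sensor
    tilted toward the sun (simplified cosine-response model)."""
    num = math.cos(math.radians(zenith_deg - tilt_deg))
    den = math.cos(math.radians(zenith_deg))
    return num / den - 1.0

# Worst-case (sun-facing) tilt error at a 60-degree solar zenith angle
for tilt in (1, 3, 5):
    print(f"tilt {tilt} deg: {100 * direct_tilt_error(60, tilt):.1f}% error")
```
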

  17. Measurement system and model for simultaneously measuring 6DOF geometric errors.

    PubMed

    Zhao, Yuqiong; Zhang, Bin; Feng, Qibo

    2017-09-04

    A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.

  18. An Improved Measurement Method for the Strength of Radiation of Reflective Beam in an Industrial Optical Sensor Based on Laser Displacement Meter.

    PubMed

    Bae, Youngchul

    2016-05-23

    An optical sensor such as a laser range finder (LRF) or laser displacement meter (LDM) uses a laser beam reflected and returned from a target. Such optical sensors have mainly been used to measure the distance between a launch position and the target. However, LRF- and LDM-based optical sensors suffer from numerous errors such as statistical errors, drift errors, cyclic errors, alignment errors and slope errors. Among these, the alignment error, which contains the measurement error for the strength of radiation of the laser beam returned from the target, is the most serious in industrial optical sensors. It is caused by the dependence of the measurement offset upon the strength of radiation of the returned beam incident upon the focusing lens from the target. In this paper, in order to solve these problems, we propose a novel method for measuring the output direct current (DC) voltage that is proportional to the strength of radiation of the returned laser beam in the receiving avalanche photodiode (APD) circuit. We implemented a measuring circuit that is able to provide an exact measurement of the reflected laser beam. By using the proposed method, we can measure the intensity or strength of radiation of the laser beam in real time and with a high degree of precision.

  19. An Improved Measurement Method for the Strength of Radiation of Reflective Beam in an Industrial Optical Sensor Based on Laser Displacement Meter

    PubMed Central

    Bae, Youngchul

    2016-01-01

    An optical sensor such as a laser range finder (LRF) or laser displacement meter (LDM) uses a laser beam reflected and returned from a target. Such optical sensors have mainly been used to measure the distance between a launch position and the target. However, LRF- and LDM-based optical sensors suffer from numerous errors such as statistical errors, drift errors, cyclic errors, alignment errors and slope errors. Among these, the alignment error, which contains the measurement error for the strength of radiation of the laser beam returned from the target, is the most serious in industrial optical sensors. It is caused by the dependence of the measurement offset upon the strength of radiation of the returned beam incident upon the focusing lens from the target. In this paper, in order to solve these problems, we propose a novel method for measuring the output direct current (DC) voltage that is proportional to the strength of radiation of the returned laser beam in the receiving avalanche photodiode (APD) circuit. We implemented a measuring circuit that is able to provide an exact measurement of the reflected laser beam. By using the proposed method, we can measure the intensity or strength of radiation of the laser beam in real time and with a high degree of precision. PMID:27223291

  20. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    ERIC Educational Resources Information Center

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
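
    The full and partial corrections can be sketched directly from the classical attenuation formula r_true = r_obs / √(r_xx·r_yy); dropping one reliability from the denominator gives a partial correction that removes error from only one variable. A minimal sketch with illustrative values (not data from the article):

```python
import math

def disattenuate(r_obs: float, rel_x: float, rel_y: float = 1.0) -> float:
    """Spearman's correction for attenuation.

    Pass rel_y=1.0 to correct for measurement error in x only
    (a partial correction); pass both reliabilities for the full one.
    """
    return r_obs / math.sqrt(rel_x * rel_y)

r = 0.30                          # observed correlation (illustrative)
print(disattenuate(r, 0.8, 0.9))  # full correction, both variables
print(disattenuate(r, 0.8))       # partial: error removed from x only
```
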

  1. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.6, 7.7, and 12.8% error, respectively, into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo.

  2. Colour vision and light sensitivity in tunnel workers previously exposed to acrylamide and N-methylolacrylamide containing grouting agents.

    PubMed

    Goffeng, Lars Ole; Kjuus, Helge; Heier, Mona Skard; Alvestrand, Monica; Ulvestad, Bente; Skaug, Vidar

    2008-01-01

    The aim of the study was to examine possible persisting visual system effects in tunnel workers previously exposed to acrylamide and N-methylolacrylamide during grouting work. Visual field light sensitivity thresholds and colour vision were examined among 44 tunnel workers 2-10 years after exposure to acrylamide and N-methylolacrylamide containing grouting agents. Forty-four tunnel workers not involved in grouting operations served as the control group. Information on exposure and background variables was obtained for all participants from a questionnaire. Visual light sensitivity threshold was measured using the Humphrey Visual Field Static Perimeter 740, program 30-2 Fastpack, with red stimuli on a white background, and colour vision using the Lanthony D-15 Desaturated Color test. Based on the D-15d test results, a colour confusion index (CCI) and a severity index (C-index) were calculated. The exposed group had a significantly higher threshold for detecting single stimuli in all parts of the inner 30 degrees of the visual field compared to the control group. The foveal threshold group difference was 1.4 dB (p=0.002) (mean value, both eyes). On the Lanthony D-15 Desaturated test, the exposed subjects made more errors in sorting blue colours, and a statistically significant increase in C-index was observed. Surrogate measures for duration and intensity of exposure gave no further improvement of the model. The results indicate slightly reduced light sensitivity and reduced colour discrimination among the exposed subjects compared to the controls. The findings may be due to previous exposure to acrylamide containing grouts among the tunnel workers.

  3. Anomalous annealing of floating gate errors due to heavy ion irradiation

    NASA Astrophysics Data System (ADS)

    Yin, Yanan; Liu, Jie; Sun, Youmei; Hou, Mingdong; Liu, Tianqi; Ye, Bing; Ji, Qinggang; Luo, Jie; Zhao, Peixiong

    2018-03-01

    Using the heavy ions provided by the Heavy Ion Research Facility in Lanzhou (HIRFL), the annealing of heavy-ion induced floating gate (FG) errors in 34 nm and 25 nm NAND Flash memories has been studied. The single event upset (SEU) cross section of FG and the evolution of the errors after irradiation are presented as functions of the ion linear energy transfer (LET) value, data pattern and feature size of the device. Different annealing rates are observed in the 34 nm and 25 nm memories for different ion LET values and data patterns. The variation with annealing time of the percentage of different error patterns in the 34 nm and 25 nm memories shows that heavy-ion-induced FG errors anneal mainly in the cells directly hit under low-LET ion exposure, and also in other cells affected by the ions when the LET is higher. The influence of Multiple Cell Upsets (MCUs) on the annealing of FG errors is analyzed. MCUs with high error multiplicity, which account for the majority of the errors, can induce a large percentage of annealed errors.

  4. A comparison of endoscopic localization error rate between operating surgeons and referring endoscopists in colorectal cancer.

    PubMed

    Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A

    2017-03-01

    Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21 %. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare the accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or a non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) the surgeon who also performed the definitive operation (operating surgeon group); and (2) the referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference of at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study; 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon, compared to the referring endoscopist, demonstrated a significantly lower intraoperative localization error rate (1.2 vs. 9.0 %, P = 0.016); a shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); a higher tattoo localization rate (32.1 vs. 21.0 %, P = 0.027); and a lower preoperative repeat endoscopy rate (8.6 vs. 40.8 %, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95 % CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95 % CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error rate.
Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.
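
    For illustration, the reported univariate odds ratio can be reproduced from the cohort sizes and error rates above. The 2x2 cell counts used here (43/433 errors among referring-endoscopist patients, 1/80 among operating-surgeon patients) are back-calculated from the reported percentages, so they are an assumption:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-normal) confidence interval
    for a 2x2 table [[a, b], [c, d]] of events/non-events."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Errors/no-errors: referring endoscopists 43/433, operating surgeons 1/80
print(odds_ratio_ci(43, 433, 1, 80))
```

    With these assumed counts the function returns an odds ratio near 7.94 with a 95 % CI of roughly 1.08-58.5, matching the univariate result quoted in the abstract.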

  5. Fuzzy risk analysis of a modern γ-ray industrial irradiator.

    PubMed

    Castiglia, F; Giardina, M

    2011-06-01

    Fuzzy fault tree analyses were used to investigate accident scenarios that involve radiological exposure to operators working in industrial γ-ray irradiation facilities. The HEART method, a first-generation human reliability analysis method, was used to evaluate the probability of adverse human error in these analyses. This technique was modified on the basis of fuzzy set theory to more directly take into account the uncertainties in the error-promoting factors on which the methodology is based. Moreover, with regard to some identified accident scenarios, fuzzy radiological exposure risk, expressed in terms of potential annual deaths, was evaluated. The calculated fuzzy risks for the examined plant were determined to be well below the reference risk suggested by the International Commission on Radiological Protection.

  6. A manufacturing error measurement methodology for a rotary vector reducer cycloidal gear based on a gear measuring center

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Zhou, Junxiang; Deng, Xiaozhong; Li, Jubo; Xing, Chunrong; Su, Jianxin; Wang, Huiliang

    2018-07-01

    The manufacturing error of a cycloidal gear is the key factor affecting the transmission accuracy of a robot rotary vector (RV) reducer. A methodology is proposed to realize the digitized measurement and data processing of the cycloidal gear manufacturing error based on the gear measuring center, which can quickly and accurately measure and evaluate the manufacturing error of the cycloidal gear by using both whole tooth profile measurement and single tooth profile measurement. By analyzing the particularity of the cycloidal profile and its effect on the actual meshing characteristics of the RV transmission, the cycloid profile measurement strategy is planned, and the theoretical profile model and error measurement model of cycloid-pin gear transmission are established. Through digital processing technology, the theoretical trajectory of the probe and the normal vector of the measured point are calculated. By means of precision measurement principles and error compensation theory, a mathematical model for the accurate calculation and data processing of the manufacturing error is constructed, and the actual manufacturing error of the cycloidal gear is obtained by an iterative optimization solution. Finally, the measurement experiment of the cycloidal gear tooth profile is carried out on the gear measuring center and the HEXAGON coordinate measuring machine, respectively. The measurement results verify the correctness and validity of the measurement theory and method. This methodology will provide the basis for the accurate evaluation and the effective control of manufacturing precision of the cycloidal gear in a robot RV reducer.

  7. Accounting for the measurement error of spectroscopically inferred soil carbon data for improved precision of spatial predictions.

    PubMed

    Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B

    2018-08-01

    Spatial modelling of environmental data commonly only considers spatial variability as the single source of uncertainty. In reality, however, the measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low cost, yet invaluable information needed for digital soil mapping at meaningful spatial scales for land management. However, spectrally inferred soil carbon data are known to be less accurate compared to laboratory analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia, where a combination of laboratory-measured and vis-NIR- and MIR-inferred topsoil and subsoil soil carbon data is available. We investigated the applicability of residual maximum likelihood (REML) and Markov Chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded a greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error if it is not explicitly quantified. Although this study dealt with soil carbon data, this method is amenable for filtering the measurement error of any kind of continuous spatial environmental data. Copyright © 2018 Elsevier B.V. All rights reserved.
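
    The central device, adding the known measurement-error variance to the nugget of the spatial covariance so that predictions smooth over it, can be sketched with the ν = 1/2 (exponential) member of the Matérn family. All parameter values below are illustrative, not the REML/MCMC estimates from the study:

```python
import math

def exp_covariance(h, sill=1.0, range_=250.0, nugget=0.05, err_var=0.4):
    """Exponential (Matern nu=1/2) covariance between two sites a
    distance h apart; the measurement-error variance is added to the
    nugget so it inflates only the diagonal and is filtered out of
    the spatial prediction."""
    c = sill * math.exp(-h / range_)
    if h == 0:
        c += nugget + err_var  # error variance affects only co-located pairs
    return c

# Covariance matrix for three sites along a transect (metres)
sites = [0.0, 100.0, 300.0]
K = [[exp_covariance(abs(a - b)) for b in sites] for a in sites]
for row in K:
    print([round(v, 3) for v in row])
```
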

  8. Physicians involved in the care of patients with high risk of skin cancer should be trained regarding sun protection measures: evidence from a cross sectional study.

    PubMed

    Thomas, M; Rioual, E; Adamski, H; Roguedas, A-M; Misery, L; Michel, M; Chastel, F; Schmutz, J-L; Aubin, F; Marguery, M-C; Meyer, N

    2011-01-01

    Knowledge regarding sun protection is essential to change behaviour and to reduce sun exposure of patients at risk for skin cancer. Patient education regarding appropriate sun protection measures is a priority to reduce skin cancer incidence. The aim of this study was to evaluate the knowledge about sun protection and the recommendations given in a population of non-dermatologist physicians involved in the care of patients at high risk of skin cancer. This is a cross-sectional study. Physicians were e-mailed an anonymous questionnaire evaluating knowledge about risk factors for skin cancer and sun protection, and about the role of the physician in providing sun protection recommendations. Of the responders, 71.4% considered that the risk of skin cancer of their patients was increased when compared with the general population. All the responders knew that UV radiation can contribute to inducing skin cancers, and 71.4% of them declared having adequate knowledge about sun protection measures. A proportion of 64.2% of them declared that they were able to give sun protection advice: using sunscreens (97.8%), wearing covering clothes (95.5%), performing regular medical skin examination (91.1%), avoiding direct sunlight exposure (77.8%), avoiding outdoor activities in the hottest midday hours (73.3%) and practising progressive exposure (44.4%). Non-dermatologist physicians reported a correct knowledge of UV-induced skin cancer risk factors. The majority of responders displayed adequate knowledge of sun protection measures and declared providing patients with sun protection recommendations on a regular basis. Several errors persisted, however. © 2010 The Authors. Journal of the European Academy of Dermatology and Venereology © 2010 European Academy of Dermatology and Venereology.

  9. Space-Borne Laser Altimeter Geolocation Error Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Fang, J.; Ai, Y.

    2018-05-01

    This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot are analysed by simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and to satisfy the accuracy of the laser control point, a design index for each error source is put forward.

  10. Calibration of GafChromic XR-RV3 radiochromic film for skin dose measurement using standardized x-ray spectra and a commercial flatbed scanner

    PubMed Central

    McCabe, Bradley P.; Speidel, Michael A.; Pike, Tina L.; Van Lysel, Michael S.

    2011-01-01

    Purpose: In this study, newly formulated XR-RV3 GafChromic® film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. Methods: The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. Results: The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. 
Conclusions: XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of backscatter slightly modifies the x-ray energy spectrum; however, the increase in film response can be attributed primarily to the increase in total photon fluence at the sensitive layer. Film calibration curves created under free-in-air conditions may be used to measure dose from fluoroscopic quality x-ray beams, including patient backscatter with an error less than the uncertainty of the calibration in most cases. PMID:21626925

  11. Temperature distributions measurement of high intensity focused ultrasound using a thin-film thermocouple array and estimation of thermal error caused by viscous heating.

    PubMed

    Matsuki, Kosuke; Narumi, Ryuta; Azuma, Takashi; Yoshinaka, Kiyoshi; Sasaki, Akira; Okita, Kohei; Takagi, Shu; Matsumoto, Yoichiro

    2013-01-01

    To improve the throughput of high intensity focused ultrasound (HIFU) treatment, we have considered a focus switching method at two points. For this method, it is necessary to evaluate the thermal distribution under exposure to ultrasound. The thermal distribution was measured using a prototype thin-film thermocouple array, which has the advantage of minimizing the influence of the thermocouple on the acoustic and temperature fields. Focus switching was employed to enlarge the area of temperature increase and evaluate the proposed evaluation parameters with respect to safety and uniformity. The results indicate that focus switching can effectively expand the thermal lesion while maintaining a steep thermal boundary. In addition, the influence caused by the thin-film thermocouple array was estimated experimentally. This thermocouple was demonstrated to be an effective tool for the measurement of temperature distributions induced by HIFU.

  12. Landscape Evolution Mechanisms in Gale Crater from In-Situ Measurement of Cosmogenic Noble Gas Isotopes

    NASA Astrophysics Data System (ADS)

    Martin, P.; Farley, K. A.; Mahaffy, P. R.; Malespin, C.; Vasconcelos, P. M.

    2017-12-01

    The Sample Analysis at Mars (SAM) instrument onboard the Curiosity rover can measure the noble gas isotopes contained in drilled rock samples on Mars by heating these samples to 930°C. In combination with bulk chemistry measured by the Alpha Particle X-ray Spectrometer (APXS), cosmogenic nuclide production rates can be determined and an exposure age may be calculated. Three cosmogenic nuclides are measured: 3He and 21Ne, which are produced via spallation of mainly O, Mg, Si, and Al (held mostly in detrital grains); and 36Ar, which is produced from neutron capture on 35Cl (held mostly in secondary materials). To date, three samples have been measured: Cumberland (CB), Windjana (WJ), and Mojave 2 (MJ2). CB yielded 3He, 21Ne, and 36Ar ages of 72 ± 15, 84 ± 28, and 79 ± 24 Ma, respectively [Farley et al., 2014]. Two aliquots of WJ gave error-weighted mean ages of 30 ± 27 Ma (3He), 54 ± 19 Ma (21Ne), and 63 ± 84 Ma (36Ar) [Vasconcelos et al., 2016]. These relatively young ages were interpreted to suggest that a scarp-retreat mechanism is responsible for erosion at both the CB and WJ localities. The most recent measurements on MJ2 do not include the 21Ne isotope because of an instrument issue at this mass. 3He observed in MJ2 is the highest of any sample yet measured, suggesting an exposure age of approximately 1 Ga. In contrast, the calculated exposure age from 36Ar appears to be less than 100 Ma (despite a high uncertainty due to isobaric H35Cl). This discrepancy could be explained by 1) a contribution of extraterrestrial 3He from interplanetary dust or meteoritic fragments, or 2) approximately 1 Ga of prior exposure to the detrital grains. In the latter case 36Ar accumulates only after the Cl-bearing secondary minerals are formed and exposed at the surface. In either scenario the 36Ar measurement provides the better estimate of the recent exposure history. The young upper limit for 36Ar at MJ2 is consistent with the scarp-retreat mechanism observed at CB and WJ. 
These results have important implications in the search for organics in Gale Crater. Complex organic molecules are vulnerable to breakdown via cosmic ray bombardment. The apparent dominance of scarp-retreat landscape evolution in Gale Crater suggests that in the search for potentially biogenic organics, the base of a freshly eroded scarp offers the best potential for organics preservation.
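
    For context, a cosmogenic exposure age follows from the standard relation between measured nuclide concentration and its production rate; for a stable nuclide with no erosion or prior exposure, age = N/P. A toy sketch with hypothetical values (not the SAM/APXS-derived rates):

```python
def exposure_age_ma(concentration: float, production_rate: float) -> float:
    """Surface exposure age in Ma for a stable cosmogenic nuclide:
    concentration (atoms/g) divided by production rate (atoms/g/Ma),
    assuming no erosion and no prior exposure."""
    return concentration / production_rate

# Hypothetical 3He inventory: 7.2e6 atoms/g at 1.0e5 atoms/g/Ma -> 72 Ma
print(exposure_age_ma(7.2e6, 1.0e5))
```
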

  13. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  14. Highway proximity associated with cardiovascular disease risk: the influence of individual-level confounders and exposure misclassification

    PubMed Central

    2013-01-01

    Background Elevated cardiovascular disease risk has been reported with proximity to highways or busy roadways, but proximity measures can be challenging to interpret given potential confounders and exposure error. Methods We conducted a cross sectional analysis of plasma levels of C-Reactive Protein (hsCRP), Interleukin-6 (IL-6), Tumor Necrosis Factor alpha receptor II (TNF-RII) and fibrinogen with distance of residence to a highway in and around Boston, Massachusetts. Distance was assigned using ortho-photo corrected parcel matching, as well as less precise approaches such as simple parcel matching and geocoding addresses to street networks. We used a combined random and convenience sample of 260 adults >40 years old. We screened a large number of individual-level variables including some infrequently collected for assessment of highway proximity, and included a subset in our final regression models. We monitored ultrafine particle (UFP) levels in the study areas to help interpret proximity measures. Results Using the orthophoto corrected geocoding, in a fully adjusted model, hsCRP and IL-6 differed by distance category relative to urban background: 43% (-16%,141%) and 49% (6%,110%) increase for 0-50 m; 7% (-39%,45%) and 41% (6%,86%) for 50-150 m; 54% (-2%,142%) and 18% (-11%,57%) for 150-250 m, and 49% (-4%, 131%) and 42% (6%, 89%) for 250-450 m. There was little evidence for association for TNF-RII or fibrinogen. Ortho-photo corrected geocoding resulted in stronger associations than traditional methods which introduced differential misclassification. Restricted analysis found the effect of proximity on biomarkers was mostly downwind from the highway or upwind where there was considerable local street traffic, consistent with patterns of monitored UFP levels. 
Conclusion We found associations between highway proximity and both hsCRP and IL-6, with non-monotonic patterns explained partly by individual-level factors and differences between proximity and UFP concentrations. Our analyses emphasize the importance of controlling for the risk of differential exposure misclassification from geocoding error. PMID:24090339

  15. Highway proximity associated with cardiovascular disease risk: the influence of individual-level confounders and exposure misclassification.

    PubMed

    Brugge, Doug; Lane, Kevin; Padró-Martínez, Luz T; Stewart, Andrea; Hoesterey, Kyle; Weiss, David; Wang, Ding Ding; Levy, Jonathan I; Patton, Allison P; Zamore, Wig; Mwamburi, Mkaya

    2013-10-03

    Elevated cardiovascular disease risk has been reported with proximity to highways or busy roadways, but proximity measures can be challenging to interpret given potential confounders and exposure error. We conducted a cross-sectional analysis of plasma levels of high-sensitivity C-reactive protein (hsCRP), interleukin-6 (IL-6), tumor necrosis factor alpha receptor II (TNF-RII) and fibrinogen in relation to distance of residence from a highway in and around Boston, Massachusetts. Distance was assigned using orthophoto-corrected parcel matching, as well as less precise approaches such as simple parcel matching and geocoding addresses to street networks. We used a combined random and convenience sample of 260 adults >40 years old. We screened a large number of individual-level variables, including some infrequently collected for assessment of highway proximity, and included a subset in our final regression models. We monitored ultrafine particle (UFP) levels in the study areas to help interpret proximity measures. Using the orthophoto-corrected geocoding, in a fully adjusted model, hsCRP and IL-6 differed by distance category relative to urban background: increases of 43% (-16%, 141%) and 49% (6%, 110%) for 0-50 m; 7% (-39%, 45%) and 41% (6%, 86%) for 50-150 m; 54% (-2%, 142%) and 18% (-11%, 57%) for 150-250 m; and 49% (-4%, 131%) and 42% (6%, 89%) for 250-450 m. There was little evidence of association for TNF-RII or fibrinogen. Orthophoto-corrected geocoding resulted in stronger associations than traditional methods, which introduced differential misclassification. Restricted analysis found that the effect of proximity on biomarkers was mostly downwind from the highway, or upwind where there was considerable local street traffic, consistent with patterns of monitored UFP levels. We found associations between highway proximity and both hsCRP and IL-6, with non-monotonic patterns explained partly by individual-level factors and differences between proximity and UFP concentrations. Our analyses emphasize the importance of controlling for the risk of differential exposure misclassification from geocoding error.

  16. Incorporating measurement error in n = 1 psychological autoregressive modeling

    PubMed Central

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
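    As a rough illustration of the attenuation effect described above, the following sketch simulates a latent AR(1) process, adds white measurement noise, and re-estimates the autoregressive parameter. All parameter values are hypothetical, and a simple moment estimator stands in for the Bayesian and frequentist fits compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n, sigma=1.0):
    """Latent AR(1) process: x[t] = phi * x[t-1] + e[t]."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def lag1_estimate(y):
    """Simple moment (lag-1 autocorrelation) estimator of the AR(1) parameter."""
    y = y - y.mean()
    return (y[:-1] @ y[1:]) / (y @ y)

phi_true = 0.5
x = simulate_ar1(phi_true, 20000)

# Add white measurement noise with variance equal to the process variance,
# so roughly half of the observed variance is measurement error (the upper
# end of the 30-50% range reported in the abstract).
y = x + rng.normal(0.0, x.std(), size=x.size)

phi_latent = lag1_estimate(x)
phi_observed = lag1_estimate(y)
# Classical attenuation: E[phi_observed] ~ phi * var(x) / (var(x) + var(noise))
print(phi_latent, phi_observed)
```

    With equal process and noise variances, the observed-series estimate is attenuated toward roughly half the true autoregressive parameter.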

  17. Correcting AUC for Measurement Error.

    PubMed

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which require the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
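    The classical normality-based correction that the abstract contrasts with the proposed method can be sketched as follows. This assumes a binormal biomarker and a known measurement-error variance; all values are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(1)

def auc_mann_whitney(cases, controls):
    """Nonparametric (rank-based) AUC estimate."""
    n0, n1 = len(controls), len(cases)
    ranks = rankdata(np.concatenate([controls, cases]))
    return (ranks[n0:].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

n = 20000
sigma, tau = 1.0, 1.0          # biomarker SD and measurement-error SD
controls = rng.normal(0.0, sigma, n)
cases = rng.normal(1.0, sigma, n)

auc_true = auc_mann_whitney(cases, controls)   # error-free biomarker
auc_obs = auc_mann_whitney(cases + rng.normal(0, tau, n),
                           controls + rng.normal(0, tau, n))

# Classical correction under normality: with reliability
# rho = sigma^2 / (sigma^2 + tau^2),
# AUC_corrected = Phi( Phi^{-1}(AUC_observed) / sqrt(rho) ).
rho = sigma**2 / (sigma**2 + tau**2)
auc_corrected = norm.cdf(norm.ppf(auc_obs) / np.sqrt(rho))
print(auc_true, auc_obs, auc_corrected)
```

    Measurement error attenuates the observed AUC toward 0.5; rescaling on the probit scale by the square root of the reliability recovers the error-free value under these normality assumptions.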

  18. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry measures all parts of the human body surface, and the measured data form the basis for analysis and study of the human body, for establishing and revising garment sizes, and for building and running online clothing stores. In this paper, several groups of measured data are collected, and the data errors are analysed by examining error frequencies and applying the analysis-of-variance method from mathematical statistics. The paper also assesses the accuracy of the measured data and the difficulty of measuring particular parts of the human body, investigates the causes of the data errors, and summarizes key points for minimizing errors where possible. By analysing the measured data on the basis of error frequency, it provides reference material to support the development of the garment industry.

  19. Use of units of measurement error in anthropometric comparisons.

    PubMed

    Lucas, Teghan; Henneberg, Maciej

    2017-09-01

    Anthropometrists attempt to minimise measurement errors; however, errors cannot be eliminated entirely. Currently, measurement errors are simply reported. Measurement errors should be included into analyses of anthropometric data. This study proposes a method which incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)', and applies it to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched (> 0) on measurements expressed in millimetres but can in units of TEM (= 0). Only 81 women can fit into any standard clothing size when matched using centimetres; with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
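    A minimal sketch of matching in 'units of TEM' follows, with hypothetical dimensions, TEM values, and measurements. For simplicity this variant declares two profiles identical when every dimension agrees to within one TEM, rather than rounding values to whole TEM units as in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical TEMs (mm) for four dimensions (e.g. stature, chest girth,
# arm length, shoulder breadth) and one person's true values (mm).
tem = np.array([4.0, 8.0, 5.0, 6.0])
person = np.array([1702.0, 914.0, 733.0, 406.0])

# Two repeated measurements of the same person, each off by < 0.4 TEM.
m1 = person + rng.uniform(-0.4, 0.4, 4) * tem
m2 = person + rng.uniform(-0.4, 0.4, 4) * tem
# A different person, several TEM away in every dimension.
m3 = person + np.array([30.0, 40.0, 25.0, 35.0])

def matches(a, b, tem):
    """Identical in 'units of TEM': every dimension agrees to < 1 TEM."""
    return bool(np.all(np.abs(a - b) < tem))

d_mm = np.linalg.norm(m1 - m2)   # millimetre distance: never exactly 0
print(d_mm, matches(m1, m2, tem), matches(m1, m3, tem))
```

    In millimetres the Euclidean distance between repeats is always positive, so exact matching fails; in TEM units the repeats match while a genuinely different individual does not.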

  20. Baseline Error Analysis and Experimental Validation for Height Measurement of Formation Insar Satellite

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, T.; Zhang, X.; Geng, X.

    2018-04-01

    In this paper, we propose a stochastic model of InSAR height measurement that accounts for the interferometric geometry. The model directly describes the relationship between baseline error and height measurement error. A simulation analysis using TanDEM-X parameters was then carried out to quantitatively evaluate the influence of baseline error on height measurement. Furthermore, a full simulation-based validation of the InSAR stochastic model was performed on the basis of SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation behaviour of InSAR height measurement were fully evaluated.

  1. The Contributions of Near Work and Outdoor Activity to the Correlation Between Siblings in the Collaborative Longitudinal Evaluation of Ethnicity and Refractive Error (CLEERE) Study

    PubMed Central

    Jones-Jordan, Lisa A.; Sinnott, Loraine T.; Graham, Nicholas D.; Cotter, Susan A.; Kleinstein, Robert N.; Manny, Ruth E.; Mutti, Donald O.; Twelker, J. Daniel; Zadnik, Karla

    2014-01-01

    Purpose. We determined the correlation between sibling refractive errors adjusted for shared and unique environmental factors using data from the Collaborative Longitudinal Evaluation of Ethnicity and Refractive Error (CLEERE) Study. Methods. Refractive error from subjects' last study visits was used to estimate the intraclass correlation coefficient (ICC) between siblings. The correlation models used environmental factors (diopter-hours and outdoor/sports activity) assessed annually from parents by survey to adjust for shared and unique environmental exposures when estimating the heritability of refractive error (2*ICC). Results. Data from 700 families contributed to the between-sibling correlation for spherical equivalent refractive error. The mean age of the children at the last visit was 13.3 ± 0.90 years. Siblings engaged in similar amounts of near and outdoor activities (correlations ranged from 0.40–0.76). The ICC for spherical equivalent, controlling for age, sex, ethnicity, and site was 0.367 (95% confidence interval [CI] = 0.304, 0.420), with an estimated heritability of no more than 0.733. After controlling for these variables, and near and outdoor/sports activities, the resulting ICC was 0.364 (95% CI = 0.304, 0.420; estimated heritability no more than 0.728, 95% CI = 0.608, 0.850). The ICCs did not differ significantly between male–female and single sex pairs. Conclusions. Adjusting for shared family and unique, child-specific environmental factors only reduced the estimate of refractive error correlation between siblings by 0.5%. Consistent with a lack of association between myopia progression and either near work or outdoor/sports activity, substantial common environmental exposures had little effect on this correlation. Genetic effects appear to have the major role in determining the similarity of refractive error between siblings. PMID:25205866

  2. Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.

    2016-12-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion. Wrongly prescribed error levels can lead to over- or under-fitting of data, yet commonly used models of measurement error are relatively simplistic. With the heightened interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets; one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with reported speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model fits the observed measurement errors better and shows superior inversion and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the number of the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
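    The linear part of such an error model can be sketched by fitting reciprocal errors against transfer resistance. The data below are synthetic, with an assumed error floor plus proportionality effect; the paper's full model additionally groups errors by electrode number, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic transfer resistances (ohm) spanning three decades.
r_true = 10 ** rng.uniform(-1.0, 2.0, 500)

# Assumed error behaviour: an absolute floor plus a component proportional
# to the transfer resistance (the "proportionality effect").
a_true, b_true = 0.005, 0.02
sd = a_true + b_true * r_true
r_normal = r_true + rng.normal(0.0, sd)    # normal measurement
r_recip = r_true + rng.normal(0.0, sd)     # reciprocal measurement

# Linear error model fitted to the reciprocal errors: |e| ~ a + b * R
e = np.abs(r_normal - r_recip)
r_mean = 0.5 * np.abs(r_normal + r_recip)
b_hat, a_hat = np.polyfit(r_mean, e, 1)
print(a_hat, b_hat)
```

    In an inversion, the fitted `a_hat + b_hat * R` per measurement would then populate the diagonal data-weighting matrix mentioned in the abstract.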

  3. An interpretation of radiosonde errors in the atmospheric boundary layer

    Treesearch

    Bernadette H. Connell; David R. Miller

    1995-01-01

    The authors review sources of error in radiosonde measurements in the atmospheric boundary layer and analyze errors of two radiosonde models manufactured by Atmospheric Instrumentation Research, Inc. The authors focus on temperature and humidity lag errors and wind errors. Errors in measurement of azimuth and elevation angles and pressure over short time intervals and...

  4. [Improvement of the knowledge on allergic cross-reactions between two drug groups: beta-lactams and NSAIDS].

    PubMed

    Sánchez-Quiles, I; Nájera-Pérez, M D; Calleja-Hernández, M Á; Martinez-Martínez, F; Belchí-Hernández, J; Canteras, M

    2013-01-01

    To identify opportunities for improving the knowledge of health care professionals (particularly physicians, pharmacists, and nurses) on cross-allergic reactions (CAR) to penicillins and NSAIDs. Quasi-experimental prospective pre-post study at a 412-bed hospital. An assessment of the knowledge on CAR to penicillins and NSAIDs was performed by means of anonymous questionnaires before (1st questionnaire) and after (2nd questionnaire) the implementation of a series of improvement measures: a protocol for the "patient allergic to drugs", a pocket card, a poster with summarized information, and informative talks. The questionnaires served as the CRF, and the statistical analysis was done with the SPSS v18.0 software. The mean number of errors in the first questionnaire on CARs in penicillin-allergic patients and on CARs in NSAID-allergic patients was 20.53 and 27.62, respectively. The mean number of errors in the second questionnaire on CARs in penicillin-allergic patients and on CARs in NSAID-allergic patients was 2.27 and 7.26, respectively. All results were significant at p < 0.005. There is insufficient knowledge on CARs to penicillins and NSAIDs, which justifies improvement measures. After the implementation of improvement measures, knowledge on CARs to penicillins and NSAIDs increased in the study groups. Copyright © 2013 SEFH. Published by AULA MEDICA. All rights reserved.

  5. Correction of odds ratios in case-control studies for exposure misclassification with partial knowledge of the degree of agreement among experts who assessed exposures.

    PubMed

    Burstyn, Igor; Gustafson, Paul; Pintos, Javier; Lavoué, Jérôme; Siemiatycki, Jack

    2018-02-01

    Estimates of association between exposures and diseases are often distorted by error in exposure classification. When the validity of exposure assessment is known, this can be used to adjust these estimates. When exposure is assessed by experts, even if validity is not known, we sometimes have information about interrater reliability. We present a Bayesian method for translating the knowledge of interrater reliability, which is often available, into knowledge about validity, which is often needed but not directly available, and applying this to correct odds ratios (OR). The method allows for inclusion of observed potential confounders in the analysis, as is common in regression-based control for confounding. Our method uses a novel type of prior on sensitivity and specificity. The approach is illustrated with data from a case-control study of lung cancer risk and occupational exposure to diesel engine emissions, in which exposure assessment was made by detailed job history interviews with study subjects followed by expert judgement. Using interrater agreement measured by kappas (κ), we estimate sensitivity and specificity of exposure assessment and derive misclassification-corrected confounder-adjusted OR. The misclassification-corrected and confounder-adjusted OR obtained with the most defensible prior had a posterior distribution centre of 1.6 with 95% credible interval (CrI) 1.1 to 2.6. This was on average greater in magnitude than the frequentist point estimate of 1.3 (95% CI 1.0 to 1.7). The method yields insights into the degree of exposure misclassification and appears to reduce attenuation bias due to misclassification of exposure, while increasing the estimated uncertainty. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
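    For comparison, the simple deterministic matrix-method correction with known sensitivity and specificity (nondifferential error) can be sketched as follows. The counts and validity values are illustrative, not those of the diesel-exhaust study, and the paper's Bayesian treatment of uncertain Se/Sp is not reproduced here.

```python
def corrected_odds_ratio(a, b, c, d, se, sp):
    """
    Matrix-method correction of a 2x2 table (a, b = exposed/unexposed cases;
    c, d = exposed/unexposed controls) for nondifferential exposure
    misclassification with known sensitivity (se) and specificity (sp).
    """
    def true_prevalence(exposed, unexposed):
        p_obs = exposed / (exposed + unexposed)
        # Back-correct the observed exposure prevalence
        return (p_obs + sp - 1.0) / (se + sp - 1.0)

    p1 = true_prevalence(a, b)   # corrected exposure prevalence in cases
    p0 = true_prevalence(c, d)   # corrected exposure prevalence in controls
    return (p1 / (1.0 - p1)) / (p0 / (1.0 - p0))

# Illustrative (hypothetical) counts and validity values
a, b, c, d = 100, 900, 80, 920
or_naive = (a * d) / (b * c)
or_corrected = corrected_odds_ratio(a, b, c, d, se=0.70, sp=0.95)
print(round(or_naive, 2), round(or_corrected, 2))
```

    As in the abstract, correcting for imperfect exposure classification moves the odds ratio away from the null (here from about 1.28 to about 1.72).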

  6. Development of a software based automatic exposure control system for use in image guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Morton, Daniel R.

    Modern image guided radiation therapy involves the use of an isocentrically mounted imaging system to take radiographs of a patient's position before the start of each treatment. Image guidance helps to minimize errors associated with a patient's setup, but the radiation dose received by patients from imaging must be managed to ensure no additional risks. The Varian On-Board Imager (OBI) (Varian Medical Systems, Inc., Palo Alto, CA) does not have an automatic exposure control system and therefore requires exposure factors to be selected manually. Without patient-specific exposure factors, images may become saturated and require multiple unnecessary exposures. A software-based automatic exposure control system has been developed to predict optimal, patient-specific exposure factors. The OBI system was modelled in terms of the x-ray tube output and detector response in order to calculate the level of detector saturation for any exposure situation. Digitally reconstructed radiographs are produced via ray-tracing through the patients' volumetric datasets that are acquired for treatment planning. The ray-trace determines the attenuation of the patient and the subsequent x-ray spectra incident on the imaging detector. The resulting spectra are used in the detector response model to determine the exposure levels required to minimize detector saturation. Images calculated for various phantoms showed good agreement with the images that were acquired on the OBI. Overall, regions of detector saturation were accurately predicted, and the detector response for non-saturated regions in images of an anthropomorphic phantom was calculated to be generally within 5 to 10% of the measured values. Calculations performed on patient data gave results similar to the phantom images, with the calculated images able to determine detector saturation in close agreement with images acquired during treatment. Overall, it was shown that the system model and calculation method could potentially be used to predict patients' exposure factors before their treatment begins, thus preventing the need for multiple exposures.
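    A greatly simplified sketch of the underlying idea, assuming Beer-Lambert attenuation with a single effective coefficient and a linear detector (hypothetical constants; the thesis models full x-ray spectra and detector response, and open-field regions outside the patient are ignored here):

```python
import numpy as np

# Hypothetical constants: effective attenuation of water-equivalent tissue,
# a linear detector gain, and a 16-bit saturation level.
MU_EFF = 0.02        # 1/mm
GAIN = 5.0           # detector counts per mAs at unit transmission
SAT_LEVEL = 65535.0  # counts

def detector_signal(mas, path_mm):
    """Predicted detector counts for a ray with given water-equivalent path."""
    return GAIN * mas * np.exp(-MU_EFF * path_mm)

def choose_mas(path_lengths_mm):
    """Largest tube output (mAs) that keeps the most transmissive ray
    through the patient at or below detector saturation."""
    t_max = np.exp(-MU_EFF * np.min(path_lengths_mm))
    return SAT_LEVEL / (GAIN * t_max)

# Assumed ray-trace result: water-equivalent path lengths for three rays
paths = np.array([50.0, 100.0, 150.0])
mas = choose_mas(paths)
signals = detector_signal(mas, paths)
print(mas, signals)
```

    The thinnest path sets the exposure ceiling; thicker paths then image below saturation, which is the behaviour the thesis predicts per patient from the DRR ray-trace.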

  7. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    NASA Astrophysics Data System (ADS)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential to reduce the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation between them. All simulation results coincide with the theoretical analysis.

  8. Image reduction pipeline for the detection of variable sources in highly crowded fields

    NASA Astrophysics Data System (ADS)

    Gössl, C. A.; Riffeser, A.

    2002-01-01

    We present a reduction pipeline for CCD (charge-coupled device) images which was built to search for variable sources in highly crowded fields like the M 31 bulge and to handle extensive databases due to large time series. We describe all steps of the standard reduction in detail with emphasis on the realisation of per-pixel error propagation: bias correction, treatment of bad pixels, flatfielding, and filtering of cosmic rays. The problems of conservation of PSF (point spread function) and error propagation in our image alignment procedure as well as the detection algorithm for variable sources are discussed: we build difference images via image convolution with a technique called OIS (optimal image subtraction; Alard & Lupton 1998), proceed with an automatic detection of variable sources in noise-dominated images and finally apply PSF-fitting, relative photometry to the sources found. For the WeCAPP project (Riffeser et al. 2001) we achieve 3σ detections for variable sources with an apparent brightness of e.g. m = 24.9 mag at their minimum and a variation of Δm = 2.4 mag (or m = 21.9 mag brightness minimum and a variation of Δm = 0.6 mag) on a background signal of 18.1 mag/arcsec² based on a 500 s exposure with 1.5 arcsec seeing at a 1.2 m telescope. The complete per-pixel error propagation allows us to give accurate errors for each measurement.
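    Per-pixel error propagation through bias subtraction and flat-fielding can be sketched as first-order Gaussian propagation (synthetic frames and uncertainty values, not WeCAPP data):

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (64, 64)

# Synthetic frames (ADU) with per-pixel 1-sigma uncertainty maps
raw = rng.normal(1000.0, 30.0, shape)
sig_raw = np.full(shape, 30.0)
bias = rng.normal(300.0, 5.0, shape)
sig_bias = np.full(shape, 5.0)
flat = rng.normal(1.0, 0.01, shape)
sig_flat = np.full(shape, 0.01)

# Reduced science frame: (raw - bias) / flat
sci = (raw - bias) / flat

# First-order per-pixel variance propagation for the same expression
var_sci = (sig_raw**2 + sig_bias**2) / flat**2 \
        + (raw - bias) ** 2 * sig_flat**2 / flat**4
sig_sci = np.sqrt(var_sci)
print(float(sig_sci.mean()))
```

    Carrying `sig_sci` alongside `sci` through every later step (alignment, subtraction, photometry) is what allows the pipeline to quote an accurate error for each measurement.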

  9. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.

  10. Fusing metabolomics data sets with heterogeneous measurement errors

    PubMed Central

    Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.

    2018-01-01

    Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by differences in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correct for the measurement error heterogeneity: by transformation of the raw data, by weighted filtering before modelling, and by a modelling approach using a weighted sum of residuals. For an illustration of these different approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. In conclusion, the filtering and modelling approaches that both estimate a model of the measurement error did not outperform the data transformation approaches for this application. This is probably due to the limited difference in measurement error and the fact that estimation of measurement error models is unstable due to the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490

  11. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    PubMed

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve (AUC) for saroglitazar. Healthy-subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and the corresponding AUC(0-t) (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of correlations of 1, 2, and 3 concentration-time points with the AUC(0-t) of saroglitazar. Only models with regression coefficients (R²) > 0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time-point models achieved R² > 0.90. Among the various 3-concentration-time-point models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC(0-t) prediction of saroglitazar. The same models, when applied to the AUC(0-t) prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30%, and correlation (r) was at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time-point limited sampling model predicts the exposure of saroglitazar (ie, AUC(0-t)) within the predefined bias and imprecision limits; the same model was also used to predict AUC(0-∞). The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within the predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
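    The model-building step can be sketched with synthetic one-compartment profiles (hypothetical kinetic parameters, not saroglitazar's): ordinary least squares relates three sampled concentrations, plus an intercept, to the trapezoidal AUC.

```python
import numpy as np

rng = np.random.default_rng(5)

t = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24, 48, 72])  # hours

def concentration(ka, ke, scale):
    """One-compartment oral-absorption profile (hypothetical kinetics)."""
    return scale * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

# 50 synthetic subjects with randomly varying kinetics
profiles = np.array([
    concentration(rng.uniform(1.0, 3.0), rng.uniform(0.08, 0.12),
                  rng.uniform(80.0, 120.0))
    for _ in range(50)
])
# AUC(0-72 h) by the trapezoidal rule
auc = (0.5 * (profiles[:, 1:] + profiles[:, :-1]) * np.diff(t)).sum(axis=1)

# 3-point limited sampling model: AUC ~ C(0.5 h) + C(2 h) + C(8 h) + intercept
X = np.column_stack([profiles[:, [1, 3, 5]], np.ones(len(auc))])
coef, *_ = np.linalg.lstsq(X, auc, rcond=None)
pred = X @ coef
r2 = 1.0 - ((auc - pred) ** 2).sum() / ((auc - auc.mean()) ** 2).sum()
print(round(r2, 4))
```

    With a sufficiently informative trio of time points, the linear model explains most of the between-subject AUC variation, which is the screening criterion (R² > 0.90) used in the study.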

  12. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
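    The aliasing mechanism can be illustrated for a generic circular-pipe BPM with equally spaced point detectors and a filament beam, using the exact image-current distribution (an idealized sketch, not the DARHT-II geometry):

```python
import numpy as np

def wall_current(phi, r, theta, a=1.0):
    """Image-current density on a round pipe (radius a) from a filament
    beam of unit current at polar position (r, theta)."""
    return (a**2 - r**2) / (
        2.0 * np.pi * a * (a**2 + r**2 - 2.0 * a * r * np.cos(phi - theta)))

def estimated_x(r, theta, n_det, a=1.0):
    """First-harmonic (difference-over-sum style) x estimate from n_det
    equally spaced point detectors."""
    phi = 2.0 * np.pi * np.arange(n_det) / n_det
    s = wall_current(phi, r, theta, a)
    return a * (s * np.cos(phi)).sum() / s.sum()

# Systematic (aliasing) error for a beam displaced to x = 0.5 a
err4 = abs(estimated_x(0.5, 0.0, 4) - 0.5)
err8 = abs(estimated_x(0.5, 0.0, 8) - 0.5)
# A nearly centred beam is measured almost exactly even with 4 detectors
err4_small = abs(estimated_x(0.05, 0.0, 4) - 0.05)
print(err4, err8, err4_small)
```

    The discrete array aliases high azimuthal harmonics of the wall current into the first-harmonic position estimate; the error is negligible near the axis, grows rapidly with displacement, and drops sharply when more than four detectors are used, consistent with the behaviour described above.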

  13. Error analysis and experiments of attitude measurement using laser gyroscope

    NASA Astrophysics Data System (ADS)

    Ren, Xin-ran; Ma, Wen-li; Jiang, Ping; Huang, Jin-long; Pan, Nian; Guo, Shuai; Luo, Jun; Li, Xiao

    2018-03-01

    The precision of photoelectric tracking and measurement equipment mounted on vehicles and vessels is degraded by the platform's movement. Specifically, the platform's movement leads to deviation or loss of the target; it also causes jitter of the visual axis and thereby produces image blur. In order to improve the precision of photoelectric equipment, the attitude of the equipment fixed to the platform must be measured. Currently, laser gyroscopes are widely used to measure the attitude of the platform. However, the measurement accuracy of a laser gyro is affected by its zero bias, scale factor, installation error and random error. In this paper, these errors were analyzed and compensated based on the laser gyro's error model. Static and dynamic experiments were carried out on a single-axis turntable, and the error model was verified by comparing the gyro's output with an encoder with an accuracy of 0.1 arc sec. The accumulated attitude error over one hour decreased from 7000 arc sec to 5 arc sec after error compensation. The method used in this paper is suitable for decreasing laser gyro errors in inertial measurement applications.
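    A single-axis sketch of such an error model and its compensation (hypothetical bias, scale-factor and noise values; installation misalignment is omitted because only one axis is simulated):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical laser-gyro error parameters, assumed known from calibration
bias = 2.0          # deg/h zero bias
scale_err = 5e-4    # dimensionless scale-factor error
noise_sd = 0.1      # deg/h white rate noise per 1-s sample

dt = 1.0 / 3600.0                              # 1-s step, in hours
t = np.arange(0.0, 1.0, dt)                    # one hour of data
omega_true = 10.0 * np.sin(2.0 * np.pi * t)    # deg/h test motion

# Error model: measured rate = (1 + s) * true rate + bias + noise
omega_meas = (1.0 + scale_err) * omega_true + bias \
             + rng.normal(0.0, noise_sd, t.size)

# Compensation using the calibrated error model
omega_comp = (omega_meas - bias) / (1.0 + scale_err)

# Accumulated attitude error over the hour, in arc seconds
drift_raw = abs(np.sum(omega_meas - omega_true) * dt) * 3600.0
drift_comp = abs(np.sum(omega_comp - omega_true) * dt) * 3600.0
print(drift_raw, drift_comp)
```

    With these assumed values, the uncompensated bias alone accumulates to roughly 7200 arc sec over an hour, while the compensated residual is set by the random noise, a reduction of the same order as that reported in the abstract.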

  14. Detection of Chemical Precursors of Explosives

    NASA Technical Reports Server (NTRS)

    Li, Jing

    2012-01-01

    Certain selected chemicals associated with terrorist activities are too unstable to be prepared in final form. These chemicals are often prepared as precursor components, to be combined at a time immediately preceding the detonation. One example is a liquid explosive, which usually requires an oxidizer, an energy source, and a chemical or physical mechanism to combine the other components. Detection of the oxidizer (e.g., H2O2) or the energy source (e.g., nitromethane) is often possible, but must be performed in a short time interval (e.g., 5-15 seconds) and in an environment with a very small concentration (e.g., 1-100 ppm), because the target chemical(s) is carried in a sealed container. These needs are met by this invention, which provides a system and associated method for detecting one or more chemical precursors (components) of a multi-component explosive compound. Different carbon nanotubes (CNTs) are loaded (by doping, impregnation, coating, or other functionalization process) for detection of different chemical substances that are the chemical precursors, respectively, if these precursors are present in a gas to which the CNTs are exposed. After exposure to the gas, a measured electrical parameter (e.g., voltage or current that correlates to impedance, conductivity, capacitance, inductance, etc.) changes with time and concentration in a predictable manner if a selected chemical precursor is present, and will approach an asymptotic value promptly after exposure to the precursor. The measured voltage or current are compared with one or more sequences of their reference values for one or more known target precursor molecules, and a most probable concentration value is estimated for each one, two, or more target molecules. An error value is computed, based on differences of voltage or current for the measured and reference values, using the most probable concentration values. Where the error value is less than a threshold, the system concludes that the target molecule is likely present. Presence of one, two, or more target molecules in the gas can be sensed from a single set of measurements.
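    The matching step can be sketched as least-squares fitting of a measured response against a small reference library; the response model, time constants, gains, concentrations, and threshold below are entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

t = np.linspace(0.0, 15.0, 40)   # seconds after exposure

def response(conc, tau, gain):
    """Hypothetical first-order sensor response rising to an asymptote."""
    return gain * conc * (1.0 - np.exp(-t / tau))

# Reference library of (time constant, gain) per target precursor,
# assumed to come from prior calibration.
library = {"H2O2": (3.0, 0.02), "nitromethane": (5.0, 0.035)}

# Simulated measurement: 40 ppm of H2O2 plus sensor noise
measured = response(40.0, 3.0, 0.02) + rng.normal(0.0, 0.01, t.size)

best = None
for name, (tau, gain) in library.items():
    basis = response(1.0, tau, gain)
    c_hat = (basis @ measured) / (basis @ basis)   # most probable concentration
    err = np.sqrt(np.mean((measured - response(c_hat, tau, gain)) ** 2))
    if best is None or err < best[2]:
        best = (name, c_hat, err)

name, c_hat, err = best
detected = err < 0.05   # error threshold for declaring the target present
print(name, round(c_hat, 1), detected)
```

    The candidate whose scaled reference curve leaves the smallest residual gives both the identification and the most probable concentration; the residual itself plays the role of the error value compared against the threshold.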

  15. Designing and evaluating an automated system for real-time medication administration error detection in a neonatal intensive care unit.

    PubMed

    Ni, Yizhao; Lingren, Todd; Hall, Eric S; Leonard, Matthew; Melton, Kristin; Kirkendall, Eric S

    2018-05-01

    Timely identification of medication administration errors (MAEs) promises great benefits for mitigating medication errors and associated harm. Despite previous efforts utilizing computerized methods to monitor medication errors, sustaining effective and accurate detection of MAEs remains challenging. In this study, we developed a real-time MAE detection system and evaluated its performance prior to system integration into institutional workflows. Our prospective observational study included automated MAE detection of 10 high-risk medications and fluids for patients admitted to the neonatal intensive care unit at Cincinnati Children's Hospital Medical Center during a 4-month period. The automated system extracted real-time medication use information from the institutional electronic health records and identified MAEs using logic-based rules and natural language processing techniques. The MAE summary was delivered via a real-time messaging platform to promote reduction of patient exposure to potential harm. System performance was validated using a physician-generated gold standard of MAE events, and results were compared with those of current practice (incident reporting and trigger tools). Physicians identified 116 MAEs from 10 104 medication administrations during the study period. Compared to current practice, the sensitivity with automated MAE detection was improved significantly from 4.3% to 85.3% (P = .009), with a positive predictive value of 78.0%. Furthermore, the system showed potential to reduce patient exposure to harm, from 256 min to 35 min (P < .001). The automated system demonstrated improved capacity for identifying MAEs while guarding against alert fatigue. It also showed promise for reducing patient exposure to potential harm following MAE events.

  16. A comparative study of space radiation organ doses and associated cancer risks using PHITS and HZETRN.

    PubMed

    Bahadori, Amir A; Sato, Tatsuhiko; Slaba, Tony C; Shavers, Mark R; Semones, Edward J; Van Baalen, Mary; Bolch, Wesley E

    2013-10-21

    NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID), are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.
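    The effective dose referenced here is the ICRP tissue-weighted sum of organ dose equivalents, E = Σ_T w_T H_T, with the weighting factors summing to 1. A minimal sketch of that bookkeeping (the caller supplies the weights; a real calculation would use the full ICRP 103 factor set):

```python
def effective_dose(organ_dose_equivalents_sv, tissue_weights):
    """Effective dose E = sum over tissues T of w_T * H_T, where H_T is
    the organ dose equivalent (Sv) and w_T the tissue weighting factor.
    The w_T must sum to 1 by definition."""
    if abs(sum(tissue_weights.values()) - 1.0) > 1e-6:
        raise ValueError("tissue weighting factors must sum to 1")
    return sum(w * organ_dose_equivalents_sv[t]
               for t, w in tissue_weights.items())
```

    Because both transport codes in the study produce the same organ-level quantities H_T, this weighted sum is where their integral risk measures are compared.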

  17. A comparative study of space radiation organ doses and associated cancer risks using PHITS and HZETRN

    NASA Astrophysics Data System (ADS)

    Bahadori, Amir A.; Sato, Tatsuhiko; Slaba, Tony C.; Shavers, Mark R.; Semones, Edward J.; Van Baalen, Mary; Bolch, Wesley E.

    2013-10-01

    NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID), are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.

  18. GPS measurement error gives rise to spurious 180 degree turning angles and strong directional biases in animal movement data.

    PubMed

    Hurford, Amy

    2009-05-20

    Movement data are frequently collected using Global Positioning System (GPS) receivers, but recorded GPS locations are subject to error. While past studies have suggested methods to improve location accuracy, mechanistic movement models rely on distributions of turning angles and directional biases, and these data present a new challenge in recognizing and reducing the effect of measurement error. I collected locations from a stationary GPS collar, analyzed a probabilistic model and used Monte Carlo simulations to understand how measurement error affects measured turning angles and directional biases. Results from the three methods were in complete agreement: measurement error gives rise to a systematic bias whereby a stationary animal is most likely to be measured as turning 180 degrees or moving towards a fixed point in space. These spurious effects occur in GPS data when the measured distance between locations is <20 meters. Measurement error must therefore be considered as a possible cause of 180 degree turning angles in GPS data. The consequences of failing to account for measurement error include predicting overly tortuous movement, predicting numerous returns to previously visited locations, and inaccurately estimating species ranges, core areas, and the frequency of crossing linear features. By understanding the effect of GPS measurement error, ecologists can disregard these false signals and more accurately design conservation plans for endangered wildlife.
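    The core result, that pure location noise around a stationary point yields turning angles concentrated near 180 degrees, is easy to reproduce with a short Monte Carlo sketch (stdlib only; the 5 m noise SD is an illustrative assumption, not the collar's measured error):

```python
import math
import random

def turning_angles(points):
    """Turning angle at each interior fix: angle between successive
    displacement vectors, in degrees (0 = straight ahead, 180 = reversal)."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        ax, ay = x1 - x0, y1 - y0
        bx, by = x2 - x1, y2 - y1
        cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, cos_t)))))
    return angles

random.seed(1)
# A perfectly stationary "animal": every fix is pure GPS noise around (0, 0).
fixes = [(random.gauss(0, 5), random.gauss(0, 5)) for _ in range(20000)]
angles = turning_angles(fixes)
share_reversals = sum(a > 90 for a in angles) / len(angles)
```

    Because consecutive displacement vectors share the middle fix, their components are negatively correlated, so most measured "turns" exceed 90 degrees even though the true animal never moved.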

  19. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
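    The "linear model to transfer resistance" baseline that the new error model extends can be sketched as an ordinary least-squares fit of the absolute normal-reciprocal discrepancy against the absolute mean transfer resistance. This is a simplified sketch without the paper's electrode grouping, and the function name is illustrative:

```python
def fit_linear_error_model(r_normal, r_reciprocal):
    """Fit |e_i| = a + b * |R_i| by ordinary least squares, where e_i is
    the normal-reciprocal discrepancy and R_i the mean transfer resistance
    of measurement pair i. Returns the intercept a and slope b."""
    R = [(n + r) / 2 for n, r in zip(r_normal, r_reciprocal)]
    e = [abs(n - r) for n, r in zip(r_normal, r_reciprocal)]
    x = [abs(v) for v in R]
    n = len(x)
    mx, me = sum(x) / n, sum(e) / n
    b = (sum((xi - mx) * (ei - me) for xi, ei in zip(x, e))
         / sum((xi - mx) ** 2 for xi in x))
    a = me - b * mx
    return a, b
```

    The paper's contribution is to fit such relations per electrode group rather than pooling all measurements, capturing the correlated, electrode-dependent errors the pooled fit misses.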

  20. Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.

    PubMed

    Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo

    2013-11-13

    Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban log_e(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background log_e(NO2) and 38% for rural log_e(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural log_e(NO2) but more marked for urban log_e(NO2). Even if correlations between model and monitor data appear reasonably strong, additive classical measurement error in model data may lead to appreciable bias in health effect estimates. As process-based air pollution models become more widely used in epidemiological time-series analysis, assessments of error impact that include statistical simulation may be useful.
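    The attenuation mechanism driving these results, classical additive error biasing the regression coefficient toward the null, can be illustrated with a minimal simulation. This sketch uses a linear outcome model rather than the paper's Poisson time-series, and illustrative variances; for linear regression the attenuation factor λ = σ²_x / (σ²_x + σ²_u) is exact in expectation, and the Poisson case behaves similarly to first order:

```python
import random

def ols_slope(z, y):
    """Ordinary least-squares slope of y on a single predictor z."""
    n = len(z)
    mz, my = sum(z) / n, sum(y) / n
    return (sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
            / sum((zi - mz) ** 2 for zi in z))

random.seed(42)
n = 200_000
sigma_x, sigma_u = 1.0, 0.7            # exposure and error SDs (illustrative)
beta = 0.5                             # true exposure-outcome slope
x = [random.gauss(0, sigma_x) for _ in range(n)]        # true exposure
y = [beta * xi + random.gauss(0, 0.2) for xi in x]      # outcome
z = [xi + random.gauss(0, sigma_u) for xi in x]         # error-prone measure

lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)  # theoretical attenuation factor
```

    Regressing y on the error-prone z recovers roughly beta * lam rather than beta, mirroring the attenuation the authors report when only one monitor per region (or grid-specific model data) is used.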
