The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool
Providing quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose response model to generate benchmark dose and benchmark dose lower bound estimates.
Nonparametric estimation of benchmark doses in environmental risk assessment
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
Summary An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...
Wheeler, Matthew W; Bailer, A John
2007-06-01
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO(2) dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
A Web-Based System for Bayesian Benchmark Dose Estimation.
Shao, Kan; Shapiro, Andrew J
2018-01-11
Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
Benchmark dose analysis via nonparametric regression modeling
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
NASA Technical Reports Server (NTRS)
James, John T.; Lam, Chiu-wing; Scully, Robert R.
2013-01-01
Brief exposures of Apollo Astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure ot lunar dust. Habitats for exploration, whether mobile of fixed must be designed to limit human exposure to lunar dust to safe levels. We have used a new technique we call Comparative Benchmark Dose Modeling to estimate safe exposure limits for lunar dust collected during the Apollo 14 mission.
Correlation of Noncancer Benchmark Doses in Short- and Long-Term Rodent Bioassays.
Kratchman, Jessica; Wang, Bing; Fox, John; Gray, George
2018-05-01
This study investigated whether, in the absence of chronic noncancer toxicity data, short-term noncancer toxicity data can be used to predict chronic toxicity effect levels by focusing on the dose-response relationship instead of a critical effect. Data from National Toxicology Program (NTP) technical reports have been extracted and modeled using the Environmental Protection Agency's Benchmark Dose Software. Best-fit, minimum benchmark dose (BMD), and benchmark dose lower limits (BMDLs) have been modeled for all NTP pathologist identified significant nonneoplastic lesions, final mean body weight, and mean organ weight of 41 chemicals tested by NTP between 2000 and 2012. Models were then developed at the chemical level using orthogonal regression techniques to predict chronic (two years) noncancer health effect levels using the results of the short-term (three months) toxicity data. The findings indicate that short-term animal studies may reasonably provide a quantitative estimate of a chronic BMD or BMDL. This can allow for faster development of human health toxicity values for risk assessment for chemicals that lack chronic toxicity data. © 2017 Society for Risk Analysis.
Evaluation of triclosan in Minnesota lakes and rivers: Part II - human health risk assessment.
Yost, Lisa J; Barber, Timothy R; Gentry, P Robinan; Bock, Michael J; Lyndall, Jennifer L; Capdevielle, Marie C; Slezak, Brian P
2017-08-01
Triclosan, an antimicrobial compound found in consumer products, has been detected in low concentrations in Minnesota municipal wastewater treatment plant (WWTP) effluent. This assessment evaluates potential health risks for exposure of adults and children to triclosan in Minnesota surface water, sediments, and fish. Potential exposures via fish consumption are considered for recreational or subsistence-level consumers. This assessment uses two chronic oral toxicity benchmarks, which bracket other available toxicity values. The first benchmark is a lower bound on a benchmark dose associated with a 10% risk (BMDL 10 ) of 47mg per kilogram per day (mg/kg-day) for kidney effects in hamsters. This value was identified as the most sensitive endpoint and species in a review by Rodricks et al. (2010) and is used herein to derive an estimated reference dose (RfD (Rodricks) ) of 0.47mg/kg-day. The second benchmark is a reference dose (RfD) of 0.047mg/kg-day derived from a no observed adverse effect level (NOAEL) of 10mg/kg-day for hepatic and hematopoietic effects in mice (Minnesota Department of Health [MDH] 2014). Based on conservative assumptions regarding human exposures to triclosan, calculated risk estimates are far below levels of concern. These estimates are likely to overestimate risks for potential receptors, particularly because sample locations were generally biased towards known discharges (i.e., WWTP effluent). Copyright © 2017 Elsevier Inc. All rights reserved.
Bhat, Virunya S; Hester, Susan D; Nesnow, Stephen; Eastmond, David A
2013-11-01
The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time informs mode-of-action determinations and improves quantitative risk assessments. Previous global expression profiling identified a 330-probe cluster differentially expressed and commonly responsive to 3 hepatotumorigenic conazoles (cyproconazole, epoxiconazole, and propiconazole) at 30 days. Extended to 2 more conazoles (triadimefon and myclobutanil), the present assessment encompasses 4 tumorigenic and 1 nontumorigenic conazole. Transcriptional benchmark dose levels (BMDL(T)) were estimated for a subset of the cluster with dose-responsive behavior and a ≥ 5-fold increase or decrease in signal intensity at the highest dose. These genes primarily encompassed CAR/RXR activation, P450 metabolism, liver hypertrophy- glutathione depletion, LPS/IL-1-mediated inhibition of RXR, and NRF2-mediated oxidative stress pathways. Median BMDL(T) estimates from the subset were concordant (within a factor of 2.4) with apical benchmark doses (BMDL(A)) for increased liver weight at 30 days for the 5 conazoles. The 30-day median BMDL(T) estimates were within one-half order of magnitude of the chronic BMDLA for hepatocellular tumors. Potency differences seen in the dose-responsive transcription of certain phase II metabolism, bile acid detoxification, and lipid oxidation genes mirrored each conazole's tumorigenic potency. The 30-day BMDL(T) corresponded to tumorigenic potency on a milligram per kilogram day basis with cyproconazole > epoxiconazole > propiconazole > triadimefon > myclobutanil (nontumorigenic). These results support the utility of measuring short-term gene expression changes to inform quantitative risk assessments from long-term exposures.
EPA's methodology for estimation of inhalation reference concentrations (RfCs) as benchmark estimates of the quantitative dose-response assessment of chronic noncancer toxicity for individual inhaled chemicals.
Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, J. Allen, E-mail: davis.allen@epa.gov; Gift, Jeffrey S.; Zhao, Q. Jay
2011-07-15
Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial limitations that have been well defined, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, the NOAEL approach fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addressesmore » many of the limitations of the NOAEL method. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of a BMD 95% lower bound confidence limit (BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations world-wide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.« less
BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets(SoTC)
Background: Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to estimate acceptable exposure...
BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets (STC symposium)
Background: Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to estimate acceptable exposure...
Benchmark studies of induced radioactivity produced in LHC materials, Part II: Remanent dose rates.
Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H
2005-01-01
A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment that was performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials were placed downstream of, and laterally to, a copper target, intercepting a positively charged mixed hadron beam with a momentum of 120 GeV c(-1). Emphasis was put on the reduction of uncertainties by taking measures such as careful monitoring of the irradiation parameters, using different instruments to measure dose rates, adopting detailed elemental analyses of the irradiated materials and making detailed simulations of the irradiation experiment. The measured and calculated dose rates are in good agreement.
Evaluating MoE and its Uncertainty and Variability for Food Contaminants (EuroTox presentation)
Margin of Exposure (MoE), is a metric for quantifying the relationship between exposure and hazard. Ideally, it is the ratio of the dose associated with hazard and an estimate of exposure. For example, hazard may be characterized by a benchmark dose (BMD), and, for food contami...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faillace, E.R.; Cheng, J.J.; Yu, C.
A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, inputmore » parameters such as occupancy, shielding, and consumption factors.« less
Benchmark Dose for Urinary Cadmium based on a Marker of Renal Dysfunction: A Meta-Analysis
Woo, Hae Dong; Chiu, Weihsueh A.; Jo, Seongil; Kim, Jeongseon
2015-01-01
Background Low doses of cadmium can cause adverse health effects. Benchmark dose (BMD) and the one-sided 95% lower confidence limit of BMD (BMDL) to derive points of departure for urinary cadmium exposure have been estimated in several previous studies, but the methods to derive BMD and the estimated BMDs differ. Objectives We aimed to find the associated factors that affect BMD calculation in the general population, and to estimate the summary BMD for urinary cadmium using reported BMDs. Methods A meta-regression was performed and the pooled BMD/BMDL was estimated using studies reporting a BMD and BMDL, weighted by sample size, that were calculated from individual data based on markers of renal dysfunction. Results BMDs were highly heterogeneous across studies. Meta-regression analysis showed that a significant predictor of BMD was the cut-off point which denotes an abnormal level. Using the 95th percentile as a cut off, BMD5/BMDL5 estimates for 5% benchmark responses (BMR) of β2-microglobulinuria (β2-MG) estimated was 6.18/4.88 μg/g creatinine in conventional quantal analysis and 3.56/3.13 μg/g creatinine in the hybrid approach, and BMD5/BMDL5 estimates for 5% BMR of N-acetyl-β-d-glucosaminidase (NAG) was 10.31/7.61 μg/g creatinine in quantal analysis and 3.21/2.24 g/g creatinine in the hybrid approach. However, the meta-regression showed that BMD and BMDL were significantly associated with the cut-off point, but BMD calculation method did not significantly affect the results. The urinary cadmium BMDL5 of β2-MG was 1.9 μg/g creatinine in the lowest cut-off point group. Conclusion The BMD was significantly associated with the cut-off point defining the abnormal level of renal dysfunction markers. PMID:25970611
Rager, Julia E; Auerbach, Scott S; Chappell, Grace A; Martin, Elizabeth; Thompson, Chad M; Fry, Rebecca C
2017-10-16
Prenatal inorganic arsenic (iAs) exposure influences the expression of critical genes and proteins associated with adverse outcomes in newborns, in part through epigenetic mediators. The doses at which these genomic and epigenomic changes occur have yet to be evaluated in the context of dose-response modeling. The goal of the present study was to estimate iAs doses that correspond to changes in transcriptomic, proteomic, epigenomic, and integrated multi-omic signatures in human cord blood through benchmark dose (BMD) modeling. Genome-wide DNA methylation, microRNA expression, mRNA expression, and protein expression levels in cord blood were modeled against total urinary arsenic (U-tAs) levels from pregnant women exposed to varying levels of iAs. Dose-response relationships were modeled in BMDExpress, and BMDs representing 10% response levels were estimated. Overall, DNA methylation changes were estimated to occur at lower exposure concentrations in comparison to other molecular endpoints. Multi-omic module eigengenes were derived through weighted gene co-expression network analysis, representing co-modulated signatures across transcriptomic, proteomic, and epigenomic profiles. One module eigengene was associated with decreased gestational age occurring alongside increased iAs exposure. Genes/proteins within this module eigengene showed enrichment for organismal development, including potassium voltage-gated channel subfamily Q member 1 (KCNQ1), an imprinted gene showing differential methylation and expression in response to iAs. Modeling of this prioritized multi-omic module eigengene resulted in a BMD(BMDL) of 58(45) μg/L U-tAs, which was estimated to correspond to drinking water arsenic concentrations of 51(40) μg/L. Results are in line with epidemiological evidence supporting effects of prenatal iAs occurring at levels <100 μg As/L urine. Together, findings present a variety of BMD measures to estimate doses at which prenatal iAs exposure influences neonatal outcome-relevant transcriptomic, proteomic, and epigenomic profiles.
Do fungi need to be included within environmental radiation protection assessment models?
Guillén, J; Baeza, A; Beresford, N A; Wood, M D
2017-09-01
Fungi are used as biomonitors of forest ecosystems, having comparatively high uptakes of anthropogenic and naturally occurring radionuclides. However, whilst they are known to accumulate radionuclides they are not typically considered in radiological assessment tools for environmental (non-human biota) assessment. In this paper the total dose rate to fungi is estimated using the ERICA Tool, assuming different fruiting body geometries, a single ellipsoid and more complex geometries considering the different components of the fruit body and their differing radionuclide contents based upon measurement data. Anthropogenic and naturally occurring radionuclide concentrations from the Mediterranean ecosystem (Spain) were used in this assessment. The total estimated weighted dose rate was in the range 0.31-3.4 μGy/h (5 th -95 th percentile), similar to natural exposure rates reported for other wild groups. The total estimated dose was dominated by internal exposure, especially from 226 Ra and 210 Po. Differences in dose rate between complex geometries and a simple ellipsoid model were negligible. Therefore, the simple ellipsoid model is recommended to assess dose rates to fungal fruiting bodies. Fungal mycelium was also modelled assuming a long filament. Using these geometries, assessments for fungal fruiting bodies and mycelium under different scenarios (post-accident, planned release and existing exposure) were conducted, each being based on available monitoring data. The estimated total dose rate in each case was below the ERICA screening benchmark dose, except for the example post-accident existing exposure scenario (the Chernobyl Exclusion Zone) for which a dose rate in excess of 35 μGy/h was estimated for the fruiting body. Estimated mycelium dose rate in this post-accident existing exposure scenario was close to the 400 μGy/h benchmark for plants, although fungi are generally considered to be less radiosensitive than plants. Further research on appropriate mycelium geometries and their radionuclide content is required. Based on the assessments presented in this paper, there is no need to recommend that fungi should be added to the existing assessment tools and frameworks; if required some tools allow a geometry representing fungi to be created and used within a dose assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Shao, Kan; Small, Mitchell J
2011-10-01
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
EPA and EFSA approaches for Benchmark Dose modeling
Benchmark dose (BMD) modeling has become the preferred approach in the analysis of toxicological dose-response data for the purpose of deriving human health toxicity values. The software packages most often used are Benchmark Dose Software (BMDS, developed by EPA) and PROAST (de...
BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...
The purpose of this document is to provide guidance for the Agency on the application of the benchmark dose approach in determining the point of departure (POD) for health effects data, whether a linear or nonlinear low dose extrapolation is used. The guidance includes discussion on computation of benchmark doses and benchmark concentrations (BMDs and BMCs) and their lower confidence limits, data requirements, dose-response analysis, and reporting requirements. This guidance is based on today's knowledge and understanding, and on experience gained in using this approach.
ANALYSES OF NEUROBEHAVIORAL SCREENING DATA: BENCHMARK DOSE ESTIMATION.
Analysis of neurotoxicological screening data such as those of the functional observational battery (FOB) traditionally relies on analysis of variance (ANOVA) with repeated measurements, followed by determination of a no-adverse-effect level (NOAEL). The US EPA has proposed the ...
EPA's Benchmark Dose Modeling Software
The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors facilitate applying benchmark dose (BMD) method’s to EPA’s human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well know limitations ...
Lachenmeier, Dirk W; Rehm, Jürgen
2015-01-30
A comparative risk assessment of drugs including alcohol and tobacco using the margin of exposure (MOE) approach was conducted. The MOE is defined as ratio between toxicological threshold (benchmark dose) and estimated human intake. Median lethal dose values from animal experiments were used to derive the benchmark dose. The human intake was calculated for individual scenarios and population-based scenarios. The MOE was calculated using probabilistic Monte Carlo simulations. The benchmark dose values ranged from 2 mg/kg bodyweight for heroin to 531 mg/kg bodyweight for alcohol (ethanol). For individual exposure the four substances alcohol, nicotine, cocaine and heroin fall into the "high risk" category with MOE < 10, the rest of the compounds except THC fall into the "risk" category with MOE < 100. On a population scale, only alcohol would fall into the "high risk" category, and cigarette smoking would fall into the "risk" category, while all other agents (opiates, cocaine, amphetamine-type stimulants, ecstasy, and benzodiazepines) had MOEs > 100, and cannabis had a MOE > 10,000. The toxicological MOE approach validates epidemiological and social science-based drug ranking approaches especially in regard to the positions of alcohol and tobacco (high risk) and cannabis (low risk).
Baumung, Claudia; Rehm, Jürgen; Franke, Heike; Lachenmeier, Dirk W.
2016-01-01
Nicotine was not included in previous efforts to identify the most important toxicants of tobacco smoke. A health risk assessment of nicotine for smokers of cigarettes was conducted using the margin of exposure (MOE) approach and results were compared to literature MOEs of various other tobacco toxicants. The MOE is defined as ratio between toxicological threshold (benchmark dose) and estimated human intake. Dose-response modelling of human and animal data was used to derive the benchmark dose. The MOE was calculated using probabilistic Monte Carlo simulations for daily cigarette smokers. Benchmark dose values ranged from 0.004 mg/kg bodyweight for symptoms of intoxication in children to 3 mg/kg bodyweight for mortality in animals; MOEs ranged from below 1 up to 7.6 indicating a considerable consumer risk. The dimension of the MOEs is similar to those of other tobacco toxicants with high concerns relating to adverse health effects such as acrolein or formaldehyde. Owing to the lack of toxicological data in particular relating to cancer, long term animal testing studies for nicotine are urgently necessary. There is immediate need of action concerning the risk of nicotine also with regard to electronic cigarettes and smokeless tobacco. PMID:27759090
Lachenmeier, Dirk W.; Rehm, Jürgen
2015-01-01
A comparative risk assessment of drugs including alcohol and tobacco using the margin of exposure (MOE) approach was conducted. The MOE is defined as ratio between toxicological threshold (benchmark dose) and estimated human intake. Median lethal dose values from animal experiments were used to derive the benchmark dose. The human intake was calculated for individual scenarios and population-based scenarios. The MOE was calculated using probabilistic Monte Carlo simulations. The benchmark dose values ranged from 2 mg/kg bodyweight for heroin to 531 mg/kg bodyweight for alcohol (ethanol). For individual exposure the four substances alcohol, nicotine, cocaine and heroin fall into the “high risk” category with MOE < 10, the rest of the compounds except THC fall into the “risk” category with MOE < 100. On a population scale, only alcohol would fall into the “high risk” category, and cigarette smoking would fall into the “risk” category, while all other agents (opiates, cocaine, amphetamine-type stimulants, ecstasy, and benzodiazepines) had MOEs > 100, and cannabis had a MOE > 10,000. The toxicological MOE approach validates epidemiological and social science-based drug ranking approaches especially in regard to the positions of alcohol and tobacco (high risk) and cannabis (low risk). PMID:25634572
77 FR 36533 - Notice of Availability of the Benchmark Dose Technical Guidance
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-19
... ENVIRONMENTAL PROTECTION AGENCY [FRL-9688-7] Notice of Availability of the Benchmark Dose Technical Guidance AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of Availability. SUMMARY: The U.S. Environmental Protection Agency is announcing the availability of Benchmark Dose Technical...
Benchmark dose risk assessment software (BMDS) was designed by EPA to generate dose-response curves and facilitate the analysis, interpretation and synthesis of toxicological data. Partial results of QA/QC testing of the EPA benchmark dose software (BMDS) are presented. BMDS pr...
Application of Benchmark Dose Methodology to a Variety of Endpoints and Exposures
This latest beta version (1.1b) of the U.S. Environmental Protection Agency (EPA) Benchmark Dose Software (BMDS) is being distributed for public comment. The BMDS system is being developed as a tool to facilitate the application of benchmark dose (BMD) methods to EPA hazardous p...
BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.
Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...
Poet, T S; Schlosser, P M; Rodriguez, C E; Parod, R J; Rodwell, D E; Kirman, C R
2016-04-01
The developmental effects of NMP are well studied in Sprague-Dawley rats following oral, inhalation, and dermal routes of exposure. Short-term and chronic occupational exposure limit (OEL) values were derived using an updated physiologically based pharmacokinetic (PBPK) model for NMP, along with benchmark dose modeling. Two suitable developmental endpoints were evaluated for human health risk assessment: (1) for acute exposures, the increased incidence of skeletal malformations, an effect noted only at oral doses that were toxic to the dam and fetus; and (2) for repeated exposures to NMP, changes in fetal/pup body weight. Where possible, data from multiple studies were pooled to increase the predictive power of the dose-response data sets. For the purposes of internal dose estimation, the window of susceptibility was estimated for each endpoint, and was used in the dose-response modeling. A point of departure value of 390 mg/L (in terms of peak NMP in blood) was calculated for skeletal malformations based on pooled data from oral and inhalation studies. Acceptable dose-response model fits were not obtained using the pooled data for fetal/pup body weight changes. These data sets were also assessed individually, from which the geometric mean value obtained from the inhalation studies (470 mg*hr/L), was used to derive the chronic OEL. A PBPK model for NMP in humans was used to calculate human equivalent concentrations corresponding to the internal dose point of departure values. Application of a net uncertainty factor of 20-21, which incorporates data-derived extrapolation factors, to the point of departure values yields short-term and chronic occupational exposure limit values of 86 and 24 ppm, respectively. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
van Wijngaarden, Edwin; Beck, Christopher; Shamlaye, Conrad F; Cernichiari, Elsa; Davidson, Philip W; Myers, Gary J; Clarkson, Thomas W
2006-09-01
Methyl mercury (MeHg) is highly toxic to the developing nervous system. Human exposure is mainly from fish consumption since small amounts are present in all fish. Findings of developmental neurotoxicity following high-level prenatal exposure to MeHg raised the question of whether children whose mothers consumed fish contaminated with background levels during pregnancy are at an increased risk of impaired neurological function. Benchmark doses determined from studies in New Zealand, and the Faroese and Seychelles Islands indicate that a level of 4-25 parts per million (ppm) measured in maternal hair may carry a risk to the infant. However, there are numerous sources of uncertainty that could affect the derivation of benchmark doses, and it is crucial to continue to investigate the most appropriate derivation of safe consumption levels. Earlier, we published the findings from benchmark analyses applied to the data collected on the Seychelles main cohort at the 66-month follow-up period. Here, we expand on the main cohort analyses by determining the benchmark doses (BMD) of MeHg level in maternal hair based on 643 Seychellois children for whom 26 different neurobehavioral endpoints were measured at 9 years of age. Dose-response models applied to these continuous endpoints incorporated a variety of covariates and included the k-power model, the Weibull model, and the logistic model. The average 95% lower confidence limit of the BMD (BMDL) across all 26 endpoints varied from 20.1 ppm (range=17.2-22.5) for the logistic model to 20.4 ppm (range=17.9-23.0) for the k-power model. These estimates are somewhat lower than those obtained after 66 months of follow-up. The Seychelles Child Development Study continues to provide a firm scientific basis for the derivation of safe levels of MeHg consumption.
A MULTIMODEL APPROACH FOR CALCULATING BENCHMARK DOSE
A Multimodel Approach for Calculating Benchmark Dose
Ramon I. Garcia and R. Woodrow Setzer
In the assessment of dose response, a number of plausible dose- response models may give fits that are consistent with the data. If no dose response formulation had been speci...
Introduction of risk size in the determination of uncertainty factor UFL in risk assessment
NASA Astrophysics Data System (ADS)
Xue, Jinling; Lu, Yun; Velasquez, Natalia; Yu, Ruozhen; Hu, Hongying; Liu, Zhengtao; Meng, Wei
2012-09-01
The methodology for using uncertainty factors in health risk assessment has been developed for several decades. A default value is usually applied for the uncertainty factor UFL, which is used to extrapolate from LOAEL (lowest observed adverse effect level) to NAEL (no adverse effect level). Here, we have developed a new method that establishes a linear relationship between UFL and the additional risk level at LOAEL based on the dose-response information, which represents a very important factor that should be carefully considered. This linear formula makes it possible to select UFL properly in the additional risk range from 5.3% to 16.2%. Also the results remind us that the default value 10 may not be conservative enough when the additional risk level at LOAEL exceeds 16.2%. Furthermore, this novel method not only provides a flexible UFL instead of the traditional default value, but also can ensure a conservative estimation of the UFL with fewer errors, and avoid the benchmark response selection involved in the benchmark dose method. These advantages can improve the estimation of the extrapolation starting point in the risk assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dourson, M.L.
The quantitative procedures associated with noncancer risk assessment include reference dose (RfD), benchmark dose, and severity modeling. The RfD, which is part of the EPA risk assessment guidelines, is an estimation of a level that is likely to be without any health risk to sensitive individuals. The RfD requires two major judgments: the first is choice of a critical effect(s) and its No Observed Adverse Effect Level (NOAEL); the second judgment is choice of an uncertainty factor. This paper discusses major assumptions and limitations of the RfD model.
Current modeling practice may lead to falsely high benchmark dose estimates.
Ringblom, Joakim; Johanson, Gunnar; Öberg, Mattias
2014-07-01
Benchmark dose (BMD) modeling is increasingly used as the preferred approach to define the point-of-departure for health risk assessment of chemicals. As data are inherently variable, there is always a risk to select a model that defines a lower confidence bound of the BMD (BMDL) that, contrary to expected, exceeds the true BMD. The aim of this study was to investigate how often and under what circumstances such anomalies occur under current modeling practice. Continuous data were generated from a realistic dose-effect curve by Monte Carlo simulations using four dose groups and a set of five different dose placement scenarios, group sizes between 5 and 50 animals and coefficients of variations of 5-15%. The BMD calculations were conducted using nested exponential models, as most BMD software use nested approaches. "Non-protective" BMDLs (higher than true BMD) were frequently observed, in some scenarios reaching 80%. The phenomenon was mainly related to the selection of the non-sigmoidal exponential model (Effect=a·e(b)(·dose)). In conclusion, non-sigmoid models should be used with caution as it may underestimate the risk, illustrating that awareness of the model selection process and sound identification of the point-of-departure is vital for health risk assessment. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Kim, Steven B; Kodell, Ralph L; Moon, Hojin
2014-03-01
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
Recommended approaches in the application of ...
ABSTRACT:Only a fraction of chemicals in commerce have been fully assessed for their potential hazards to human health due to difficulties involved in conventional regulatory tests. It has recently been proposed that quantitative transcriptomic data can be used to determine benchmark dose (BMD) and estimate a point of departure (POD). Several studies have shown that transcriptional PODs correlate with PODs derived from analysis of pathological changes, but there is no consensus on how the genes that are used to derive a transcriptional POD should be selected. Because of very large number of unrelated genes in gene expression data, the process of selecting subsets of informative genes is a major challenge. We used published microarray data from studies on rats exposed orally to multiple doses of six chemicals for 5, 14, 28, and 90 days. We evaluated eight different approaches to select genes for POD derivation and compared them to three previously proposed approaches. The relationship between transcriptional BMDs derived using these 11 approaches were compared with PODs derived from apical data that might be used in a human health risk assessment. We found that transcriptional benchmark dose values for all 11 approaches were remarkably aligned with different apical PODs, while a subset of between 3 and 8 of the approaches met standard statistical criteria across the 5-, 14-, 28-, and 90-day time points and thus qualify as effective estimates of apical PODs. Our r
RESULTS OF QA/QC TESTING OF EPA BENCHMARK DOSE SOFTWARE VERSION 1.2
EPA is developing benchmark dose software (BMDS) to support cancer and non-cancer dose-response assessments. Following the recent public review of BMDS version 1.1b, EPA developed a Hill model for evaluating continuous data, and improved the user interface and Multistage, Polyno...
Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)
EPA benchmark dose software (BMDS) issued to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...
Benchmark dose and the three Rs. Part I. Getting more information from the same number of animals.
Slob, Wout
2014-08-01
Evaluating dose-response data using the Benchmark dose (BMD) approach rather than by the no observed adverse effect (NOAEL) approach implies a considerable step forward from the perspective of the Reduction, Replacement, and Refinement, three Rs, in particular the R of reduction: more information is obtained from the same number of animals, or, vice versa, similar information may be obtained from fewer animals. The first part of this twin paper focusses on the former, the second on the latter aspect. Regarding the former, the BMD approach provides more information from any given dose-response dataset in various ways. First, the BMDL (= BMD lower confidence bound) provides more information by its more explicit definition. Further, as compared to the NOAEL approach the BMD approach results in more statistical precision in the value of the point of departure (PoD), for deriving exposure limits. While part of the animals in the study do not directly contribute to the numerical value of a NOAEL, all animals are effectively used and do contribute to a BMDL. In addition, the BMD approach allows for combining similar datasets for the same chemical (e.g., both sexes) in a single analysis, which further increases precision. By combining a dose-response dataset with similar historical data for other chemicals, the precision can even be substantially increased. Further, the BMD approach results in more precise estimates for relative potency factors (RPFs, or TEFs). And finally, the BMD approach is not only more precise, it also allows for quantification of the precision in the BMD estimate, which is not possible in the NOAEL approach.
Benchmark dose for cadmium exposure and elevated N-acetyl-β-D-glucosaminidase: a meta-analysis.
Liu, CuiXia; Li, YuBiao; Zhu, ChunShui; Dong, ZhaoMin; Zhang, Kun; Zhao, YanBin; Xu, YiLu
2016-10-01
Cadmium (Cd) is a well-known nephrotoxic contaminant, and N-acetyl-β-D-glucosaminidase (NAG) is considered to be an early and sensitive marker of tubular dysfunction. The link between Cd exposure and NAG level enables us to derive the benchmark dose (BMD) of Cd. Although several reports have already documented urinary Cd (UCd)-NAG relationships and BMD estimations, high heterogeneities arise due to the sub-populations (age, gender, and ethnicity) and BMD methodologies being employed. To clarify the influences that these variables exert, firstly, a random effect meta-analysis was performed in this study to correlate the UCd and NAG based on 92 datasets collected from 30 publications. Later, this established correlation (Ln(NAG) = 0.51 × Ln(UCd) + 0.83) was applied to derive the UCd BMD 5 of 1.76 μg/g creatinine and 95 % lower confidence limit of BMD 5 (BMDL 5 ) of 1.67 μg/g creatinine. While the regressions for different age groups and genders differed slightly, it is age and not gender that significantly affects BMD estimations. Ethnic differences may require further investigation given that limited data is currently available. Based on a comprehensive and systematic literature review, this study is a new attempt to quantify the UCd-NAG link and estimate BMD.
Soller, Jeffrey A; Eftim, Sorina E; Nappier, Sharon P
2018-01-01
Understanding pathogen risks is a critically important consideration in the design of water treatment, particularly for potable reuse projects. As an extension to our published microbial risk assessment methodology to estimate infection risks associated with Direct Potable Reuse (DPR) treatment train unit process combinations, herein, we (1) provide an updated compilation of pathogen density data in raw wastewater and dose-response models; (2) conduct a series of sensitivity analyses to consider potential risk implications using updated data; (3) evaluate the risks associated with log credit allocations in the United States; and (4) identify reference pathogen reductions needed to consistently meet currently applied benchmark risk levels. Sensitivity analyses illustrated changes in cumulative annual risks estimates, the significance of which depends on the pathogen group driving the risk for a given treatment train. For example, updates to norovirus (NoV) raw wastewater values and use of a NoV dose-response approach, capturing the full range of uncertainty, increased risks associated with one of the treatment trains evaluated, but not the other. Additionally, compared to traditional log-credit allocation approaches, our results indicate that the risk methodology provides more nuanced information about how consistently public health benchmarks are achieved. Our results indicate that viruses need to be reduced by 14 logs or more to consistently achieve currently applied benchmark levels of protection associated with DPR. The refined methodology, updated model inputs, and log credit allocation comparisons will be useful to regulators considering DPR projects and design engineers as they consider which unit treatment processes should be employed for particular projects. Published by Elsevier Ltd.
Girard, Raphaële; Aupee, Martine; Erb, Martine; Bettinger, Anne; Jouve, Alice
2012-12-01
The 3ml volume currently used as the hand hygiene (HH) measure has been explored as the pertinent dose for an indirect indicator of HH compliance. A multicenter study was conducted in order to ascertain the required dose using different products. The average contact duration before drying was measured and compared with references. Effective hand coverage had to include the whole hand and the wrist. Two durations were chosen as points of reference: 30s, as given by guidelines, and the duration validated by the European standard EN 1500. Each product was to be tested, using standardized procedures, by three nosocomial infection prevention teams, for three different doses (3, 2 and 1.5ml). Data from 27 products and 1706 tests were analyzed. Depending on the product, the dose needed to ensure a 30-s contact duration in 75% of tests ranging from 2ml to more than 3ml, and to ensure a contact duration exceeding the EN 1500 times in 75% of tests ranging from 1.5ml to more than 3ml. The aftermath interpretation is the following: if different products are used, the volume utilized does not give an unbiased estimation of the HH compliance. Other compliance evaluation methods remain necessary for efficient benchmarking. Copyright © 2012 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.
APPLICATION OF BENCHMARK DOSE METHODOLOGY TO DATA FROM PRENATAL DEVELOPMENTAL TOXICITY STUDIES
The benchmark dose (BMD) concept was applied to 246 conventional developmental toxicity datasets from government, industry and commercial laboratories. Five modeling approaches were used, two generic and three specific to developmental toxicity (DT models). BMDs for both quantal ...
Categorical Regression and Benchmark Dose Software 3.0
The objective of this full-day course is to provide participants with interactive training on the use of the U.S. Environmental Protection Agency’s (EPA) Benchmark Dose software (BMDS, version 3.0, released fall 2018) and Categorical Regression software (CatReg, version 3.1...
The USEPA's benchmark dose software (BMDS) version 1.2 has been available over the Internet since April, 2000 (epa.gov/ncea/bmds.htm), and has already been used in risk assessments of some significant environmental pollutants (e.g., diesel exhaust, dichloropropene, hexachlorocycl...
Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT
NASA Astrophysics Data System (ADS)
Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.
2007-03-01
In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.
Combining uncertainty factors in deriving human exposure levels of noncarcinogenic toxicants.
Kodell, R L; Gaylor, D W
1999-01-01
Acceptable levels of human exposure to noncarcinogenic toxicants in environmental and occupational settings generally are derived by reducing experimental no-observed-adverse-effect levels (NOAELs) or benchmark doses (BDs) by a product of uncertainty factors (Barnes and Dourson, Ref. 1). These factors are presumed to ensure safety by accounting for uncertainty in dose extrapolation, uncertainty in duration extrapolation, differential sensitivity between humans and animals, and differential sensitivity among humans. The common default value for each uncertainty factor is 10. This paper shows how estimates of means and standard deviations of the approximately log-normal distributions of individual uncertainty factors can be used to estimate percentiles of the distribution of the product of uncertainty factors. An appropriately selected upper percentile, for example, 95th or 99th, of the distribution of the product can be used as a combined uncertainty factor to replace the conventional product of default factors.
De Bondt, Timo; Mulkens, Tom; Zanca, Federica; Pyfferoen, Lotte; Casselman, Jan W; Parizel, Paul M
2017-02-01
To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, amount of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with lowest dose levels showed smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age-group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. • Significant differences were observed in the delivered dose between age-groups and hospitals. • Using age-adapted scanning protocols gives a nearly linear dose increase. • Sharing dose-data can be a trigger for hospitals to reduce dose levels.
An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm
NASA Astrophysics Data System (ADS)
Jacques, Robert; McNutt, Todd
2014-03-01
Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to the faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position and direction sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte-Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes; reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e. < 0.1 g / cm3) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density scaled heterogeneity correction with a position and direction sensitive density filter. This method significantly improved the accuracy of the GPU based algorithm reaching the accuracy levels of Monte Carlo based methods with performance in a few tenths of seconds per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
Megias, Daniel; Phillips, Mark; Clifton-Hadley, Laura; Harron, Elizabeth; Eaton, David J; Sanghera, Paul; Whitfield, Gillian
2017-03-01
The HIPPO trial is a UK randomized Phase II trial of hippocampal sparing (HS) vs conventional whole-brain radiotherapy after surgical resection or radiosurgery in patients with favourable prognosis with 1-4 brain metastases. Each participating centre completed a planning benchmark case as part of the dedicated radiotherapy trials quality assurance programme (RTQA), promoting the safe and effective delivery of HS intensity-modulated radiotherapy (IMRT) in a multicentre trial setting. Submitted planning benchmark cases were reviewed using visualization for radiotherapy software (VODCA) evaluating plan quality and compliance in relation to the HIPPO radiotherapy planning and delivery guidelines. Comparison of the planning benchmark data highlighted a plan specified using dose to medium as an outlier by comparison with those specified using dose to water. Further evaluation identified that the reported plan statistics for dose to medium were lower as a result of the dose calculated at regions of PTV inclusive of bony cranium being lower relative to brain. Specification of dose to water or medium remains a source of potential ambiguity and it is essential that as part of a multicentre trial, consideration is given to reported differences, particularly in the presence of bone. Evaluation of planning benchmark data as part of an RTQA programme has highlighted an important feature of HS IMRT dosimetry dependent on dose being specified to water or medium, informing the development and undertaking of HS IMRT as part of the HIPPO trial. Advances in knowledge: The potential clinical impact of differences between dose to medium and dose to water are demonstrated for the first time, in the setting of HS whole-brain radiotherapy.
Maier, Andrew; Vincent, Melissa J; Parker, Ann; Gadagbui, Bernard K; Jayjock, Michael
2015-12-01
Asthma is a complex syndrome with significant consequences for those affected. The number of individuals affected is growing, although the reasons for the increase are uncertain. Ensuring the effective management of potential exposures follows from substantial evidence that exposure to some chemicals can increase the likelihood of asthma responses. We have developed a safety assessment approach tailored to the screening of asthma risks from residential consumer product ingredients as a proactive risk management tool. Several key features of the proposed approach advance the assessment resources often used for asthma issues. First, a quantitative health benchmark for asthma or related endpoints (irritation and sensitization) is provided that extends qualitative hazard classification methods. Second, a parallel structure is employed to include dose-response methods for asthma endpoints and methods for scenario specific exposure estimation. The two parallel tracks are integrated in a risk characterization step. Third, a tiered assessment structure is provided to accommodate different amounts of data for both the dose-response assessment (i.e., use of existing benchmarks, hazard banding, or the threshold of toxicological concern) and exposure estimation (i.e., use of empirical data, model estimates, or exposure categories). Tools building from traditional methods and resources have been adapted to address specific issues pertinent to asthma toxicology (e.g., mode-of-action and dose-response features) and the nature of residential consumer product use scenarios (e.g., product use patterns and exposure durations). A case study for acetic acid as used in various sentinel products and residential cleaning scenarios was developed to test the safety assessment methodology. In particular, the results were used to refine and verify relationships among tiered approaches such that each lower data tier in the approach provides a similar or greater margin of safety for a given scenario. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Bohl, Michael A; Goswami, Roopa; Strassner, Brett; Stanger, Paula
2016-08-01
The purpose of this investigation was to evaluate the potential of using the ACR's Dose Index Registry(®) to meet The Joint Commission's requirements to identify incidents in which the radiation dose index from diagnostic CT examinations exceeded the protocol's expected dose index range. In total, 10,970 records in the Dose Index Registry were statistically analyzed to establish both an upper and lower expected dose index for each protocol. All 2015 studies to date were then retrospectively reviewed to identify examinations whose total examination dose index exceeded the protocol's defined upper threshold. Each dose incident was then logged and reviewed per the new Joint Commission requirements. Facilities may leverage their participation in the ACR's Dose Index Registry to fully meet The Joint Commission's dose incident identification review and external benchmarking requirements. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors instigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. Themore » CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). Simulations of the quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors’ study. The single-thread computation time of the deterministic simulation of the quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.« less
Soeteman-Hernández, Lya G; Fellows, Mick D; Johnson, George E; Slob, Wout
2015-12-01
In this study, we explored the applicability of using in vitro micronucleus (MN) data from human lymphoblastoid TK6 cells to derive in vivo genotoxicity potency information. Nineteen chemicals covering a broad spectrum of genotoxic modes of action were tested in an in vitro MN test using TK6 cells using the same study protocol. Several of these chemicals were considered to need metabolic activation, and these were administered in the presence of S9. The Benchmark dose (BMD) approach was applied using the dose-response modeling program PROAST to estimate the genotoxic potency from the in vitro data. The resulting in vitro BMDs were compared with previously derived BMDs from in vivo MN and carcinogenicity studies. A proportional correlation was observed between the BMDs from the in vitro MN and the BMDs from the in vivo MN assays. Further, a clear correlation was found between the BMDs from in vitro MN and the associated BMDs for malignant tumors. Although these results are based on only 19 compounds, they show that genotoxicity potencies estimated from in vitro tests may result in useful information regarding in vivo genotoxic potency, as well as expected cancer potency. Extension of the number of compounds and further investigation of metabolic activation (S9) and of other toxicokinetic factors would be needed to validate our initial conclusions. However, this initial work suggests that this approach could be used for in vitro to in vivo extrapolations which would support the reduction of animals used in research (3Rs: replacement, reduction, and refinement). © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Mabrey, J.B.
1994-07-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronicmore » Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.« less
Transcriptomic Dose-Response Analysis for Mode of Action ...
Microarray and RNA-seq technologies can play an important role in assessing the health risks associated with environmental exposures. The utility of gene expression data to predict hazard has been well documented. Early toxicogenomics studies used relatively high, single doses with minimal replication. Thus, they were not useful in understanding health risks at environmentally-relevant doses. Until the past decade, application of toxicogenomics in dose response assessment and determination of chemical mode of action has been limited. New transcriptomic biomarkers have evolved to detect chemical hazards in multiple tissues together with pathway methods to study biological effects across the full dose response range and critical time course. Comprehensive low dose datasets are now available and with the use of transcriptomic benchmark dose estimation techniques within a mode of action framework, the ability to incorporate informative genomic data into human health risk assessment has substantially improved. The key advantage to applying transcriptomic technology to risk assessment is both the sensitivity and comprehensive examination of direct and indirect molecular changes that lead to adverse outcomes. Book Chapter with topic on future application of toxicogenomics technologies for MoA and risk assessment
Accuracy of a simplified method for shielded gamma-ray skyshine sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassett, M.S.; Shultis, J.K.
1989-11-01
Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed tomore » emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.« less
ORANGE: a Monte Carlo dose engine for radiotherapy.
van der Zee, W; Hogenbirk, A; van der Marck, S C
2005-02-21
This study presents data for the verification of ORANGE, a fast MCNP-based dose engine for radiotherapy treatment planning. In order to verify the new algorithm, it has been benchmarked against DOSXYZ and against measurements. For the benchmarking, first calculations have been done using the ICCR-XIII benchmark. Next, calculations have been done with DOSXYZ and ORANGE in five different phantoms (one homogeneous, two with bone equivalent inserts and two with lung equivalent inserts). The calculations have been done with two mono-energetic photon beams (2 MeV and 6 MeV) and two mono-energetic electron beams (10 MeV and 20 MeV). Comparison of the calculated data (from DOSXYZ and ORANGE) against measurements was possible for a realistic 10 MV photon beam and a realistic 15 MeV electron beam in a homogeneous phantom only. For the comparison of the calculated dose distributions and dose distributions against measurements, the concept of the confidence limit (CL) has been used. This concept reduces the difference between two data sets to a single number, which gives the deviation for 90% of the dose distributions. Using this concept, it was found that ORANGE was always within the statistical bandwidth with DOSXYZ and the measurements. The ICCR-XIII benchmark showed that ORANGE is seven times faster than DOSXYZ, a result comparable with other accelerated Monte Carlo dose systems when no variance reduction is used. As shown for XVMC, using variance reduction techniques has the potential for further acceleration. Using modern computer hardware, this brings the total calculation time for a dose distribution with 1.5% (statistical) accuracy within the clinical range (less then 10 min). This means that ORANGE can be a candidate for a dose engine in radiotherapy treatment planning.
Development of risk-based nanomaterial groups for occupational exposure control
NASA Astrophysics Data System (ADS)
Kuempel, E. D.; Castranova, V.; Geraci, C. L.; Schulte, P. A.
2012-09-01
Given the almost limitless variety of nanomaterials, it will be virtually impossible to assess the possible occupational health hazard of each nanomaterial individually. The development of science-based hazard and risk categories for nanomaterials is needed for decision-making about exposure control practices in the workplace. A possible strategy would be to select representative (benchmark) materials from various mode of action (MOA) classes, evaluate the hazard and develop risk estimates, and then apply a systematic comparison of new nanomaterials with the benchmark materials in the same MOA class. Poorly soluble particles are used here as an example to illustrate quantitative risk assessment methods for possible benchmark particles and occupational exposure control groups, given mode of action and relative toxicity. Linking such benchmark particles to specific exposure control bands would facilitate the translation of health hazard and quantitative risk information to the development of effective exposure control practices in the workplace. A key challenge is obtaining sufficient dose-response data, based on standard testing, to systematically evaluate the nanomaterials' physical-chemical factors influencing their biological activity. Categorization processes involve both science-based analyses and default assumptions in the absence of substance-specific information. Utilizing data and information from related materials may facilitate initial determinations of exposure control systems for nanomaterials.
A health risk benchmark for the neurologic effects of styrene: comparison with NOAEL/LOAEL approach.
Rabovsky, J; Fowles, J; Hill, M D; Lewis, D C
2001-02-01
Benchmark dose (BMD) analysis was used to estimate an inhalation benchmark concentration for styrene neurotoxicity. Quantal data on neuropsychologic test results from styrene-exposed workers [Mutti et al. (1984). American Journal of Industrial Medicine, 5, 275-286] were used to quantify neurotoxicity, defined as the percent of tested workers who responded abnormally to > or = 1, > or = 2, or > or = 3 out of a battery of eight tests. Exposure was based on previously published results on mean urinary mandelic- and phenylglyoxylic acid levels in the workers, converted to air styrene levels (15, 44, 74, or 115 ppm). Nonstyrene-exposed workers from the same region served as a control group. Maximum-likelihood estimates (MLEs) and BMDs at 5 and 10% response levels of the exposed population were obtained from log-normal analysis of the quantal data. The highest MLE was 9 ppm (BMD = 4 ppm) styrene and represents abnormal responses to > or = 3 tests by 10% of the exposed population. The most health-protective MLE was 2 ppm styrene (BMD = 0.3 ppm) and represents abnormal responses to > or = 1 test by 5% of the exposed population. A no observed adverse effect level/lowest observed adverse effect level (NOAEL/LOAEL) analysis of the same quantal data showed workers in all styrene exposure groups responded abnormally to > or = 1, > or = 2, or > or = 3 tests, compared to controls, and the LOAEL was 15 ppm. A comparison of the BMD and NOAEL/LOAEL analyses suggests that at air styrene levels below the LOAEL, a segment of the worker population may be adversely affected. The benchmark approach will be useful for styrene noncancer risk assessment purposes by providing a more accurate estimate of potential risk that should, in turn, help to reduce the uncertainty that is a common problem in setting exposure levels.
High-energy neutron depth-dose distribution experiment.
Ferenci, M S; Hertel, N E
2003-01-01
A unique set of high-energy neutron depth-dose benchmark experiments were performed at the Los Alamos Neutron Science Center/Weapons Neutron Research (LANSCE/WNR) complex. The experiments consisted of filtered neutron beams with energies up to 800 MeV impinging on a 30 x 30 x 30 cm3 liquid, tissue-equivalent phantom. The absorbed dose was measured in the phantom at various depths with tissue-equivalent ion chambers. This experiment is intended to serve as a benchmark experiment for the testing of high-energy radiation transport codes for the international radiation protection community.
Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria
Hunter, Eric J.
2015-01-01
Purpose Schoolteachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method Vibration dosimetry is reformulated with the inclusion of collision stress. Two methods of estimating amplitude of vocal-fold vibration are compared to capture variations in vocal intensity. Energy loss from collision is added to the energy-dissipation dose. An equal-energy-dissipation criterion is defined and used on the teacher corpus as a potential-damage risk criterion. Results Comparison of time-, cycle-, distance-, and energy-dose calculations for 57 teachers reveals a progression in information content in the ability to capture variations in duration, speaking pitch, and vocal intensity. The energy-dissipation dose carries the greatest promise in capturing excessive tissue stress and collision but also the greatest liability, due to uncertainty in parameters. Cycle dose is least correlated with the other doses. Conclusion As a first guide to damage risk in excessive voice use, the equal-energy-dissipation dose criterion can be used to structure trade-off relations between loudness, adduction, and duration of speech. PMID:26172434
Demb, Joshua; Chu, Philip; Nelson, Thomas; Hall, David; Seibert, Anthony; Lamba, Ramit; Boone, John; Krishnam, Mayil; Cagnon, Christopher; Bostani, Maryam; Gould, Robert; Miglioretti, Diana; Smith-Bindman, Rebecca
2017-06-01
Radiation doses for computed tomography (CT) vary substantially across institutions. To assess the impact of institutional-level audit and collaborative efforts to share best practices on CT radiation doses across 5 University of California (UC) medical centers. In this before/after interventional study, we prospectively collected radiation dose metrics on all diagnostic CT examinations performed between October 1, 2013, and December 31, 2014, at 5 medical centers. Using data from January to March (baseline), we created audit reports detailing the distribution of radiation dose metrics for chest, abdomen, and head CT scans. In April, we shared reports with the medical centers and invited radiology professionals from the centers to a 1.5-day in-person meeting to review reports and share best practices. We calculated changes in mean effective dose 12 weeks before and after the audits and meeting, excluding a 12-week implementation period when medical centers could make changes. We compared proportions of examinations exceeding previously published benchmarks at baseline and following the audit and meeting, and calculated changes in proportion of examinations exceeding benchmarks. Of 158 274 diagnostic CT scans performed in the study period, 29 594 CT scans were performed in the 3 months before and 32 839 CT scans were performed 12 to 24 weeks after the audit and meeting. Reductions in mean effective dose were considerable for chest and abdomen. Mean effective dose for chest CT decreased from 13.2 to 10.7 mSv (18.9% reduction; 95% CI, 18.0%-19.8%). Reductions at individual medical centers ranged from 3.8% to 23.5%. The mean effective dose for abdominal CT decreased from 20.0 to 15.0 mSv (25.0% reduction; 95% CI, 24.3%-25.8%). Reductions at individual medical centers ranged from 10.8% to 34.7%. The number of CT scans that had an effective dose measurement that exceeded benchmarks was reduced considerably by 48% and 54% for chest and abdomen, respectively. After the audit and meeting, head CT doses varied less, although some institutions increased and some decreased mean head CT doses and the proportion above benchmarks. Reviewing institutional doses and sharing dose-optimization best practices resulted in lower radiation doses for chest and abdominal CT and more consistent doses for head CT.
Castorina, Rosemary; Bradman, Asa; McKone, Thomas E; Barr, Dana B; Harnly, Martha E; Eskenazi, Brenda
2003-01-01
Approximately 230,000 kg of organophosphate (OP) pesticides are applied annually in California's Salinas Valley. These activities have raised concerns about exposures to area residents. We collected three spot urine samples from pregnant women (between 1999 and 2001) enrolled in CHAMACOS (Center for the Health Assessment of Mothers and Children of Salinas), a longitudinal birth cohort study, and analyzed them for six dialkyl phosphate metabolites. We used urine from 446 pregnant women to estimate OP pesticide doses with two deterministic steady-state modeling methods: method 1, which assumed the metabolites were attributable entirely to a single diethyl or dimethyl OP pesticide; and method 2, which adapted U.S. Environmental Protection Agency (U.S. EPA) draft guidelines for cumulative risk assessment to estimate dose from a mixture of OP pesticides that share a common mechanism of toxicity. We used pesticide use reporting data for the Salinas Valley to approximate the mixture to which the women were exposed. Based on average OP pesticide dose estimates that assumed exposure to a single OP pesticide (method 1), between 0% and 36.1% of study participants' doses failed to attain a margin of exposure (MOE) of 100 relative to the U.S. EPA oral benchmark dose(10) (BMD(10)), depending on the assumption made about the parent compound. These BMD(10) values are doses expected to produce a 10% reduction in brain cholinesterase activity compared with background response in rats. Given the participants' average cumulative OP pesticide dose estimates (method 2) and regardless of the index chemical selected, we found that 14.8% of the doses failed to attain an MOE of 100 relative to the BMD(10) of the selected index. An uncertainty analysis of the pesticide mixture parameter, which is extrapolated from pesticide application data for the study area and not directly quantified for each individual, suggests that this point estimate could range from 1 to 34%. In future analyses, we will use pesticide-specific urinary metabolites, when available, to evaluate cumulative OP pesticide exposures. PMID:14527844
3D conditional generative adversarial networks for high-quality PET image estimation at low dose.
Wang, Yan; Yu, Biting; Wang, Lei; Zu, Chen; Lalush, David S; Lin, Weili; Wu, Xi; Zhou, Jiliu; Shen, Dinggang; Zhou, Luping
2018-07-01
Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of human body. Usually, a full dose radioactive tracer is required to obtain high-quality PET images for clinical needs. This inevitably raises concerns about potential health hazards. On the other hand, dose reduction may cause the increased noise in the reconstructed PET images, which impacts the image quality to a certain extent. In this paper, in order to reduce the radiation exposure while maintaining the high quality of PET images, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate the high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) include a generator network and a discriminator network which are trained simultaneously with the goal of one beating the other. Similar to GANs, in the proposed 3D c-GANs, we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to render the same underlying information between the low-dose and full-dose PET images, a 3D U-net-like deep architecture which can combine hierarchical features by using skip connection is designed as the generator network to synthesize the full-dose image. In order to guarantee the synthesized PET image to be close to the real one, we take into account of the estimation error loss in addition to the discriminator feedback to train the generator network. Furthermore, a concatenated 3D c-GANs based progressive refinement scheme is also proposed to further improve the quality of estimated images. Validation was done on a real human brain dataset including both the normal subjects and the subjects diagnosed as mild cognitive impairment (MCI). Experimental results show that our proposed 3D c-GANs method outperforms the benchmark methods and achieves much better performance than the state-of-the-art methods in both qualitative and quantitative measures. Copyright © 2018 Elsevier Inc. All rights reserved.
Caoili, Salvador Eugenio C.
2014-01-01
B-cell epitope prediction can enable novel pharmaceutical product development. However, a mechanistically framed consensus has yet to emerge on benchmarking such prediction, thus presenting an opportunity to establish standards of practice that circumvent epistemic inconsistencies of casting the epitope prediction task as a binary-classification problem. As an alternative to conventional dichotomous qualitative benchmark data, quantitative dose-response data on antibody-mediated biological effects are more meaningful from an information-theoretic perspective in the sense that such effects may be expressed as probabilities (e.g., of functional inhibition by antibody) for which the Shannon information entropy (SIE) can be evaluated as a measure of informativeness. Accordingly, half-maximal biological effects (e.g., at median inhibitory concentrations of antibody) correspond to maximally informative data while undetectable and maximal biological effects correspond to minimally informative data. This applies to benchmarking B-cell epitope prediction for the design of peptide-based immunogens that elicit antipeptide antibodies with functionally relevant cross-reactivity. Presently, the Immune Epitope Database (IEDB) contains relatively few quantitative dose-response data on such cross-reactivity. Only a small fraction of these IEDB data is maximally informative, and many more of them are minimally informative (i.e., with zero SIE). Nevertheless, the numerous qualitative data in IEDB suggest how to overcome the paucity of informative benchmark data. PMID:24949474
BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...
The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum. The purpose of this project is to provide guidance for the Agency and the outside community on the application of the benchmark dose (BMD) appr
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramos-Mendez, J; Faddegon, B; Perl, J
2015-06-15
Purpose: To develop and verify an extension to TOPAS for calculation of dose response models (TCP/NTCP). TOPAS wraps and extends Geant4. Methods: The TOPAS DICOM interface was extended to include structure contours, for subsequent calculation of DVH’s and TCP/NTCP. The following dose response models were implemented: Lyman-Kutcher-Burman (LKB), critical element (CE), population based critical volume (CV), parallel-serials, a sigmoid-based model of Niemierko for NTCP and TCP, and a Poisson-based model for TCP. For verification, results for the parallel-serial and Poisson models, with 6 MV x-ray dose distributions calculated with TOPAS and Pinnacle v9.2, were compared to data from the benchmarkmore » configuration of the AAPM Task Group 166 (TG166). We provide a benchmark configuration suitable for proton therapy along with results for the implementation of the Niemierko, CV and CE models. Results: The maximum difference in DVH calculated with Pinnacle and TOPAS was 2%. Differences between TG166 data and Monte Carlo calculations of up to 4.2%±6.1% were found for the parallel-serial model and up to 1.0%±0.7% for the Poisson model (including the uncertainty due to lack of knowledge of the point spacing in TG166). For CE, CV and Niemierko models, the discrepancies between the Pinnacle and TOPAS results are 74.5%, 34.8% and 52.1% when using 29.7 cGy point spacing, the differences being highly sensitive to dose spacing. On the other hand, with our proposed benchmark configuration, the largest differences were 12.05%±0.38%, 3.74%±1.6%, 1.57%±4.9% and 1.97%±4.6% for the CE, CV, Niemierko and LKB models, respectively. Conclusion: Several dose response models were successfully implemented with the extension module. Reference data was calculated for future benchmarking. Dose response calculated for the different models varied much more widely for the TG166 benchmark than for the proposed benchmark, which had much lower sensitivity to the choice of DVH dose points. This work was supported by National Cancer Institute Grant R01CA140735.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Hallaq, Hania A., E-mail: halhallaq@radonc.uchicago.edu; Chmura, Steven J.; Salama, Joseph K.
Purpose: The NRG-BR001 trial is the first National Cancer Institute–sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. Methods and Materials: The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) againstmore » OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Results: Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm{sup 3} was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Conclusions: Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements.« less
Al-Hallaq, Hania A; Chmura, Steven J; Salama, Joseph K; Lowenstein, Jessica R; McNulty, Susan; Galvin, James M; Followill, David S; Robinson, Clifford G; Pisansky, Thomas M; Winter, Kathryn A; White, Julia R; Xiao, Ying; Matuszak, Martha M
2017-01-01
The NRG-BR001 trial is the first National Cancer Institute-sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm 3 was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements. Copyright © 2016 Elsevier Inc. All rights reserved.
Russell, Louise B.; Pentakota, Sri Ram; Toscano, Cristiana Maria; Cosgriff, Ben; Sinha, Anushua
2016-01-01
Background. Despite longstanding infant vaccination programs in low- and middle-income countries (LMICs), pertussis continues to cause deaths in the youngest infants. A maternal monovalent acellular pertussis (aP) vaccine, in development, could prevent many of these deaths. We estimated infant pertussis mortality rates at which maternal vaccination would be a cost-effective use of public health resources in LMICs. Methods. We developed a decision model to evaluate the cost-effectiveness of maternal aP immunization plus routine infant vaccination vs routine infant vaccination alone in Bangladesh, Nigeria, and Brazil. For a range of maternal aP vaccine prices, one-way sensitivity analyses identified the infant pertussis mortality rates required to make maternal immunization cost-effective by alternative benchmarks ($100, 0.5 gross domestic product [GDP] per capita, and GDP per capita per disability-adjusted life-year [DALY]). Probabilistic sensitivity analysis provided uncertainty intervals for these mortality rates. Results. Infant pertussis mortality rates necessary to make maternal aP immunization cost-effective exceed the rates suggested by current evidence except at low vaccine prices and/or cost-effectiveness benchmarks at the high end of those considered in this report. For example, at a vaccine price of $0.50/dose, pertussis mortality would need to be 0.051 per 1000 infants in Bangladesh, and 0.018 per 1000 in Nigeria, to cost 0.5 per capita GDP per DALY. In Brazil, a middle-income country, at a vaccine price of $4/dose, infant pertussis mortality would need to be 0.043 per 1000 to cost 0.5 per capita GDP per DALY. Conclusions. For commonly used cost-effectiveness benchmarks, maternal aP immunization would be cost-effective in many LMICs only if the vaccine were offered at less than $1–$2/dose. PMID:27838677
Hasegawa, R; Hirata-Koizumi, M; Dourson, M; Parker, A; Hirose, A; Nakai, S; Kamata, E; Ema, M
2007-04-01
We comprehensively re-analyzed the toxicity data for 18 industrial chemicals from repeated oral exposures in newborn and young rats, which were previously published. Two new toxicity endpoints specific to this comparative analysis were identified, the first, the presumed no observed adverse effect level (pNOAEL) was estimated based on results of both main and dose-finding studies, and the second, the presumed unequivocally toxic level (pUETL) was defined as a clear toxic dose giving similar severity in both newborn and young rats. Based on the analyses of both pNOAEL and pUETL ratios between the different ages, newborn rats demonstrated greater susceptibility (at most 8-fold) to nearly two thirds of these 18 chemicals (mostly phenolic substances), and less or nearly equal sensitivity to the other chemicals. Exceptionally one chemical only showed toxicity in newborn rats. In addition, Benchmark Dose Lower Bound (BMDL) estimates were calculated as an alternative endpoint. Most BMDLs were comparable to their corresponding pNOAELs and the overall correlation coefficient was 0.904. We discussed how our results can be incorporated into chemical risk assessment approaches to protect pediatric health from direct oral exposure to chemicals.
Renner, Franziska
2016-09-01
Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W., II
1993-01-01
One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance ofmore » a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.« less
Clinical decision-making tools for exam selection, reporting and dose tracking.
Brink, James A
2014-10-01
Although many efforts have been made to reduce the radiation dose associated with individual medical imaging examinations to "as low as reasonably achievable," efforts to ensure such examinations are performed only when medically indicated and appropriate are equally if not more important. Variations in the use of ionizing radiation for medical imaging are concerning, regardless of whether they occur on a local, regional or national basis. Such variations among practices can be reduced with the use of decision support tools at the time of order entry. These tools help reduce radiation exposure among practices through the appropriate use of medical imaging. Similarly, adoption of best practices among imaging facilities can be promoted through tracking the radiation exposure among imaging patients. Practices can benchmark their aggregate radiation exposures for medical imaging through the use of dose index registries. However several variables must be considered when contemplating individual patient dose tracking. The specific dose measures and the variation among them introduced by variations in body habitus must be understood. Moreover the uncertainties in risk estimation from dose metrics related to age, gender and life expectancy must also be taken into account.
Bi, Jian
2010-01-01
As the desire to promote health increases, reductions of certain ingredients, for example, sodium, sugar, and fat in food products, are widely requested. However, the reduction is not risk free in sensory and marketing aspects. Over reduction may change the taste and influence the flavor of a product and lead to a decrease in consumer's overall liking or purchase intent for the product. This article uses the benchmark dose (BMD) methodology to determine an appropriate reduction. Calculations of BMD and one-sided lower confidence limit of BMD are illustrated. The article also discusses how to calculate BMD and BMDL for over dispersed binary data in replicated testing based on a corrected beta-binomial model. USEPA Benchmark Dose Software (BMDS) were used and S-Plus programs were developed. The method discussed in the article is originally used to determine an appropriate reduction of certain ingredients, for example, sodium, sugar, and fat in food products, considering both health reason and sensory or marketing risk.
Multiscale benchmarking of drug delivery vectors.
Summers, Huw D; Ware, Matthew J; Majithia, Ravish; Meissner, Kenith E; Godin, Biana; Rees, Paul
2016-10-01
Cross-system comparisons of drug delivery vectors are essential to ensure optimal design. An in-vitro experimental protocol is presented that separates the role of the delivery vector from that of its cargo in determining the cell response, thus allowing quantitative comparison of different systems. The technique is validated through benchmarking of the dose-response of human fibroblast cells exposed to the cationic molecule, polyethylene imine (PEI); delivered as a free molecule and as a cargo on the surface of CdSe nanoparticles and Silica microparticles. The exposure metrics are converted to a delivered dose with the transport properties of the different scale systems characterized by a delivery time, τ. The benchmarking highlights an agglomeration of the free PEI molecules into micron sized clusters and identifies the metric determining cell death as the total number of PEI molecules presented to cells, determined by the delivery vector dose and the surface density of the cargo. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Yibao; Yan Yulong; Nath, Ravinder
2012-08-01
Purpose: To develop a quantitative method for the estimation of kV cone beam computed tomography (kVCBCT) doses in pediatric patients undergoing image-guided radiotherapy. Methods and Materials: Forty-two children were retrospectively analyzed in subgroups of different scanned regions: one group in the head-and-neck and the other group in the pelvis. Critical structures in planning CT images were delineated on an Eclipse treatment planning system before being converted into CT phantoms for Monte Carlo simulations. A benchmarked EGS4 Monte Carlo code was used to calculate three-dimensional dose distributions of kVCBCT scans with full-fan high-quality head or half-fan pelvis protocols predefined by themore » manufacturer. Based on planning CT images and structures exported in DICOM RT format, occipital-frontal circumferences (OFC) were calculated for head-and-neck patients using DICOMan software. Similarly, hip circumferences (HIP) were acquired for the pelvic group. Correlations between mean organ doses and age, weight, OFC, and HIP values were analyzed with SigmaPlot software suite, where regression performances were analyzed with relative dose differences (RDD) and coefficients of determination (R{sup 2}). Results: kVCBCT-contributed mean doses to all critical structures decreased monotonically with studied parameters, with a steeper decrease in the pelvis than in the head. Empirical functions have been developed for a dose estimation of the major organs at risk in the head and pelvis, respectively. If evaluated with physical parameters other than age, a mean RDD of up to 7.9% was observed for all the structures in our population of 42 patients. Conclusions: kVCBCT doses are highly correlated with patient size. According to this study, weight can be used as a primary index for dose assessment in both head and pelvis scans, while OFC and HIP may serve as secondary indices for dose estimation in corresponding regions. With the proposed empirical functions, it is possible to perform an individualized quantitative dose assessment of kVCBCT scans.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackillop, William J., E-mail: william.mackillop@krcc.on.ca; Kong, Weidong; Brundage, Michael
Purpose: Estimates of the appropriate rate of use of radiation therapy (RT) are required for planning and monitoring access to RT. Our objective was to compare estimates of the appropriate rate of use of RT derived from mathematical models, with the rate observed in a population of patients with optimal access to RT. Methods and Materials: The rate of use of RT within 1 year of diagnosis (RT{sub 1Y}) was measured in the 134,541 cases diagnosed in Ontario between November 2009 and October 2011. The lifetime rate of use of RT (RT{sub LIFETIME}) was estimated by the multicohort utilization tablemore » method. Poisson regression was used to evaluate potential barriers to access to RT and to identify a benchmark subpopulation with unimpeded access to RT. Rates of use of RT were measured in the benchmark subpopulation and compared with published evidence-based estimates of the appropriate rates. Results: The benchmark rate for RT{sub 1Y}, observed under conditions of optimal access, was 33.6% (95% confidence interval [CI], 33.0%-34.1%), and the benchmark for RT{sub LIFETIME} was 41.5% (95% CI, 41.2%-42.0%). Benchmarks for RT{sub LIFETIME} for 4 of 5 selected sites and for all cancers combined were significantly lower than the corresponding evidence-based estimates. Australian and Canadian evidence-based estimates of RT{sub LIFETIME} for 5 selected sites differed widely. RT{sub LIFETIME} in the overall population of Ontario was just 7.9% short of the benchmark but 20.9% short of the Australian evidence-based estimate of the appropriate rate. Conclusions: Evidence-based estimates of the appropriate lifetime rate of use of RT may overestimate the need for RT in Ontario.« less
SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowenstein, J; Nguyen, H; Roll, J
Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site’s ability to develop a treatment that meets a specific protocol’s treatment guidelines prior to placing their first patient on the protocol. A pre-treatment review is an actual patient placed on the protocol in which the dosimetry and contour volumes are evaluated to be per protocol guidelines prior to allowing the beginning of the treatment. A key component of these QA mechanisms is that sites are provided timely feedback to educate them on howmore » to plan per the protocol and prevent protocol deviations on patients accrued to a protocol. For both benchmarks and pre-treatment reviews a dose volume analysis (DVA) was performed using MIM softwareTM. For pre-treatment reviews a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the cases met protocol requirements. The 35 percent of sites failing the benchmark subsequently modified there planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, in 30 percent of the submitted cases the pre-treatment review failed where the majority (71 percent) failed the DVA. 20 percent of sites submitting patients failed to correct their dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocols requirements.Support: U24-CA-180803.« less
Bibbo, Giovanni; Brown, Scott; Linke, Rebecca
2016-08-01
Diagnostic Reference Levels (DRL) of procedures involving ionizing radiation are important tools to optimizing radiation doses delivered to patients and in identifying cases where the levels of doses are unusually high. This is particularly important for paediatric patients undergoing computed tomography (CT) examinations as these examinations are associated with relatively high-dose. Paediatric CT studies, performed at our institution from January 2010 to March 2014, have been retrospectively analysed to determine the 75th and 95th percentiles of both the volume computed tomography dose index (CTDIvol ) and dose-length product (DLP) for the most commonly performed studies to: establish local diagnostic reference levels for paediatric computed tomography examinations performed at our institution, benchmark our DRL with national and international published paediatric values, and determine the compliance of CT radiographer with established protocols. The derived local 75th percentile DRL have been found to be acceptable when compared with those published by the Australian National Radiation Dose Register and two national children's hospitals, and at the international level with the National Reference Doses for the UK. The 95th percentiles of CTDIvol for the various CT examinations have been found to be acceptable values for the CT scanner Dose-Check Notification. Benchmarking CT radiographers shows that they follow the set protocols for the various examinations without significant variations in the machine setting factors. The derivation of DRL has given us the tool to evaluate and improve the performance of our CT service by improved compliance and a reduction in radiation dose to our paediatric patients. We have also been able to benchmark our performance with similar national and international institutions. © 2016 The Royal Australian and New Zealand College of Radiologists.
Dose assessment in environmental radiological protection: State of the art and perspectives.
Stark, Karolina; Goméz-Ros, José M; Vives I Batlle, Jordi; Lindbo Hansen, Elisabeth; Beaugelin-Seiller, Karine; Kapustka, Lawrence A; Wood, Michael D; Bradshaw, Clare; Real, Almudena; McGuire, Corynne; Hinton, Thomas G
2017-09-01
Exposure to radiation is a potential hazard to humans and the environment. The Fukushima accident reminded the world of the importance of a reliable risk management system that incorporates the dose received from radiation exposures. The dose to humans from exposure to radiation can be quantified using a well-defined system; its environmental equivalent, however, is still in a developmental state. Additionally, the results of several papers published over the last decade have been criticized because of poor dosimetry. Therefore, a workshop on environmental dosimetry was organized by the STAR (Strategy for Allied Radioecology) Network of Excellence to review the state of the art in environmental dosimetry and prioritize areas of methodological and guidance development. Herein, we report the key findings from that international workshop, summarise parameters that affect the dose animals and plants receive when exposed to radiation, and identify further research needs. Current dosimetry practices for determining environmental protection are based on simple screening dose assessments using knowledge of fundamental radiation physics, source-target geometry relationships, the influence of organism shape and size, and knowledge of how radionuclide distributions in the body and in the soil profile alter dose. In screening model calculations that estimate whole-body dose to biota the shapes of organisms are simply represented as ellipsoids, while recently developed complex voxel phantom models allow organ-specific dose estimates. We identified several research and guidance development priorities for dosimetry. For external exposures, the uncertainty in dose estimates due to spatially heterogeneous distributions of radionuclide contamination is currently being evaluated. Guidance is needed on the level of dosimetry that is required when screening benchmarks are exceeded and how to report exposure in dose-effect studies, including quantification of uncertainties. Further research is needed to establish whether and how dosimetry should account for differences in tissue physiology, organism life stages, seasonal variability (in ecology, physiology and radiation field), species life span, and the proportion of a population that is actually exposed. We contend that, although major advances have recently been made in environmental radiation protection, substantive improvements are required to reduce uncertainties and increase the reliability of environmental dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.
High exposure to inorganic arsenic by food: the need for risk reduction.
Gundert-Remy, Ursula; Damm, Georg; Foth, Heidi; Freyberger, Alexius; Gebel, Thomas; Golka, Klaus; Röhl, Claudia; Schupp, Thomas; Wollin, Klaus-Michael; Hengstler, Jan Georg
2015-12-01
Arsenic is a human carcinogen that occurs ubiquitously in soil and water. Based on epidemiological studies, a benchmark dose (lower/higher bound estimate) between 0.3 and 8 μg/kg bw/day was estimated to cause a 1 % increased risk of lung, skin and bladder cancer. A recently published study by EFSA on dietary exposure to inorganic arsenic in the European population reported 95th percentiles (lower bound min to upper bound max) for different age groups in the same range as the benchmark dose. For toddlers, a highly exposed group, the highest values ranged between 0.61 and 2.09 µg arsenic/kg bw/day. For all other age classes, the margin of exposure is also small. This scenario calls for regulatory action to reduce arsenic exposure. One priority measure should be to reduce arsenic in food categories that contribute most to exposure. In the EFSA study the food categories 'milk and dairy products,' 'drinking water' and 'food for infants' represent major sources of inorganic arsenic for infants and also rice is an important source. Long-term strategies are required to reduce inorganic arsenic in these food groups. The reduced consumption of rice and rice products which has been recommended may be helpful for a minority of individuals consuming unusually high amounts of rice. However, it is only of limited value for the general European population, because the food categories 'grain-based processed products (non rice-based)' or 'milk and dairy products' contribute more to the exposure with inorganic arsenic than the food category 'rice.' A balanced regulatory activity focusing on the most relevant food categories is required. In conclusion, exposure to inorganic arsenic represents a risk to the health of the European population, particularly to young children. Regulatory measures to reduce exposure are urgently required.
NASA Astrophysics Data System (ADS)
Jansen, Jan T. M.; Shrimpton, Paul C.
2016-07-01
The ImPACT (imaging performance assessment of CT scanners) CT patient dosimetry calculator is still used world-wide to estimate organ and effective doses (E) for computed tomography (CT) examinations, although the tool is based on Monte Carlo calculations reflecting practice in the early 1990’s. Subsequent developments in CT scanners, definitions of E, anthropomorphic phantoms, computers and radiation transport codes, have all fuelled an urgent need for updated organ dose conversion factors for contemporary CT. A new system for such simulations has been developed and satisfactorily tested. Benchmark comparisons of normalised organ doses presently derived for three old scanners (General Electric 9800, Philips Tomoscan LX and Siemens Somatom DRH) are within 5% of published values. Moreover, calculated normalised values of CT Dose Index for these scanners are in reasonable agreement (within measurement and computational uncertainties of ±6% and ±1%, respectively) with reported standard measurements. Organ dose coefficients calculated for a contemporary CT scanner (Siemens Somatom Sensation 16) demonstrate potential deviations by up to around 30% from the surrogate values presently assumed (through a scanner matching process) when using the ImPACT CT Dosimetry tool for newer scanners. Also, illustrative estimates of E for some typical examinations and a range of anthropomorphic phantoms demonstrate the significant differences (by some 10’s of percent) that can arise when changing from the previously adopted stylised mathematical phantom to the voxel phantoms presently recommended by the International Commission on Radiological Protection (ICRP), and when following the 2007 ICRP recommendations (updated from 1990) concerning tissue weighting factors. Further simulations with the validated dosimetry system will provide updated series of dose coefficients for a wide range of contemporary scanners.
Mvundura, Mercy; Lorenson, Kristina; Chweya, Amos; Kigadye, Rosemary; Bartholomew, Kathryn; Makame, Mohammed; Lennon, T Patrick; Mwangi, Steven; Kirika, Lydia; Kamau, Peter; Otieno, Abner; Murunga, Peninah; Omurwa, Tom; Dafrossa, Lyimo; Kristensen, Debra
2015-05-28
Having data on the costs of the immunization system can provide decision-makers with information to benchmark the costs when evaluating the impact of new technologies or programmatic innovations. This paper estimated the supply chain and immunization service delivery costs and cost per dose in selected districts in Kenya and Tanzania. We also present operational data describing the supply chain and service delivery points (SDPs). To estimate the supply chain costs, we collected resource-use data for the cold chain, distribution system, and health worker time and per diems paid. We also estimated the service delivery costs, which included the time cost of health workers to provide immunization services, and per diems and transport costs for outreach sessions. Data on the annual quantities of vaccines distributed to each facility, and the occurrence and duration of stockouts were collected from stock registers. These data were collected from the national store, 2 regional and 4 district stores, and 12 SDPs in each country for 2012. Cost per dose for the supply chain and immunization service delivery were estimated. The average annual costs per dose at the SDPs were $0.34 (standard deviation (s.d.) $0.18) for Kenya when including only the vaccine supply chain costs, and $1.33 (s.d. $0.82) when including immunization service delivery costs. In Tanzania, these costs were $0.67 (s.d. $0.35) and $2.82 (s.d. $1.64), respectively. Both countries experienced vaccine stockouts in 2012, bacillus Calmette-Guérin vaccine being more likely to be stocked out in Kenya, and oral poliovirus vaccine in Tanzania. When stockouts happened, they usually lasted for at least one month. Tanzania made investments in 2011 in preparation for planned vaccine introductions, and their supply chain cost per dose is expected to decline with the new vaccine introductions. Immunization service delivery costs are a significant portion of the total costs at the SDPs. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, M; Chetty, I; Zhong, H
2014-06-01
Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVFmore » formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans with 3 and 5 mm margins were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3mm margin plans, and between 0.29% and 6.3% for 5mm-margin plans. As the PTV margin reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP errors decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.« less
Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen
2017-01-01
Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line, positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values, would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302
Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen
2017-01-01
Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line, positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values, would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children's benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children's age and familiarity with the number range, these additional external benchmarks might need to be labeled.
Translational benchmark risk analysis
Piegorsch, Walter W.
2010-01-01
Translational development – in the sense of translating a mature methodology from one area of application to another, evolving area – is discussed for the use of benchmark doses in quantitative risk assessment. Illustrations are presented with traditional applications of the benchmark paradigm in biology and toxicology, and also with risk endpoints that differ from traditional toxicological archetypes. It is seen that the benchmark approach can apply to a diverse spectrum of risk management settings. This suggests a promising future for this important risk-analytic tool. Extensions of the method to a wider variety of applications represent a significant opportunity for enhancing environmental, biomedical, industrial, and socio-economic risk assessments. PMID:20953283
Tiao, J; Moore, L; Porgo, T V; Belcaid, A
2016-06-01
To assess whether the definition of an IHF used as an exclusion criterion influences the results of trauma center benchmarking. We conducted a multicenter retrospective cohort study with data from an integrated Canadian trauma system. The study population included all patients admitted between 1999 and 2010 to any of the 57 adult trauma centers. Seven definitions of IHF based on diagnostic codes, age, mechanism of injury, and secondary injuries, identified in a systematic review, were used. Trauma centers were benchmarked using risk-adjusted mortality estimates generated using the Trauma Risk Adjustment Model. The agreement between benchmarking results generated under different IHF definitions was evaluated with correlation coefficients on adjusted mortality estimates. Correlation coefficients >0.95 were considered to convey acceptable agreement. The study population consisted of 172,872 patients before exclusion of IHF and between 128,094 and 139,588 patients after exclusion. Correlation coefficients between risk-adjusted mortality estimates generated in populations including and excluding IHF varied between 0.86 and 0.90. Correlation coefficients of estimates generated under different definitions of IHF varied between 0.97 and 0.99, even when analyses were restricted to patients aged ≥65 years. Although the exclusion of patients with IHF has an influence on the results of trauma center benchmarking based on mortality, the definition of IHF in terms of diagnostic codes, age, mechanism of injury and secondary injury has no significant impact on benchmarking results. Results suggest that there is no need to obtain formal consensus on the definition of IHF for benchmarking activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Grace L.; Department of Health Services Research, The University of Texas MD Anderson Cancer Center, Houston, Texas; Jiang, Jing
Purpose: High-quality treatment for intact cervical cancer requires external radiation therapy, brachytherapy, and chemotherapy, carefully sequenced and completed without delays. We sought to determine how frequently current treatment meets quality benchmarks and whether new technologies have influenced patterns of care. Methods and Materials: By searching diagnosis and procedure claims in MarketScan, an employment-based health care claims database, we identified 1508 patients with nonmetastatic, intact cervical cancer treated from 1999 to 2011, who were <65 years of age and received >10 fractions of radiation. Treatments received were identified using procedure codes and compared with 3 quality benchmarks: receipt of brachytherapy, receipt ofmore » chemotherapy, and radiation treatment duration not exceeding 63 days. The Cochran-Armitage test was used to evaluate temporal trends. Results: Seventy-eight percent of patients (n=1182) received brachytherapy, with brachytherapy receipt stable over time (Cochran-Armitage P{sub trend}=.15). Among patients who received brachytherapy, 66% had high–dose rate and 34% had low–dose rate treatment, although use of high–dose rate brachytherapy steadily increased to 75% by 2011 (P{sub trend}<.001). Eighteen percent of patients (n=278) received intensity modulated radiation therapy (IMRT), and IMRT receipt increased to 37% by 2011 (P{sub trend}<.001). Only 2.5% of patients (n=38) received IMRT in the setting of brachytherapy omission. Overall, 79% of patients (n=1185) received chemotherapy, and chemotherapy receipt increased to 84% by 2011 (P{sub trend}<.001). Median radiation treatment duration was 56 days (interquartile range, 47-65 days); however, duration exceeded 63 days in 36% of patients (n=543). Although 98% of patients received at least 1 benchmark treatment, only 44% received treatment that met all 3 benchmarks. With more stringent indicators (brachytherapy, ≥4 chemotherapy cycles, and duration not exceeding 56 days), only 25% of patients received treatment that met all benchmarks. Conclusion: In this cohort, most cervical cancer patients received treatment that did not comply with all 3 benchmarks for quality treatment. In contrast to increasing receipt of newer radiation technologies, there was little improvement in receipt of essential treatment benchmarks.« less
Pennington, David; Crettaz, Pierre; Tauxe, Annick; Rhomberg, Lorenz; Brand, Kevin; Jolliet, Olivier
2002-10-01
In Part 1 of this article we developed an approach for the calculation of cancer effect measures for life cycle assessment (LCA). In this article, we propose and evaluate the method for the screening of noncancer toxicological health effects. This approach draws on the noncancer health risk assessment concept of benchmark dose, while noting important differences with regulatory applications in the objectives of an LCA study. We adopt the centraltendency estimate of the toxicological effect dose inducing a 10% response over background, ED10, to provide a consistent point of departure for default linear low-dose response estimates (betaED10). This explicit estimation of low-dose risks, while necessary in LCA, is in marked contrast to many traditional procedures for noncancer assessments. For pragmatic reasons, mechanistic thresholds and nonlinear low-dose response curves were not implemented in the presented framework. In essence, for the comparative needs of LCA, we propose that one initially screens alternative activities or products on the degree to which the associated chemical emissions erode their margins of exposure, which may or may not be manifested as increases in disease incidence. We illustrate the method here by deriving the betaED10 slope factors from bioassay data for 12 chemicals and outline some of the possibilities for extrapolation from other more readily available measures, such as the no observable adverse effect levels (NOAEL), avoiding uncertainty factors that lead to inconsistent degrees of conservatism from chemical to chemical. These extrapolations facilitated the initial calculation of slope factors for an additional 403 compounds; ranging from 10(-6) to 10(3) (risk per mg/kg-day dose). The potential consequences of the effects are taken into account in a preliminary approach by combining the betaED10 with the severity measure disability adjusted life years (DALY), providing a screening-level estimate of the potential consequences associated with exposures, integrated over time and space, to a given mass of chemical released into the environment for use in LCA.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-19
..., estimates biological benchmarks, projects future population conditions, and recommends research and... the Assessment webinars are as follows: 1. Participants will employ assessment models to evaluate stock status, estimate population benchmarks and management criteria, and project future conditions. The...
NASA Astrophysics Data System (ADS)
Smekens, F.; Létang, J. M.; Noblet, C.; Chiavassa, S.; Delpon, G.; Freud, N.; Rit, S.; Sarrut, D.
2014-12-01
We propose the split exponential track length estimator (seTLE), a new kerma-based method combining the exponential variant of the TLE and a splitting strategy to speed up Monte Carlo (MC) dose computation for low energy photon beams. The splitting strategy is applied to both the primary and the secondary emitted photons, triggered by either the MC events generator for primaries or the photon interactions generator for secondaries. Split photons are replaced by virtual particles for fast dose calculation using the exponential TLE. Virtual particles are propagated by ray-tracing in voxelized volumes and by conventional MC navigation elsewhere. Hence, the contribution of volumes such as collimators, treatment couch and holding devices can be taken into account in the dose calculation. We evaluated and analysed the seTLE method for two realistic small animal radiotherapy treatment plans. The effect of the kerma approximation, i.e. the complete deactivation of electron transport, was investigated. The efficiency of seTLE against splitting multiplicities was also studied. A benchmark with analog MC and TLE was carried out in terms of dose convergence and efficiency. The results showed that the deactivation of electrons impacts the dose at the water/bone interface in high dose regions. The maximum and mean dose differences normalized to the dose at the isocenter were, respectively of 14% and 2% . Optimal splitting multiplicities were found to be around 300. In all situations, discrepancies in integral dose were below 0.5% and 99.8% of the voxels fulfilled a 1%/0.3 mm gamma index criterion. Efficiency gains of seTLE varied from 3.2 × 105 to 7.7 × 105 compared to analog MC and from 13 to 15 compared to conventional TLE. In conclusion, seTLE provides results similar to the TLE while increasing the efficiency by a factor between 13 and 15, which makes it particularly well-suited to typical small animal radiation therapy applications.
Johnson, T K; Vessella, R L
1989-07-01
Dosimetry calculations of monoclonal antibodies (MABs) are made difficult because the focus of radioactivity is targeted for a nonstandard volume in a nonstandard geometry, precluding straightforward application of the MIRD formalism. The MABDOS software addresses this shortcoming by interactive placement of a spherical perturbation into the Standard Man geometry for each tumor focus. S tables are calculated by a Monte Carlo simulation of photon transport for each organ system (including tumor) that localizes activity. Performance benchmarks are reported that measure the time required to simulate 60,000 photons for each penetrating radiation in the spectrum of 99mTc and 131I using the kidney as source organ. Results indicate that calculation times are probably prohibitive on current microcomputer platforms. Mini and supercomputers offer a realistic platform for MABDOS patient dosimetry estimates.
Roberts, D Allen; Ng, Marie; Ikilezi, Gloria; Gasasira, Anne; Dwyer-Lindgren, Laura; Fullman, Nancy; Nalugwa, Talemwa; Kamya, Moses; Gakidou, Emmanuela
2015-12-03
Globally, countries are increasingly prioritizing the reduction of health inequalities and provision of universal health coverage. While national benchmarking has become more common, such work at subnational levels is rare. The timely and rigorous measurement of local levels and trends in key health interventions and outcomes is vital to identifying areas of progress and detecting early signs of stalled or declining health system performance. Previous studies have yet to provide a comprehensive assessment of Uganda's maternal and child health (MCH) landscape at the subnational level. By triangulating a number of different data sources - population censuses, household surveys, and administrative data - we generated regional estimates of 27 key MCH outcomes, interventions, and socioeconomic indicators from 1990 to 2011. After calculating source-specific estimates of intervention coverage, we used a two-step statistical model involving a mixed-effects linear model as an input to Gaussian process regression to produce regional-level trends. We also generated national-level estimates and constructed an indicator of overall intervention coverage based on the average of 11 high-priority interventions. National estimates often veiled large differences in coverage levels and trends across Uganda's regions. Under-5 mortality declined dramatically, from 163 deaths per 1,000 live births in 1990 to 85 deaths per 1,000 live births in 2011, but a large gap between Kampala and the rest of the country persisted. Uganda rapidly scaled up a subset of interventions across regions, including household ownership of insecticide-treated nets, receipt of artemisinin-based combination therapies among children under 5, and pentavalent immunization. Conversely, most regions saw minimal increases, if not actual declines, in the coverage of indicators that required multiple contacts with the health system, such as four or more antenatal care visits, three doses of oral polio vaccine, and two doses of intermittent preventive therapy during pregnancy. Some of the regions with the lowest levels of overall intervention coverage in 1990, such as North and West Nile, saw marked progress by 2011; nonetheless, sizeable disparities remained between Kampala and the rest of the country. Countrywide, overall coverage increased from 40% in 1990 to 64% in 2011, but coverage in 2011 ranged from 57% to 70% across regions. The MCH landscape in Uganda has, for the most part, improved between 1990 and 2011. Subnational benchmarking quantified the persistence of geographic health inequalities and identified regions in need of additional health systems strengthening. The tracking and analysis of subnational health trends should be conducted regularly to better guide policy decisions and strengthen responsiveness to local health needs.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-13
..., describes the fisheries, evaluates the status of the stock, estimates biological benchmarks, projects future.... Participants will evaluate and recommend datasets appropriate for assessment analysis, employ assessment models to evaluate stock status, estimate population benchmarks and management criteria, and project future...
Faught, Austin M; Davidson, Scott E; Popple, Richard; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S
2017-09-01
The Imaging and Radiation Oncology Core-Houston (IROC-H) Quality Assurance Center (formerly the Radiological Physics Center) has reported varying levels of compliance from their anthropomorphic phantom auditing program. IROC-H studies have suggested that one source of disagreement between institution submitted calculated doses and measurement is the accuracy of the institution's treatment planning system dose calculations and heterogeneity corrections used. In order to audit this step of the radiation therapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Varian flattening filter free (FFF) 6 MV and FFF 10 MV therapeutic x-ray beams were commissioned based on central axis depth dose data from a 10 × 10 cm 2 field size and dose profiles for a 40 × 40 cm 2 field size. The models were validated against open-field measurements in a water tank for field sizes ranging from 3 × 3 cm 2 to 40 × 40 cm 2 . The models were then benchmarked against IROC-H's anthropomorphic head and neck phantom and lung phantom measurements. Validation results, assessed with a ±2%/2 mm gamma criterion, showed average agreement of 99.9% and 99.0% for central axis depth dose data for FFF 6 MV and FFF 10 MV models, respectively. Dose profile agreement using the same evaluation technique averaged 97.8% and 97.9% for the respective models. Phantom benchmarking comparisons were evaluated with a ±3%/2 mm gamma criterion, and agreement averaged 90.1% and 90.8% for the respective models. Multiple source models for Varian FFF 6 MV and FFF 10 MV beams have been developed, validated, and benchmarked for inclusion in an independent dose calculation quality assurance tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
Angus, Simon D.; Piotrowska, Monika Joanna
2014-01-01
Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In constrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear, search for better performing cadidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) on tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effecitive means of significantly improving clinical efficacy. PMID:25460164
Angus, Simon D; Piotrowska, Monika Joanna
2014-01-01
Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In constrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear, search for better performing cadidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) on tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effecitive means of significantly improving clinical efficacy.
Radiological assessment for bauxite mining and alumina refining.
O'Connor, Brian H; Donoghue, A Michael; Manning, Timothy J H; Chesson, Barry J
2013-01-01
Two international benchmarks assess whether the mining and processing of ores containing Naturally Occurring Radioactive Material (NORM) require management under radiological regulations set by local jurisdictions. First, the 1 Bq/g benchmark for radionuclide head of chain activity concentration determines whether materials may be excluded from radiological regulation. Second, processes may be exempted from radiological regulation where occupational above-background exposures for members of the workforce do not exceed 1 mSv/year. This is also the upper-limit of exposure prescribed for members of the public. Alcoa of Australia Limited (Alcoa) has undertaken radiological evaluations of the mining and processing of bauxite from the Darling Range of Western Australia since the 1980s. Short-term monitoring projects have demonstrated that above-background exposures for workers do not exceed 1 mSv/year. A whole-of-year evaluation of above-background, occupational radiological doses for bauxite mining, alumina refining and residue operations was conducted during 2008/2009 as part of the Alcoa NORM Quality Assurance System (NQAS). The NQAS has been guided by publications from the International Commission on Radiological Protection (ICRP), the International Atomic Energy Agency (IAEA) and the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA). The NQAS has been developed specifically in response to implementation of the Australian National Directory on Radiation Protection (NDRP). Positional monitoring was undertaken to increase the accuracy of natural background levels required for correction of occupational exposures. This is important in view of the small increments in exposure that occur in bauxite mining, alumina refining and residue operations relative to natural background. Positional monitoring was also undertaken to assess the potential for exposure in operating locations. Personal monitoring was undertaken to characterise exposures in Similar Exposure Groups (SEGs). The monitoring was undertaken over 12 months, to provide annual average assessments of above-background doses, thereby reducing temporal variations, especially for radon exposures. The monitoring program concentrated on gamma and radon exposures, rather than gross alpha exposures, as past studies have shown that gross alpha exposures from inhalable dust for most of the workforce are small in comparison to combined gamma and radon exposures. The natural background determinations were consistent with data in the literature for localities near Alcoa's mining, refining and residue operations in Western Australia, and also with UNSCEAR global data. Within the mining operations, there was further consistency between the above-background dose estimates and the local geochemistry, with slight elevation of dose levels in mining pits. Conservative estimates of above-background levels for the workforce have been made using an assumption of 100% occupancy (1920 hours per year) for the SEGs considered. Total incremental composite doses for individuals were clearly less than 1.0 mSv/year when gamma, radon progeny and gross alpha exposures were considered. This is despite the activity concentration of some materials being slightly higher than the benchmark of 1 Bq/g. The results are consistent with previous monitoring and demonstrate compliance with the 1 mSv/year exemption level within mining, refining and residue operations. These results will be of value to bauxite mines and alumina refineries elsewhere in the world.
BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets
Regulatory agencies increasingly apply benchmark dose (BMD) modeling to determine points of departure in human risk assessments. BMDExpress applies BMD modeling to transcriptomics datasets and groups genes to biological processes and pathways for rapid assessment of doses at whic...
NASA Astrophysics Data System (ADS)
Hanssen, R. F.
2017-12-01
In traditional geodesy, one is interested in determining the coordinates, or the change in coordinates, of predefined benchmarks. These benchmarks are clearly identifiable and are especially established to be representative of the signal of interest. This holds, e.g., for leveling benchmarks, for triangulation/trilateration benchmarks, and for GNSS benchmarks. The desired coordinates are not identical to the basic measurements, and need to be estimated using robust estimation procedures, where the stochastic nature of the measurements is taken into account. For InSAR, however, the `benchmarks' are not predefined. In fact, usually we do not know where an effective benchmark is located, even though we can determine its dynamic behavior pretty well. This poses several significant problems. First, we cannot describe the quality of the measurements, unless we already know the dynamic behavior of the benchmark. Second, if we don't know the quality of the measurements, we cannot compute the quality of the estimated parameters. Third, rather harsh assumptions need to be made to produce a result. These (usually implicit) assumptions differ between processing operators and the used software, and are severely affected by the amount of available data. Fourth, the `relative' nature of the final estimates is usually not explicitly stated, which is particularly problematic for non-expert users. Finally, whereas conventional geodesy applies rigorous testing to check for measurement or model errors, this is hardly ever done in InSAR-geodesy. These problems make it rather impossible to provide a precise, reliable, repeatable, and `universal' InSAR product or service. Here we evaluate the requirements and challenges to move towards InSAR as a geodetically-proof product. In particular this involves the explicit inclusion of contextual information, as well as InSAR procedures, standards and a technical protocol, supported by the International Association of Geodesy and the international scientific community.
Benchmark Dose Software (BMDS) Development and ...
This report is intended to provide an overview of beta version 1.0 of the implementation of a model of repeated measures data referred to as the Toxicodiffusion model. The implementation described here represents the first steps towards integration of the Toxicodiffusion model into the EPA benchmark dose software (BMDS). This version runs from within BMDS 2.0 using an option screen for making model selection, as is done for other models in the BMDS 2.0 suite. This report is intended to provide an overview of beta version 1.0 of the implementation of a model of repeated measures data referred to as the Toxicodiffusion model.
Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria
ERIC Educational Resources Information Center
Titze, Ingo R.; Hunter, Eric J.
2015-01-01
Purpose: School-teachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method: Vibration dosimetry is reformulated with the inclusion of collision stress.…
Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W
2017-08-28
The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how or how many implants should be statistically compared with a benchmark to assess whether or not that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest.We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. Simulation study. Simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger samples sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear when benchmarking implant performance, net failure estimated using 1-KM is preferential to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Whole-body to tissue concentration ratios for use in biota dose assessments for animals.
Yankovich, Tamara L; Beresford, Nicholas A; Wood, Michael D; Aono, Tasuo; Andersson, Pål; Barnett, Catherine L; Bennett, Pamela; Brown, Justin E; Fesenko, Sergey; Fesenko, J; Hosseini, Ali; Howard, Brenda J; Johansen, Mathew P; Phaneuf, Marcel M; Tagami, Keiko; Takata, Hyoe; Twining, John R; Uchida, Shigeo
2010-11-01
Environmental monitoring programs often measure contaminant concentrations in animal tissues consumed by humans (e.g., muscle). By comparison, demonstration of the protection of biota from the potential effects of radionuclides involves a comparison of whole-body doses to radiological dose benchmarks. Consequently, methods for deriving whole-body concentration ratios based on tissue-specific data are required to make best use of the available information. This paper provides a series of look-up tables with whole-body:tissue-specific concentration ratios for non-human biota. Focus was placed on relatively broad animal categories (including molluscs, crustaceans, freshwater fishes, marine fishes, amphibians, reptiles, birds and mammals) and commonly measured tissues (specifically, bone, muscle, liver and kidney). Depending upon organism, whole-body to tissue concentration ratios were derived for between 12 and 47 elements. The whole-body to tissue concentration ratios can be used to estimate whole-body concentrations from tissue-specific measurements. However, we recommend that any given whole-body to tissue concentration ratio should not be used if the value falls between 0.75 and 1.5. Instead, a value of one should be assumed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Overton, J.H.; Jarabek, A.M.
1989-01-01
The U.S. EPA advocates the assessment of health-effects data and calculation of inhaled reference doses as benchmark values for gauging systemic toxicity to inhaled gases. The assessment often requires an inter- or intra-species dose extrapolation from no observed adverse effect level (NOAEL) exposure concentrations in animals to human equivalent NOAEL exposure concentrations. To achieve this, a dosimetric extrapolation procedure was developed based on the form or type of equations that describe the uptake and disposition of inhaled volatile organic compounds (VOCs) in physiologically-based pharmacokinetic (PB-PK) models. The procedure assumes allometric scaling of most physiological parameters and that the value ofmore » the time-integrated human arterial-blood concentration must be limited to no more than to that of experimental animals. The scaling assumption replaces the need for most parameter values and allows the derivation of a simple formula for dose extrapolation of VOCs that gives equivalent or more-conservative exposure concentrations values than those that would be obtained using a PB-PK model in which scaling was assumed.« less
Benchmarks for Psychotherapy Efficacy in Adult Major Depression
ERIC Educational Resources Information Center
Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.
2007-01-01
This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…
Interim methods for development of inhalation reference concentrations. Draft report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blackburn, K.; Dourson, M.; Erdreich, L.
1990-08-01
An inhalation reference concentration (RfC) is an estimate of continuous inhalation exposure over a human lifetime that is unlikely to pose significant risk of adverse noncancer health effects and serves as a benchmark value for assisting in risk management decisions. Derivation of an RfC involves dose-response assessment of animal data to determine the exposure levels at which no significant increase in the frequency or severity of adverse effects between the exposed population and its appropriate control exists. The assessment requires an interspecies dose extrapolation from a no-observed-adverse-effect level (NOAEL) exposure concentration of an animal to a human equivalent NOAEL (NOAEL(HBC)).more » The RfC is derived from the NOAEL(HBC) by the application of generally order-of-magnitude uncertainty factors. Intermittent exposure scenarios in animals are extrapolated to chronic continuous human exposures. Relationships between external exposures and internal doses depend upon complex simultaneous and consecutive processes of absorption, distribution, metabolism, storage, detoxification, and elimination. To estimate NOAEL(HBC)s when chemical-specific physiologically-based pharmacokinetic models are not available, a dosimetric extrapolation procedure based on anatomical and physiological parameters of the exposed human and animal and the physical parameters of the toxic chemical has been developed which gives equivalent or more conservative exposure concentrations values than those that would be obtained with a PB-PK model.« less
Markowski, V P; Zareba, G; Stern, S; Cox, C; Weiss, B
2001-06-01
Pregnant Holtzman rats were exposed to a single oral dose of 0, 20, 60, or 180 ng/kg 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) on the 18th day of gestation. Their adult female offspring were trained to respond on a lever for brief opportunities to run in specially designed running wheels. Once they had begun responding on a fixed-ratio 1 (FR1) schedule of reinforcement, the fixed-ratio requirement for lever pressing was increased at five-session intervals to values of FR2, FR5, FR10, FR20, and FR30. We examined vaginal cytology after each behavior session to track estrous cyclicity. Under each of the FR values, perinatal TCDD exposure produced a significant dose-related reduction in the number of earned opportunities to run, the lever response rate, and the total number of revolutions in the wheel. Estrous cyclicity was not affected. Because of the consistent dose-response relationship at all FR values, we used the behavioral data to calculate benchmark doses based on displacements from modeled zero-dose performance of 1% (ED(01)) and 10% (ED(10)), as determined by a quadratic fit to the dose-response function. The mean ED(10) benchmark dose for earned run opportunities was 10.13 ng/kg with a 95% lower bound of 5.77 ng/kg. The corresponding ED(01) was 0.98 ng/kg with a 95% lower bound of 0.83 ng/kg. The mean ED(10) for total wheel revolutions was calculated as 7.32 ng/kg with a 95% lower bound of 5.41 ng/kg. The corresponding ED(01) was 0.71 ng/kg with a 95% lower bound of 0.60. These values should be viewed from the perspective of current human body burdens, whose average value, based on TCDD toxic equivalents, has been calculated as 13 ng/kg.
Thompson, Chad M; Gaylor, David W; Tachovsky, J Andrew; Perry, Camarie; Carakostas, Michael C; Haws, Laurie C
2013-12-01
Sulfolane is a widely used industrial solvent that is often used for gas treatment (sour gas sweetening; hydrogen sulfide removal from shale and coal processes, etc.), and in the manufacture of polymers and electronics, and may be found in pharmaceuticals as a residual solvent used in the manufacturing processes. Sulfolane is considered a high production volume chemical with worldwide production around 18 000-36 000 tons per year. Given that sulfolane has been detected as a contaminant in groundwater, an important potential route of exposure is tap water ingestion. Because there are currently no federal drinking water standards for sulfolane in the USA, we developed a noncancer oral reference dose (RfD) based on benchmark dose modeling, as well as a tap water screening value that is protective of ingestion. Review of the available literature suggests that sulfolane is not likely to be mutagenic, clastogenic or carcinogenic, or pose reproductive or developmental health risks except perhaps at very high exposure concentrations. RfD values derived using benchmark dose modeling were 0.01-0.04 mg kg(-1) per day, although modeling of developmental endpoints resulted in higher values, approximately 0.4 mg kg(-1) per day. The lowest, most conservative, RfD of 0.01 mg kg(-1) per day was based on reduced white blood cell counts in female rats. This RfD was used to develop a tap water screening level that is protective of ingestion, viz. 365 µg l(-1). It is anticipated that these values, along with the hazard identification and dose-response modeling described herein, should be informative for risk assessors and regulators interested in setting health-protective drinking water guideline values for sulfolane. Copyright © 2012 John Wiley & Sons, Ltd.
[Using fractional polynomials to estimate the safety threshold of fluoride in drinking water].
Pan, Shenling; An, Wei; Li, Hongyan; Yang, Min
2014-01-01
To study the dose-response relationship between fluoride content in drinking water and prevalence of dental fluorosis on the national scale, then to determine the safety threshold of fluoride in drinking water. Meta-regression analysis was applied to the 2001-2002 national endemic fluorosis survey data of key wards. First, fractional polynomial (FP) was adopted to establish fixed effect model, determining the best FP structure, after that restricted maximum likelihood (REML) was adopted to estimate between-study variance, then the best random effect model was established. The best FP structure was first-order logarithmic transformation. Based on the best random effect model, the benchmark dose (BMD) of fluoride in drinking water and its lower limit (BMDL) was calculated as 0.98 mg/L and 0.78 mg/L. Fluoride in drinking water can only explain 35.8% of the variability of the prevalence, among other influencing factors, ward type was a significant factor, while temperature condition and altitude were not. Fractional polynomial-based meta-regression method is simple, practical and can provide good fitting effect, based on it, the safety threshold of fluoride in drinking water of our country is determined as 0.8 mg/L.
DOSE-RESPONSE ASSESSMENT FOR DEVELOPMENTAL TOXICITY III. STATISTICAL MODELS
Although quantitative modeling has been central to cancer risk assessment for years, the concept of do@e-response modeling for developmental effects is relatively new. he benchmark dose (BMD) approach has been proposed for use with developmental (as well as other noncancer) endpo...
Comparison of Monte Carlo and analytical dose computations for intensity modulated proton therapy
NASA Astrophysics Data System (ADS)
Yepes, Pablo; Adair, Antony; Grosshans, David; Mirkovic, Dragan; Poenisch, Falk; Titt, Uwe; Wang, Qianxia; Mohan, Radhe
2018-02-01
To evaluate the effect of approximations in clinical analytical calculations performed by a treatment planning system (TPS) on dosimetric indices in intensity modulated proton therapy. TPS calculated dose distributions were compared with dose distributions as estimated by Monte Carlo (MC) simulations, calculated with the fast dose calculator (FDC) a system previously benchmarked to full MC. This study analyzed a total of 525 patients for four treatment sites (brain, head-and-neck, thorax and prostate). Dosimetric indices (D02, D05, D20, D50, D95, D98, EUD and Mean Dose) and a gamma-index analysis were utilized to evaluate the differences. The gamma-index passing rates for a 3%/3 mm criterion for voxels with a dose larger than 10% of the maximum dose had a median larger than 98% for all sites. The median difference for all dosimetric indices for target volumes was less than 2% for all cases. However, differences for target volumes as large as 10% were found for 2% of the thoracic patients. For organs at risk (OARs), the median absolute dose difference was smaller than 2 Gy for all indices and cohorts. However, absolute dose differences as large as 10 Gy were found for some small volume organs in brain and head-and-neck patients. This analysis concludes that for a fraction of the patients studied, TPS may overestimate the dose in the target by as much as 10%, while for some OARs the dose could be underestimated by as much as 10 Gy. Monte Carlo dose calculations may be needed to ensure more accurate dose computations to improve target coverage and sparing of OARs in proton therapy.
Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to determine points of departure. BMDExpres...
ABSTRACT The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time informs mode of action determinations and improves quantitative risk assessments. Previous transcription-based microarra...
Modification and benchmarking of SKYSHINE-III for use with ISFSI cask arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertel, N.E.; Napolitano, D.G.
1997-12-01
Dry cask storage arrays are becoming more and more common at nuclear power plants in the United States. Title 10 of the Code of Federal Regulations, Part 72, limits doses at the controlled area boundary of these independent spent-fuel storage installations (ISFSI) to 0.25 mSv (25 mrem)/yr. The minimum controlled area boundaries of such a facility are determined by cask array dose calculations, which include direct radiation and radiation scattered by the atmosphere, also known as skyshine. NAC International (NAC) uses SKYSHINE-III to calculate the gamma-ray and neutron dose rates as a function of distance from ISFSI arrays. In thismore » paper, we present modifications to the SKYSHINE-III that more explicitly model cask arrays. In addition, we have benchmarked the radiation transport methods used in SKYSHINE-III against {sup 60}Co gamma-ray experiments and MCNP neutron calculations.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, B; Kanal, K; Dickinson, R
2014-06-15
Purpose: We have implemented a commercially available Radiation Exposure Monitoring System (REMS) to enhance the processes of radiation dose data collection, analysis and alerting developed over the past decade at our sites of practice. REMS allows for consolidation of multiple radiation dose information sources and quicker alerting than previously developed processes. Methods: Thirty-nine x-ray producing imaging modalities were interfaced with the REMS: thirteen computed tomography scanners, sixteen angiography/interventional systems, nine digital radiography systems and one mammography system. A number of methodologies were used to provide dose data to the REMS: Modality Performed Procedure Step (MPPS) messages, DICOM Radiation Dose Structuredmore » Reports (RDSR), and DICOM header information. Once interfaced, the dosimetry information from each device underwent validation (first 15–20 exams) before release for viewing by end-users: physicians, medical physicists, technologists and administrators. Results: Before REMS, our diagnostic physics group pulled dosimetry data from seven disparate databases throughout the radiology, radiation oncology, cardiology, electrophysiology, anesthesiology/pain management and vascular surgery departments at two major medical centers and four associated outpatient clinics. With the REMS implementation, we now have one authoritative source of dose information for alerting, longitudinal analysis, dashboard/graphics generation and benchmarking. REMS provides immediate automatic dose alerts utilizing thresholds calculated through daily statistical analysis. This has streamlined our Closing the Loop process for estimated skin exposures in excess of our institutional specific substantial radiation dose level which relied on technologist notification of the diagnostic physics group and daily report from the radiology information system (RIS). REMS also automatically calculates the CT size-specific dose estimate (SSDE) as well as provides two-dimensional angulation dose maps for angiography/interventional procedures. Conclusion: REMS implementation has streamlined and consolidated the dosimetry data collection and analysis process at our institutions while eliminating manual entry error and providing immediate alerting and access to dosimetry data to both physicists and physicians. Brent Stewart has funded research through GE Healthcare.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, M; Seuntjens, J; Roberge, D
Purpose: Assessing the performance and uncertainty of a pre-calculated Monte Carlo (PMC) algorithm for proton and electron transport running on graphics processing units (GPU). While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from recycling a limited number of tracks in the pre-generated track bank is missing from the literature. With a proper uncertainty analysis, an optimal pre-generated track bank size can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pre-generated for electrons and protons using EGSnrc and GEANT4, respectively. The PMC algorithm for track transport was implementedmore » on the CUDA programming framework. GPU-PMC dose distributions were compared to benchmark dose distributions simulated using general-purpose MC codes in the same conditions. A latent uncertainty analysis was performed by comparing GPUPMC dose values to a “ground truth” benchmark while varying the track bank size and primary particle histories. Results: GPU-PMC dose distributions and benchmark doses were within 1% of each other in voxels with dose greater than 50% of Dmax. In proton calculations, a submillimeter distance-to-agreement error was observed at the Bragg Peak. Latent uncertainty followed a Poisson distribution with the number of tracks per energy (TPE) and a track bank of 20,000 TPE produced a latent uncertainty of approximately 1%. Efficiency analysis showed a 937× and 508× gain over a single processor core running DOSXYZnrc for 16 MeV electrons in water and bone, respectively. Conclusion: The GPU-PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty below 1%. The track bank size necessary to achieve an optimal efficiency can be tuned based on the desired uncertainty. Coupled with a model to calculate dose contributions from uncharged particles, GPU-PMC is a candidate for inverse planning of modulated electron radiotherapy and scanned proton beams. This work was supported in part by FRSQ-MSSS (Grant No. 22090), NSERC RG (Grant No. 432290) and CIHR MOP (Grant No. MOP-211360)« less
Latent uncertainties of the precalculated track Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, Marc-André; Seuntjens, Jan; Roberge, David
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited numbermore » of unique tracks in the pregenerated track bank is missing from the paper. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of D{sub max}. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed a 807 × efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508 × for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.« less
Latent uncertainties of the precalculated track Monte Carlo method.
Renaud, Marc-André; Roberge, David; Seuntjens, Jan
2015-01-01
While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the paper. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Particle tracks were pregenerated for electrons and protons using EGSnrc and geant4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (cuda) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤ 1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed a 807 × efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508 × for 16 MeV electrons in bone. The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
Liao, Hehuan; Krometis, Leigh-Anne H; Kline, Karen
2016-05-01
Within the United States, elevated levels of fecal indicator bacteria (FIB) remain the leading cause of surface water-quality impairments requiring formal remediation plans under the federal Clean Water Act's Total Maximum Daily Load (TMDL) program. The sufficiency of compliance with numerical FIB criteria as the targeted endpoint of TMDL remediation plans may be questionable given poor correlations between FIB and pathogenic microorganisms and varying degrees of risk associated with exposure to different fecal pollution sources (e.g. human vs animal). The present study linked a watershed-scale FIB fate and transport model with a dose-response model to continuously predict human health risks via quantitative microbial risk assessment (QMRA), for comparison to regulatory benchmarks. This process permitted comparison of risks associated with different fecal pollution sources in an impaired urban watershed in order to identify remediation priorities. Results indicate that total human illness risks were consistently higher than the regulatory benchmark of 36 illnesses/1000 people for the study watershed, even when the predicted FIB levels were in compliance with the Escherichia coli geometric mean standard of 126CFU/100mL. Sanitary sewer overflows were associated with the greatest risk of illness. This is of particular concern, given increasing indications that sewer leakage is ubiquitous in urban areas, yet not typically fully accounted for during TMDL development. Uncertainty analysis suggested the accuracy of risk estimates would be improved by more detailed knowledge of site-specific pathogen presence and densities. While previous applications of the QMRA process to impaired waterways have mostly focused on single storm events or hypothetical situations, the continuous modeling framework presented in this study could be integrated into long-term water quality management planning, especially the United States' TMDL program, providing greater clarity to watershed stakeholders and decision-makers. Copyright © 2016 Elsevier B.V. All rights reserved.
Fisher, Nicholas S.; Beaugelin-Seiller, Karine; Hinton, Thomas G.; Baumann, Zofia; Madigan, Daniel J.; Garnier-Laplace, Jacqueline
2013-01-01
Radioactive isotopes originating from the damaged Fukushima nuclear reactor in Japan following the earthquake and tsunami in March 2011 were found in resident marine animals and in migratory Pacific bluefin tuna (PBFT). Publication of this information resulted in a worldwide response that caused public anxiety and concern, although PBFT captured off California in August 2011 contained activity concentrations below those from naturally occurring radionuclides. To link the radioactivity to possible health impairments, we calculated doses, attributable to the Fukushima-derived and the naturally occurring radionuclides, to both the marine biota and human fish consumers. We showed that doses in all cases were dominated by the naturally occurring alpha-emitter 210Po and that Fukushima-derived doses were three to four orders of magnitude below 210Po-derived doses. Doses to marine biota were about two orders of magnitude below the lowest benchmark protection level proposed for ecosystems (10 µGy⋅h−1). The additional dose from Fukushima radionuclides to humans consuming tainted PBFT in the United States was calculated to be 0.9 and 4.7 µSv for average consumers and subsistence fishermen, respectively. Such doses are comparable to, or less than, the dose all humans routinely obtain from naturally occurring radionuclides in many food items, medical treatments, air travel, or other background sources. Although uncertainties remain regarding the assessment of cancer risk at low doses of ionizing radiation to humans, the dose received from PBFT consumption by subsistence fishermen can be estimated to result in two additional fatal cancer cases per 10,000,000 similarly exposed people. PMID:23733934
SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, T; Finlay, J; Mesina, C
Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improve the accuracy of dose calculation, particularly under heterogeneous conditions with results similar to that of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6 – 15 MV includes the percentage depth dose (PDD) measured at SSD = 90 cm and output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. Off-axismore » ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data includes dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement are generally better than 2% with large errors (up to 13%) observed in the buildup regions of the FDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreements are generally better than 2%. Relative comparison in heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy for a deterministic algorithm is better than a convolution algorithm in heterogeneous medium.« less
Arnold, Scott M; Collins, Michael A; Graham, Cynthia; Jolly, Athena T; Parod, Ralph J; Poole, Alan; Schupp, Thomas; Shiotsuka, Ronald N; Woolhiser, Michael R
2012-12-01
Polyurethanes (PU) are polymers made from diisocyanates and polyols for a variety of consumer products. It has been suggested that PU foam may contain trace amounts of residual toluene diisocyanate (TDI) monomers and present a health risk. To address this concern, the exposure scenario and health risks posed by sleeping on a PU foam mattress were evaluated. Toxicity benchmarks for key non-cancer endpoints (i.e., irritation, sensitization, respiratory tract effects) were determined by dividing points of departure by uncertainty factors. The cancer benchmark was derived using the USEPA Benchmark Dose Software. Results of previous migration and emission data of TDI from PU foam were combined with conservative exposure factors to calculate upper-bound dermal and inhalation exposures to TDI as well as a lifetime average daily dose to TDI from dermal exposure. For each non-cancer endpoint, the toxicity benchmark was divided by the calculated exposure to determine the margin of safety (MOS), which ranged from 200 (respiratory tract) to 3×10(6) (irritation). Although available data indicate TDI is not carcinogenic, a theoretical excess cancer risk (1×10(-7)) was calculated. We conclude from this assessment that sleeping on a PU foam mattress does not pose TDI-related health risks to consumers. Copyright © 2012 Elsevier Inc. All rights reserved.
Radiation breakage of DNA: a model based on random-walk chromatin structure
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Sachs, R. K.
2001-01-01
Monte Carlo computer software, called DNAbreak, has recently been developed to analyze observed non-random clustering of DNA double strand breaks in chromatin after exposure to densely ionizing radiation. The software models coarse-grained configurations of chromatin and radiation tracks, small-scale details being suppressed in order to obtain statistical results for larger scales, up to the size of a whole chromosome. We here give an analytic counterpart of the numerical model, useful for benchmarks, for elucidating the numerical results, for analyzing the assumptions of a more general but less mechanistic "randomly-located-clusters" formalism, and, potentially, for speeding up the calculations. The equations characterize multi-track DNA fragment-size distributions in terms of one-track action; an important step in extrapolating high-dose laboratory results to the much lower doses of main interest in environmental or occupational risk estimation. The approach can utilize the experimental information on DNA fragment-size distributions to draw inferences about large-scale chromatin geometry during cell-cycle interphase.
Berger, Thomas; Bilski, Paweł; Hajek, Michael; Puchalska, Monika; Reitz, Günther
2013-12-01
Astronauts working and living in space are exposed to considerably higher doses and different qualities of ionizing radiation than people on Earth. The multilateral MATROSHKA (MTR) experiment, coordinated by the German Aerospace Center, represents the most comprehensive effort to date in radiation protection dosimetry in space using an anthropomorphic upper-torso phantom used for radiotherapy treatment planning. The anthropomorphic upper-torso phantom maps the radiation distribution as a simulated human body installed outside (MTR-1) and inside different compartments (MTR-2A: Pirs; MTR-2B: Zvezda) of the Russian Segment of the International Space Station. Thermoluminescence dosimeters arranged in a 2.54 cm orthogonal grid, at the site of vital organs and on the surface of the phantom allow for visualization of the absorbed dose distribution with superior spatial resolution. These results should help improve the estimation of radiation risks for long-term human space exploration and support benchmarking of radiation transport codes.
Dose-Response Analysis of RNA-Seq Profiles in Archival ...
Use of archival resources has been limited to date by inconsistent methods for genomic profiling of degraded RNA from formalin-fixed paraffin-embedded (FFPE) samples. RNA-sequencing offers a promising way to address this problem. Here we evaluated transcriptomic dose responses using RNA-sequencing in paired FFPE and frozen (FROZ) samples from two archival studies in mice, one 20 years old. Experimental treatments included 3 different doses of di(2-ethylhexyl)phthalate or dichloroacetic acid for the recently archived and older studies, respectively. Total RNA was ribo-depleted and sequenced using the Illumina HiSeq platform. In the recently archived study, FFPE samples had 35% lower total counts compared to FROZ samples but high concordance in fold-change values of differentially expressed genes (DEGs) (r2 = 0.99), highly enriched pathways (90% overlap with FROZ), and benchmark dose estimates for preselected target genes (2% difference vs FROZ). In contrast, older FFPE samples had markedly lower total counts (3% of FROZ) and poor concordance in global DEGs and pathways. However, counts from FFPE and FROZ samples still positively correlated (r2 = 0.84 across all transcripts) and showed comparable dose responses for more highly expressed target genes. These findings highlight potential applications and issues in using RNA-sequencing data from FFPE samples. Recently archived FFPE samples were highly similar to FROZ samples in sequencing q
The US EPA’s N-Methyl Carbamate (NMC) Cumulative Risk assessment was based on the effect on acetylcholine esterase (AChE) activity of exposure to 10 NMC pesticides through dietary, drinking water, and residential exposures, assuming the effects of joint exposure to NMCs is dose-...
75 FR 40729 - Residues of Quaternary Ammonium Compounds, N-Alkyl (C12-14
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-14
.... Systemic toxicity occurs after absorption and distribution of the chemical to tissues in the body. Such... identified (the LOAEL) or a Benchmark Dose (BMD) approach is sometimes used for risk assessment. Uncertainty.... No systemic effects observed up to 20 mg/ kg/day, highest dose of technical that could be tested...
The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time can inform mode of action and improve quantitative risk assessment. Previous research identified a 330-gene cluster commonly resp...
For more than three decades chronic studies in rodents have been the benchmark for assessing the potential long-term toxicity, and particularly the carcinogenicity, of chemicals. With doses typically administered for about 2 years (18 months to lifetime), the rodent bioassay has ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujii, K; UCLA School of Medicine, Los Angeles, CA; Bostani, M
Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31th, 2014. Four of these scanners were used for inpatients, the other five were used for outpatients. All scanners used Tubemore » Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks.« less
Fukushima Daiichi Radionuclide Inventories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoni, Jeffrey N.; Jankovsky, Zachary Kyle
Radionuclide inventories are generated to permit detailed analyses of the Fukushima Daiichi meltdowns. This is necessary information for severe accident calculations, dose calculations, and source term and consequence analyses. Inventories are calculated using SCALE6 and compared to values predicted by international researchers supporting the OECD/NEA's Benchmark Study on the Accident at Fukushima Daiichi Nuclear Power Station (BSAF). Both sets of inventory information are acceptable for best-estimate analyses of the Fukushima reactors. Consistent nuclear information for severe accident codes, including radionuclide class masses and core decay powers, are also derived from the SCALE6 analyses. Key nuclide activity ratios are calculated asmore » functions of burnup and nuclear data in order to explore the utility for nuclear forensics and support future decommissioning efforts.« less
An Update of Recent Phits Code
NASA Astrophysics Data System (ADS)
Sihver, Lembit; Sato, Tatsuhiko; Niita, Koji; Iwase, Hiroshi; Iwamoto, Yosuke; Matsuda, Norihiro; Nakashima, Hiroshi; Sakamoto, Yukio; Gustafsson, Katarina; Mancusi, Davide
We will first present the current status of the General-Purpose Particle and Heavy-Ion Transport code System (PHITS). In particular, we will describe benchmarking of calculated cross sections against measurements; we will introduce a relativistically covariant version of JQMD, called R- JQMD, that features an improved ground-state initialization algorithm, and we will show heavyion charge-changing cross sections simulated with R-JQMD and compare them to experimental data and to results predicted by the JQMD model. We will also show calculations of dose received by aircrews and personnel in space from cosmic radiation. In recent years, many countries have issued regulations or recommendations to set annual dose limitations for aircrews. Since estimation of cosmic-ray spectra in the atmosphere is an essential issue for the evaluation of aviation doses we have calculated these spectra using PHITS. The accuracy of the simulation, which has well been verified by experimental data taken under various conditions, will be presented together with a software called EXPACS-V, that can visualize the cosmic-ray dose rates at ground level or at a certain altitude on the map of Google Earth, using the PHITS based Analytical Radiation Model in the Atmosphere (PARMA). PARMA can instantaneously calculate the cosmic-ray spectra anywhere in the world by specifying the atmospheric depth, the vertical cut-off rigidity and the force-field potential. For the purpose of examining the applicability of PHITS to the shielding design in space, the absorbed doses in a tissue equivalent water phantom inside an imaginary space vessel has been estimated for different shielding materials of different thicknesses. The results confirm previous results which indicate that PHITS is a suitable tool when performing shielding design studies of spacecrafts. Finally we have used PHITS for the calculations of depth-dose distributions in MATROSHKA, which is an ESA project dedicated to determining the radiation load on astronauts within and outside the International Space Station (ISS).
A Signal-to-Noise Crossover Dose as the Point of Departure for Health Risk Assessment
Portier, Christopher J.; Krewski, Daniel
2011-01-01
Background: The U.S. National Toxicology Program (NTP) cancer bioassay database provides an opportunity to compare both existing and new approaches to determining points of departure (PoDs) for establishing reference doses (RfDs). Objectives: The aims of this study were a) to investigate the risk associated with the traditional PoD used in human health risk assessment [the no observed adverse effect level (NOAEL)]; b) to present a new approach based on the signal-to-noise crossover dose (SNCD); and c) to compare the SNCD and SNCD-based RfD with PoDs and RfDs based on the NOAEL and benchmark dose (BMD) approaches. Methods: The complete NTP database was used as the basis for these analyses, which were performed using the Hill model. We determined NOAELs and estimated corresponding extra risks. Lower 95% confidence bounds on the BMD (BMDLs) corresponding to extra risks of 1%, 5%, and 10% (BMDL01, BMDL05, and BMDL10, respectively) were also estimated. We introduce the SNCD as a new PoD, defined as the dose where the additional risk is equal to the “background noise” (the difference between the upper and lower bounds of the two-sided 90% confidence interval on absolute risk) or a specified fraction thereof. Results: The median risk at the NOAEL was approximately 10%, and the default uncertainty factor (UF = 100) was considered most applicable to the BMDL10. Therefore, we chose a target risk of 1/1,000 (0.1/100) to derive an SNCD-based RfD by linear extrapolation. At the median, this approach provided the same RfD as the BMDL10 divided by the default UF. Conclusions: Under a standard BMD approach, the BMDL10 is considered to be the most appropriate PoD. The SNCD approach, which is based on the lowest dose at which the signal can be reliably detected, warrants further development as a PoD for human health risk assessment. PMID:21813365
An approach to estimate body dimensions through constant body ratio benchmarks.
Chao, Wei-Cheng; Wang, Eric Min-Yang
2010-12-01
Building a new anthropometric database is a difficult and costly job that requires considerable manpower and time. However, most designers and engineers do not know how to convert old anthropometric data into applicable new data with minimal errors and costs (Wang et al., 1999). To simplify the process of converting old anthropometric data into useful new data, this study analyzed the available data in paired body dimensions in an attempt to determine constant body ratio (CBR) benchmarks that are independent of gender and age. In total, 483 CBR benchmarks were identified and verified from 35,245 ratios analyzed. Additionally, 197 estimation formulae, taking as inputs 19 easily measured body dimensions, were built using 483 CBR benchmarks. Based on the results for 30 recruited participants, this study determined that the described approach is more accurate and cost-effective than alternative techniques. Copyright © 2010 Elsevier Ltd. All rights reserved.
Beyer, W. Nelson; Chen, Yu; Henry, Paula; May, Thomas; Mosby, David; Rattner, Barnett A.; Shearn-Bochsler, Valerie I.; Sprague, Daniel; Weber, John
2014-01-01
This study relates tissue concentrations and toxic effects of Pb in Japanese quail (Coturnix japonica) to the dietary exposure of soil-borne Pb associated with mining and smelting. From 0% to 12% contaminated soil, by weight, was added to 5 experimental diets (0.12 to 382 mg Pb/kg, dry wt) and fed to the quail for 6 weeks. Benchmark doses associated with a 50% reduction in delta-aminolevulinic acid dehydratase activity were 0.62 mg Pb/kg in the blood, dry wt, and 27 mg Pb/kg in the diet. Benchmark doses associated with a 20% increase in the concentration of erythrocyte protoporphyrin were 2.7 mg Pb/kg in the blood and 152 mg Pb/kg in the diet. The quail showed no other signs of toxicity (histopathological lesions, alterations in plasma–testosterone concentration, and body and organ weights). The relation of the blood Pb concentration to the soil Pb concentration was linear, with a slope of 0.013 mg Pb/kg of blood (dry wt) divided by mg Pb/kg of diet. We suggest that this slope is potentially useful in ecological risk assessments on birds in the same way that the intake slope factor is an important parameter in risk assessments of children exposed to Pb. The slope may also be used in a tissue-residue approach as an additional line of evidence in ecological risk assessment, supplementary to an estimate of hazard based on dietary toxicity reference values.
NASA Astrophysics Data System (ADS)
Bu, Zhongming; Zhang, Yinping; Mmereki, Daniel; Yu, Wei; Li, Baizhan
2016-02-01
Six phthalates - dimethyl phthalate (DMP), diethyl phthalate (DEP), di(isobutyl) phthalate (DiBP), di(n-butyl) phthalate (DnBP), butyl benzyl phthalate (BBzP) and di(2-ethylhexyl) phthalate (DEHP) - in indoor gas-phase and dust samples were measured in thirty residential apartments for the first time in Chongqing, China. Monte-Carlo simulation was used to estimate preschool children's exposure via inhalation, non-dietary ingestion and dermal absorption based on gas-phase and dust concentrations. Risk assessment was evaluated by comparing the modeled exposure doses with child-specific benchmarks specified in California's Proposition 65. The detection frequency for all the targeted phthalates was more than 80% except for BBzP. DMP was the most predominant compound in the gas-phase (median = 0.91 μg/m3 and 0.82 μg/m3 in living rooms and bedrooms, respectively), and DEHP was the most predominant compound in the dust samples (median = 1543 μg/g and 1450 μg/g in living rooms and bedrooms, respectively). Correlation analysis suggests that indoor DiBP and DnBP might come from the same emission sources. The simulations showed that the median DEHP daily intake was 3.18-4.28 μg/day/kg-bw in all age groups, suggesting that it was the greatest of the targeted phthalates. The risk assessment indicated that the exposure doses of DnBP and DEHP exceeded the child-specific benchmarks in more than 90% of preschool children in Chongqing. Therefore, from a children's health perspective, efforts should focus on controlling indoor phthalate concentrations and exposures.
Risk assessment of skin lightening cosmetics containing hydroquinone.
Matsumoto, Mariko; Todo, Hiroaki; Akiyama, Takumi; Hirata-Koizumi, Mutsuko; Sugibayashi, Kenji; Ikarashi, Yoshiaki; Ono, Atsushi; Hirose, Akihiko; Yokoyama, Kazuhito
2016-11-01
Following reports on potential risks of hydroquinone (HQ), HQ for skin lightening has been banned or restricted in Europe and the US. In contrast, HQ is not listed as a prohibited or limited ingredient for cosmetic use in Japan, and many HQ cosmetics are sold without restriction. To assess the risk of systemic effects of HQ, we examined the rat skin permeation rates of four HQ (0.3%, 1.0%, 2.6%, and 3.3%) cosmetics. The permeation coefficients ranged from 1.2 × 10 -9 to 3.1 × 10 -7 cm/s, with the highest value superior than the HQ aqueous solution (1.6 × 10 -7 cm/s). After dermal application of the HQ cosmetics to rats, HQ in plasma was detected only in the treatment by highest coefficient cosmetic. Absorbed HQ levels treated with this highest coefficient cosmetic in humans were estimated by numerical methods, and we calculated the margin of exposure (MOE) for the estimated dose (0.017 mg/kg-bw/day in proper use) to a benchmark dose for rat renal tubule adenomas. The MOE of 559 is judged to be in a range safe for the consumer. However, further consideration may be required for regulation of cosmetic ingredients. Copyright © 2016 Elsevier Inc. All rights reserved.
Ali, F; Waker, A J; Waller, E J
2014-10-01
Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators.
Ligorio, Gabriele; Sabatini, Angelo Maria
2015-12-19
In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The higher mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas a lower correlation was found for an axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented.
NASA Astrophysics Data System (ADS)
Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei
2015-06-01
The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where neutron dose shows very strong dependence on the accuracy of accompanying high energy photon dose. During the dose derivation, it is an important issue to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used general purposed Monte Carlo codes, MCNP5, EGSnrc, FLUKA or GEANT4 for benchmark verifications among them and carefully measured values for a precise estimation of chamber current from absorbed dose rate of cavity gas. Also, energy dependent response functions of two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV photons and electrons by using the optimal simple spherical and detailed IC models. The measurements were performed in the well-defined (a) four primary M-80, M-100, M120 and M150 X-ray calibration fields, (b) primary 60Co calibration beam, (c) 6 MV and 10 MV photon, (d) 6 MeV and 18 MeV electron LINACs in hospital and (e) BNCT clinical trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. In the Mg(Ar) chamber, MCNP5 showed lower response than other codes for photon energy region below 0.1 MeV and presented similar response above 0.2 MeV (agreed within 5% in the simple spherical model). With the increase of electron energy, the response difference between MCNP5 and other codes became larger in both chambers. Compared with the measured currents, MCNP5 had the difference from the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINACs beams. But for the Mg(Ar) chamber, the derivations reached 7.8-16.5% below 120 kVp X-ray beams. In this study, we were especially interested in BNCT doses where low energy photon contribution is less to ignore, MCNP model is recognized as the most suitable to simulate wide photon-electron and neutron energy distributed responses of the paired ICs. Also, MCNP provides the best prediction of BNCT source adjustment by the detector's neutron and photon responses.
Estimate of safe human exposure levels for lunar dust based on comparative benchmark dose modeling.
James, John T; Lam, Chiu-Wing; Santana, Patricia A; Scully, Robert R
2013-04-01
Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. The United States and other space faring nations intend to return to the moon for extensive exploration within a few decades. In the meantime, habitats for that exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. Herein we estimate safe exposure limits for lunar dust collected during the Apollo 14 mission. We instilled three respirable-sized (∼2 μ mass median diameter) lunar dusts (two ground and one unground) and two standard dusts of widely different toxicities (quartz and TiO₂) into the respiratory system of rats. Rats in groups of six were given 0, 1, 2.5 or 7.5 mg of the test dust in a saline-Survanta® vehicle, and biochemical and cellular biomarkers of toxicity in lung lavage fluid were assayed 1 week and one month after instillation. By comparing the dose--response curves of sensitive biomarkers, we estimated safe exposure levels for astronauts and concluded that unground lunar dust and dust ground by two different methods were not toxicologically distinguishable. The safe exposure estimates were 1.3 ± 0.4 mg/m³ (jet-milled dust), 1.0 ± 0.5 mg/m³ (ball-milled dust) and 0.9 ± 0.3 mg/m³ (unground, natural dust). We estimate that 0.5-1 mg/m³ of lunar dust is safe for periodic human exposures during long stays in habitats on the lunar surface.
Manktelow, Bradley N; Seaton, Sarah E; Evans, T Alun
2016-12-01
There is an increasing use of statistical methods, such as funnel plots, to identify poorly performing healthcare providers. Funnel plots comprise the construction of control limits around a benchmark and providers with outcomes falling outside the limits are investigated as potential outliers. The benchmark is usually estimated from observed data but uncertainty in this estimate is usually ignored when constructing control limits. In this paper, the use of funnel plots in the presence of uncertainty in the value of the benchmark is reviewed for outcomes from a Binomial distribution. Two methods to derive the control limits are shown: (i) prediction intervals; (ii) tolerance intervals Tolerance intervals formally include the uncertainty in the value of the benchmark while prediction intervals do not. The probability properties of 95% control limits derived using each method were investigated through hypothesised scenarios. Neither prediction intervals nor tolerance intervals produce funnel plot control limits that satisfy the nominal probability characteristics when there is uncertainty in the value of the benchmark. This is not necessarily to say that funnel plots have no role to play in healthcare, but that without the development of intervals satisfying the nominal probability characteristics they must be interpreted with care. © The Author(s) 2014.
Jaramillo, Eduardo; Melnick, Daniel; Baez, Juan Carlos; Montecino, Henry; Lagos, Nelson A; Acuña, Emilio; Manzano, Mario; Camus, Patricio A
2017-01-01
The April 1st 2014 Iquique earthquake (MW 8.1) occurred along the northern Chile margin where the Nazca plate is subducted below the South American continent. The last great megathrust earthquake here, in 1877 of Mw ~8.8 opened a seismic gap, which was only partly closed by the 2014 earthquake. Prior to the earthquake in 2013, and shortly after it we compared data from leveled benchmarks, deployed campaign GPS instruments, continuous GPS stations and estimated sea levels using the upper vertical level of rocky shore benthic organisms including algae, barnacles, and mussels. Land-level changes estimated from mean elevations of benchmarks indicate subsidence along a ~100-km stretch of coast, ranging from 3 to 9 cm at Corazones (18°30'S) to between 30 and 50 cm at Pisagua (19°30'S). About 15 cm of uplift was measured along the southern part of the rupture at Chanabaya (20°50'S). Land-level changes obtained from benchmarks and campaign GPS were similar at most sites (mean difference 3.7±3.2 cm). Higher differences however, were found between benchmarks and continuous GPS (mean difference 8.5±3.6 cm), possibly because sites were not collocated and separated by several kilometers. Subsidence estimated from the upper limits of intertidal fauna at Pisagua ranged between 40 to 60 cm, in general agreement with benchmarks and GPS. At Chanavaya, the magnitude and sense of displacement of the upper marine limit was variable across species, possibly due to species-dependent differences in ecology. Among the studied species, measurements on lithothamnioid calcareous algae most closely matched those made with benchmarks and GPS. When properly calibrated, rocky shore benthic species may be used to accurately measure land-level changes along coasts affected by subduction earthquakes. Our calibration of those methods will improve their accuracy when applied to coasts lacking pre-earthquake data and in estimating deformation during pre-instrumental earthquakes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Donald L.; Hilohi, C. Michael; Spelic, David C.
2012-10-15
Purpose: To determine patient radiation doses from interventional cardiology procedures in the U.S and to suggest possible initial values for U.S. benchmarks for patient radiation dose from selected interventional cardiology procedures [fluoroscopically guided diagnostic cardiac catheterization and percutaneous coronary intervention (PCI)]. Methods: Patient radiation dose metrics were derived from analysis of data from the 2008 to 2009 Nationwide Evaluation of X-ray Trends (NEXT) survey of cardiac catheterization. This analysis used deidentified data and did not require review by an IRB. Data from 171 facilities in 30 states were analyzed. The distributions (percentiles) of radiation dose metrics were determined for diagnosticmore » cardiac catheterizations, PCI, and combined diagnostic and PCI procedures. Confidence intervals for these dose distributions were determined using bootstrap resampling. Results: Percentile distributions (advisory data sets) and possible preliminary U.S. reference levels (based on the 75th percentile of the dose distributions) are provided for cumulative air kerma at the reference point (K{sub a,r}), cumulative air kerma-area product (P{sub KA}), fluoroscopy time, and number of cine runs. Dose distributions are sufficiently detailed to permit dose audits as described in National Council on Radiation Protection and Measurements Report No. 168. Fluoroscopy times are consistent with those observed in European studies, but P{sub KA} is higher in the U.S. Conclusions: Sufficient data exist to suggest possible initial benchmarks for patient radiation dose for certain interventional cardiology procedures in the U.S. Our data suggest that patient radiation dose in these procedures is not optimized in U.S. practice.« less
Johnson, George E.; Battaion, Hannah L.; Slob, Wout; Gollapudi, B.
2017-01-01
There is growing interest in quantitative analysis of in vivo genetic toxicity dose‐response data, and use of point‐of‐departure (PoD) metrics such as the benchmark dose (BMD) for human health risk assessment (HHRA). Currently, multiple transgenic rodent (TGR) assay variants, employing different rodent strains and reporter transgenes, are used for the assessment of chemically‐induced genotoxic effects in vivo. However, regulatory issues arise when different PoD values (e.g., lower BMD confidence intervals or BMDLs) are obtained for the same compound across different TGR assay variants. This study therefore employed the BMD approach to examine the ability of different TGR variants to yield comparable genotoxic potency estimates. Review of over 2000 dose‐response datasets identified suitably‐matched dose‐response data for three compounds (ethyl methanesulfonate or EMS, N‐ethyl‐N‐nitrosourea or ENU, and dimethylnitrosamine or DMN) across four commonly‐used murine TGR variants (Muta™Mouse lacZ, Muta™Mouse cII, gpt delta and BigBlue® lacI). Dose‐response analyses provided no conclusive evidence that TGR variant choice significantly influences the derived genotoxic potency estimate. This conclusion was reliant upon taking into account the importance of comparing BMD confidence intervals as opposed to directly comparing PoD values (e.g., comparing BMDLs). Comparisons with earlier works suggested that with respect to potency determination, tissue choice is potentially more important than choice of TGR assay variant. Scoring multiple tissues selected on the basis of supporting toxicokinetic information is therefore recommended. Finally, we used typical within‐group variances to estimate preliminary endpoint‐specific benchmark response (BMR) values across several TGR variants/tissues. We discuss why such values are required for routine use of genetic toxicity PoDs for HHRA. Environ. Mol. Mutagen. 58:632–643, 2017. © 2017 Her Majesty the Queen in Right of Canada. Environmental and Molecular Mutagenesis Published by Wiley Periodicals, Inc. PMID:28945287
Gatti, Daniel M.; Morgan, Daniel L.; Kissling, Grace E.; Shockley, Keith R.; Knudsen, Gabriel A.; Shepard, Kim G.; Price, Herman C.; King, Deborah; Witt, Kristine L.; Pedersen, Lars C.; Munger, Steven C.; Svenson, Karen L.; Churchill, Gary A.
2014-01-01
Background Inhalation of benzene at levels below the current exposure limit values leads to hematotoxicity in occupationally exposed workers. Objective We sought to evaluate Diversity Outbred (DO) mice as a tool for exposure threshold assessment and to identify genetic factors that influence benzene-induced genotoxicity. Methods We exposed male DO mice to benzene (0, 1, 10, or 100 ppm; 75 mice/exposure group) via inhalation for 28 days (6 hr/day for 5 days/week). The study was repeated using two independent cohorts of 300 animals each. We measured micronuclei frequency in reticulocytes from peripheral blood and bone marrow and applied benchmark concentration modeling to estimate exposure thresholds. We genotyped the mice and performed linkage analysis. Results We observed a dose-dependent increase in benzene-induced chromosomal damage and estimated a benchmark concentration limit of 0.205 ppm benzene using DO mice. This estimate is an order of magnitude below the value estimated using B6C3F1 mice. We identified a locus on Chr 10 (31.87 Mb) that contained a pair of overexpressed sulfotransferases that were inversely correlated with genotoxicity. Conclusions The genetically diverse DO mice provided a reproducible response to benzene exposure. The DO mice display interindividual variation in toxicity response and, as such, may more accurately reflect the range of response that is observed in human populations. Studies using DO mice can localize genetic associations with high precision. The identification of sulfotransferases as candidate genes suggests that DO mice may provide additional insight into benzene-induced genotoxicity. Citation French JE, Gatti DM, Morgan DL, Kissling GE, Shockley KR, Knudsen GA, Shepard KG, Price HC, King D, Witt KL, Pedersen LC, Munger SC, Svenson KL, Churchill GA. 2015. Diversity Outbred mice identify population-based exposure thresholds and genetic factors that influence benzene-induced genotoxicity. Environ Health Perspect 123:237–245; http://dx.doi.org/10.1289/ehp.1408202 PMID:25376053
Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.
Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J
2016-07-01
Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analysis on monthly or yearly basis but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data on many WWTPs should theoretically enable the decrease of the management response time by daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take in consideration the pollutant load in order to enable the comparison between different plants. For example, EOS does not analyse the energy consumption but the energy consumption on pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval based benchmark approach, the authors propose an effective, fast and reproducible way to manage infrequent inlet measurements. Its use enables benchmarking on a daily basis and prepares the ground for further investigation. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sands, Michelle M.; Borrego, David; Maynard, Matthew R.; Bahadori, Amir A.; Bolch, Wesley E.
2017-11-01
One of the hazards faced by space crew members in low-Earth orbit or in deep space is exposure to ionizing radiation. It has been shown previously that while differences in organ-specific and whole-body risk estimates due to body size variations are small for highly-penetrating galactic cosmic rays, large differences in these quantities can result from exposure to shorter-range trapped proton or solar particle event radiations. For this reason, it is desirable to use morphometrically accurate computational phantoms representing each astronaut for a risk analysis, especially in the case of a solar particle event. An algorithm was developed to automatically sculpt and scale the UF adult male and adult female hybrid reference phantom to the individual outer body contour of a given astronaut. This process begins with the creation of a laser-measured polygon mesh model of the astronaut's body contour. Using the auto-scaling program and selecting several anatomical landmarks, the UF adult male or female phantom is adjusted to match the laser-measured outer body contour of the astronaut. A dosimetry comparison study was conducted to compare the organ dose accuracy of both the autoscaled phantom and that based upon a height-weight matched phantom from the UF/NCI Computational Phantom Library. Monte Carlo methods were used to simulate the environment of the August 1972 and February 1956 solar particle events. Using a series of individual-specific voxel phantoms as a local benchmark standard, autoscaled phantom organ dose estimates were shown to provide a 1% and 10% improvement in organ dose accuracy for a population of females and males, respectively, as compared to organ doses derived from height-weight matched phantoms from the UF/NCI Computational Phantom Library. In addition, this slight improvement in organ dose accuracy from the autoscaled phantoms is accompanied by reduced computer storage requirements and a more rapid method for individualized phantom generation when compared to the UF/NCI Computational Phantom Library.
NASA Astrophysics Data System (ADS)
Williamson, Jeffrey F.
2006-09-01
This paper briefly reviews the evolution of brachytherapy dosimetry from 1900 to the present. Dosimetric practices in brachytherapy fall into three distinct eras: During the era of biological dosimetry (1900-1938), radium pioneers could only specify Ra-226 and Rn-222 implants in terms of the mass of radium encapsulated within the implanted sources. Due to the high energy of its emitted gamma rays and the long range of its secondary electrons in air, free-air chambers could not be used to quantify the output of Ra-226 sources in terms of exposure. Biological dosimetry, most prominently the threshold erythema dose, gained currency as a means of intercomparing radium treatments with exposure-calibrated orthovoltage x-ray units. The classical dosimetry era (1940-1980) began with successful exposure standardization of Ra-226 sources by Bragg-Gray cavity chambers. Classical dose-computation algorithms, based upon 1-D buildup factor measurements and point-source superposition computational algorithms, were able to accommodate artificial radionuclides such as Co-60, Ir-192, and Cs-137. The quantitative dosimetry era (1980- ) arose in response to the increasing utilization of low energy K-capture radionuclides such as I-125 and Pd-103 for which classical approaches could not be expected to estimate accurate correct doses. This led to intensive development of both experimental (largely TLD-100 dosimetry) and Monte Carlo dosimetry techniques along with more accurate air-kerma strength standards. As a result of extensive benchmarking and intercomparison of these different methods, single-seed low-energy radionuclide dose distributions are now known with a total uncertainty of 3%-5%.
Benchmarking the minimum Electron Beam (eBeam) dose required for the sterilization of space foods
NASA Astrophysics Data System (ADS)
Bhatia, Sohini S.; Wall, Kayley R.; Kerth, Chris R.; Pillai, Suresh D.
2018-02-01
As manned space missions extend in length, the safety, nutrition, acceptability, and shelf life of space foods are of paramount importance to NASA. Since food and mealtimes play a key role in reducing stress and boredom of prolonged missions, the quality of food in terms of appearance, flavor, texture, and aroma can have significant psychological ramifications on astronaut performance. The FDA, which oversees space foods, currently requires a minimum dose of 44 kGy for irradiated space foods. The underlying hypothesis was that commercial sterility of space foods could be achieved at a significantly lower dose, and this lowered dose would positively affect the shelf life of the product. Electron beam processed beef fajitas were used as an example NASA space food to benchmark the minimum eBeam dose required for sterility. A 15 kGy dose was able to achieve an approximately 10 log reduction in Shiga-toxin-producing Escherichia coli bacteria, and a 5 log reduction in Clostridium sporogenes spores. Furthermore, accelerated shelf life testing (ASLT) to determine sensory and quality characteristics under various conditions was conducted. Using Multidimensional gas-chromatography-olfactometry-mass spectrometry (MDGC-O-MS), numerous volatiles were shown to be dependent on the dose applied to the product. Furthermore, concentrations of off -flavor aroma compounds such as dimethyl sulfide were decreased at the reduced 15 kGy dose. The results suggest that the combination of conventional cooking combined with eBeam processing (15 kGy) can achieve the safety and shelf-life objectives needed for long duration space-foods.
The grout/glass performance assessment code system (GPACS) with verification and benchmarking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepho, M.G.; Sutherland, W.H.; Rittmann, P.D.
1994-12-01
GPACS is a computer code system for calculating water flow (unsaturated or saturated), solute transport, and human doses due to the slow release of contaminants from a waste form (in particular grout or glass) through an engineered system and through a vadose zone to an aquifer, well and river. This dual-purpose document is intended to serve as a user`s guide and verification/benchmark document for the Grout/Glass Performance Assessment Code system (GPACS). GPACS can be used for low-level-waste (LLW) Glass Performance Assessment and many other applications including other low-level-waste performance assessments and risk assessments. Based on all the cses presented, GPACSmore » is adequate (verified) for calculating water flow and contaminant transport in unsaturated-zone sediments and for calculating human doses via the groundwater pathway.« less
Dose-response algorithms for water-borne Pseudomonas aeruginosa folliculitis.
Roser, D J; Van Den Akker, B; Boase, S; Haas, C N; Ashbolt, N J; Rice, S A
2015-05-01
We developed two dose-response algorithms for P. aeruginosa pool folliculitis using bacterial and lesion density estimates, associated with undetectable, significant, and almost certain folliculitis. Literature data were fitted to Furumoto & Mickey's equations, developed for plant epidermis-invading pathogens: N l = A ln(1 + BC) (log-linear model); P inf = 1-e(-r c C) (exponential model), where A and B are 2.51644 × 107 lesions/m2 and 2.28011 × 10-11 c.f.u./ml P. aeruginosa, respectively; C = pathogen density (c.f.u./ml), N l = folliculitis lesions/m2, P inf = probability of infection, and r C = 4·3 × 10-7 c.f.u./ml P. aeruginosa. Outbreak data indicates these algorithms apply to exposure durations of 41 ± 25 min. Typical water quality benchmarks (≈10-2 c.f.u./ml) appear conservative but still useful as the literature indicated repeated detection likely implies unstable control barriers and bacterial bloom potential. In future, culture-based outbreak testing should be supplemented with quantitative polymerase chain reaction and organic carbon assays, and quantification of folliculitis aetiology to better understand P. aeruginosa risks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan Chan Tseung, H; Ma, J; Ma, D
2015-06-15
Purpose: To demonstrate the feasibility of fast Monte Carlo (MC) based biological planning for the treatment of thyroid tumors in spot-scanning proton therapy. Methods: Recently, we developed a fast and accurate GPU-based MC simulation of proton transport that was benchmarked against Geant4.9.6 and used as the dose calculation engine in a clinically-applicable GPU-accelerated IMPT optimizer. Besides dose, it can simultaneously score the dose-averaged LET (LETd), which makes fast biological dose (BD) estimates possible. To convert from LETd to BD, we used a linear relation based on cellular irradiation data. Given a thyroid patient with a 93cc tumor volume, we createdmore » a 2-field IMPT plan in Eclipse (Varian Medical Systems). This plan was re-calculated with our MC to obtain the BD distribution. A second 5-field plan was made with our in-house optimizer, using pre-generated MC dose and LETd maps. Constraints were placed to maintain the target dose to within 25% of the prescription, while maximizing the BD. The plan optimization and calculation of dose and LETd maps were performed on a GPU cluster. The conventional IMPT and biologically-optimized plans were compared. Results: The mean target physical and biological doses from our biologically-optimized plan were, respectively, 5% and 14% higher than those from the MC re-calculation of the IMPT plan. Dose sparing to critical structures in our plan was also improved. The biological optimization, including the initial dose and LETd map calculations, can be completed in a clinically viable time (∼30 minutes) on a cluster of 25 GPUs. Conclusion: Taking advantage of GPU acceleration, we created a MC-based, biologically optimized treatment plan for a thyroid patient. Compared to a standard IMPT plan, a 5% increase in the target’s physical dose resulted in ∼3 times as much increase in the BD. Biological planning was thus effective in escalating the target BD.« less
Qualitative and quantitative approaches in the dose-response assessment of genotoxic carcinogens.
Fukushima, Shoji; Gi, Min; Kakehashi, Anna; Wanibuchi, Hideki; Matsumoto, Michiharu
2016-05-01
Qualitative and quantitative approaches are important issues in field of carcinogenic risk assessment of the genotoxic carcinogens. Herein, we provide quantitative data on low-dose hepatocarcinogenicity studies for three genotoxic hepatocarcinogens: 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline (MeIQx), 2-amino-3-methylimidazo[4,5-f]quinoline (IQ) and N-nitrosodiethylamine (DEN). Hepatocarcinogenicity was examined by quantitative analysis of glutathione S-transferase placental form (GST-P) positive foci, which are the preneoplastic lesions in rat hepatocarcinogenesis and the endpoint carcinogenic marker in the rat liver medium-term carcinogenicity bioassay. We also examined DNA damage and gene mutations which occurred through the initiation stage of carcinogenesis. For the establishment of points of departure (PoD) from which the cancer-related risk can be estimated, we analyzed the above events by quantitative no-observed-effect level and benchmark dose approaches. MeIQx at low doses induced formation of DNA-MeIQx adducts; somewhat higher doses caused elevation of 8-hydroxy-2'-deoxyquanosine levels; at still higher doses gene mutations occurred; and the highest dose induced formation of GST-P positive foci. These data indicate that early genotoxic events in the pathway to carcinogenesis showed the expected trend of lower PoDs for earlier events in the carcinogenic process. Similarly, only the highest dose of IQ caused an increase in the number of GST-P positive foci in the liver, while IQ-DNA adduct formation was observed with low doses. Moreover, treatment with DEN at low doses had no effect on development of GST-P positive foci in the liver. These data on PoDs for the markers contribute to understand whether genotoxic carcinogens have a threshold for their carcinogenicity. The most appropriate approach to use in low dose-response assessment must be approved on the basis of scientific judgment. © The Author 2015. Published by Oxford University Press on behalf of the UK Environmental Mutagen Society. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Pasler, Marlies; Kaas, Jochem; Perik, Thijs; Geuze, Job; Dreindl, Ralf; Künzler, Thomas; Wittkamper, Frits; Georg, Dietmar
2015-12-01
To systematically evaluate machine specific quality assurance (QA) for volumetric modulated arc therapy (VMAT) based on log files by applying a dynamic benchmark plan. A VMAT benchmark plan was created and tested on 18 Elekta linacs (13 MLCi or MLCi2, 5 Agility) at 4 different institutions. Linac log files were analyzed and a delivery robustness index was introduced. For dosimetric measurements an ionization chamber array was used. Relative dose deviations were assessed by mean gamma for each control point and compared to the log file evaluation. Fourteen linacs delivered the VMAT benchmark plan, while 4 linacs failed by consistently terminating the delivery. The mean leaf error (±1SD) was 0.3±0.2 mm for all linacs. Large MLC maximum errors up to 6.5 mm were observed at reversal positions. Delivery robustness index accounting for MLC position correction (0.8-1.0) correlated with delivery time (80-128 s) and depended on dose rate performance. Dosimetric evaluation indicated in general accurate plan reproducibility with γ(mean)(±1 SD)=0.4±0.2 for 1 mm/1%. However single control point analysis revealed larger deviations and attributed well to log file analysis. The designed benchmark plan helped identify linac related malfunctions in dynamic mode for VMAT. Log files serve as an important additional QA measure to understand and visualize dynamic linac parameters. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Fanning, Julia L.; Schwarz, Gregory E.; Lewis, William C.
2001-01-01
A benchmark irrigation monitoring network of farms located in a 32-county area in southwestern Georgia was established in 1995 to improve estimates of irrigation water use. A stratified random sample of 500 permitted irrigators was selected from a data base--maintained by the Georgia Department of Natural Resources, Georgia Environmental Protection Division, Water Resources Management Branch--to obtain 180 voluntary participants in the study area. Site-specific irrigation data were collected at each farm using running-time totalizers and noninvasive flowmeters. Data were collected and compiled for 50 farms for 1995 and 130 additional farms for the 1996 growing season--a total of 180 farms. Irrigation data collected during the 1996 growing season were compiled for 180 benchmark farms and used to develop a statistical model to estimate irrigation water use in 32 counties in southwestern Georgia. The estimates derived were developed from using a statistical approach know as "bootstrap analysis" that allows for the estimation of precision. Five model components--whether-to-irrigate, acres irrigated, crop selected, seasonal-irrigation scheduling, and the amount of irrigation applied--compose the irrigation model and were developed to reflect patterns in the data collected at Benchmark Farms Study area sites. The model estimated that peak irrigation for all counties in the study area occurred during July with significant irrigation also occurring during May, June, and August. Irwin and Tift were the most irrigated and Schley and Houston were the least irrigated counties in the study area. High irrigation intensity primarily was located along the eastern border of the study area; whereas, low irrigation intensity was located in the southwestern quadrant where ground water was the dominant irrigation source. Crop-level estimates showed sizable variations across crops and considerable uncertainty for all crops other than peanuts and pecans. Counties having the most irrigated acres showed higher variations in annual irrigation than counties having the least irrigated acres. The Benchmark Farms Study model estimates were higher than previous irrigation estimates, with 20 percent of the bias a result of underestimating irrigation acreage in earlier studies. Model estimates showed evidence of an upward bias of about 15 percent with the likely cause being a misrepresented inches-applied model. A better understanding of the causes of bias in the model could be determined with a larger irrigation sample size and increased substantially by automating the reporting of monthly totalizer amounts.
An international dosimetry exchange for BNCT part II: computational dosimetry normalizations.
Riley, K J; Binns, P J; Harling, O K; Albritton, J R; Kiger, W S; Rezaei, A; Sköld, K; Seppälä, T; Savolainen, S; Auterinen, I; Marek, M; Viererbl, L; Nievaart, V A; Moss, R L
2008-12-01
The meaningful sharing and combining of clinical results from different centers in the world performing boron neutron capture therapy (BNCT) requires improved precision in dose specification between programs. To this end absorbed dose normalizations were performed for the European clinical centers at the Joint Research Centre of the European Commission, Petten (The Netherlands), Nuclear Research Institute, Rez (Czech Republic), VTT, Espoo (Finland), and Studsvik, Nyköping (Sweden). Each European group prepared a treatment plan calculation that was bench-marked against Massachusetts Institute of Technology (MIT) dosimetry performed in a large, water-filled phantom to uniformly evaluate dose specifications with an estimated precision of +/-2%-3%. These normalizations were compared with those derived from an earlier exchange between Brookhaven National Laboratory (BNL) and MIT in the USA. Neglecting the uncertainties related to biological weighting factors, large variations between calculated and measured dose are apparent that depend upon the 10B uptake in tissue. Assuming a boron concentration of 15 microg g(-1) in normal tissue, differences in the evaluated maximum dose to brain for the same nominal specification of 10 Gy(w) at the different facilities range between 7.6 and 13.2 Gy(w) in the trials using boronophenylalanine (BPA) as the boron delivery compound and between 8.9 and 11.1 Gy(w) in the two boron sulfhydryl (BSH) studies. Most notably, the value for the same specified dose of 10 Gy(w) determined at the different participating centers using BPA is significantly higher than at BNL by 32% (MIT), 43% (VTT), 49% (JRC), and 74% (Studsvik). Conversion of dose specification is now possible between all active participants and should be incorporated into future multi-center patient analyses.
Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry.
Sohrabpour, M; Hassanzadeh, M; Shahriari, M; Sharifzadeh, M
2002-10-01
The Monte Carlo transport code, MCNP, has been applied in simulating dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose-ratios, and efficiencies have been simulated as functions of product density. Simulated isodose curves, and cumulative dose values were compared with dosimetry values obtained using polymethyle-methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The produced system design data were also found to agree quite favorably with those of the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling of various dose mapping excercises for gamma irradiators.
Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.
Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon
2015-01-01
Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylases (HDACs) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.
Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening
Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon
2014-01-01
Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artificial enrichment” and “false negative”. In addition, we introduced our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The Leave-One-Out Cross-Validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased in terms of property matching, ROC curves and AUCs. PMID:25481478
Bercu, Joel P; Jolly, Robert A; Flagella, Kelly M; Baker, Thomas K; Romero, Pedro; Stevens, James L
2010-12-01
In order to determine a threshold for nongenotoxic carcinogens, the traditional risk assessment approach has been to identify a mode of action (MOA) with a nonlinear dose-response. The dose-response for one or more key event(s) linked to the MOA for carcinogenicity allows a point of departure (POD) to be selected from the most sensitive effect dose or no-effect dose. However, this can be challenging because multiple MOAs and key events may exist for carcinogenicity and oftentimes extensive research is required to elucidate the MOA. In the present study, a microarray analysis was conducted to determine if a POD could be identified following short-term oral rat exposure with two nongenotoxic rodent carcinogens, fenofibrate and methapyrilene, using a benchmark dose analysis of genes aggregated in Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways and Gene Ontology (GO) biological processes, which likely encompass key event(s) for carcinogenicity. The gene expression response for fenofibrate given to rats for 2days was consistent with its MOA and known key events linked to PPARα activation. The temporal response from daily dosing with methapyrilene demonstrated biological complexity with waves of pathways/biological processes occurring over 1, 3, and 7days; nonetheless, the benchmark dose values were consistent over time. When comparing the dose-response of toxicogenomic data to tumorigenesis or precursor events, the toxicogenomics POD was slightly below any effect level. Our results suggest that toxicogenomic analysis using short-term studies can be used to identify a threshold for nongenotoxic carcinogens based on evaluation of potential key event(s) which then can be used within a risk assessment framework. Copyright © 2010 Elsevier Inc. All rights reserved.
A new numerical benchmark of a freshwater lens
NASA Astrophysics Data System (ADS)
Stoeckl, L.; Walther, M.; Graf, T.
2016-04-01
A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.
Benchmarking the Integration of WAVEWATCH III Results into HAZUS-MH: Preliminary Results
NASA Technical Reports Server (NTRS)
Berglund, Judith; Holland, Donald; McKellip, Rodney; Sciaudone, Jeff; Vickery, Peter; Wang, Zhanxian; Ying, Ken
2005-01-01
The report summarizes the results from the preliminary benchmarking activities associated with the use of WAVEWATCH III (WW3) results in the HAZUS-MH MR1 flood module. Project partner Applied Research Associates (ARA) is integrating the WW3 model into HAZUS. The current version of HAZUS-MH predicts loss estimates from hurricane-related coastal flooding by using values of surge only. Using WW3, wave setup can be included with surge. Loss estimates resulting from the use of surge-only and surge-plus-wave-setup were compared. This benchmarking study is preliminary because the HAZUS-MH MR1 flood module was under development at the time of the study. In addition, WW3 is not scheduled to be fully integrated with HAZUS-MH and available for public release until 2008.
Benchmarking routine psychological services: a discussion of challenges and methods.
Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick
2014-01-01
Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. To present a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Some limitations concerned significant heterogeneity among data sources, and wide variations in ES and data completeness.
Jang, Cheng-Shin; Liang, Ching-Ping
2018-01-01
Taiwan is surrounded by oceans, and therefore numerous pleasure beaches attract millions of tourists annually to participate in recreational swimming activities. However, impaired water quality because of fecal pollution poses a potential threat to the tourists' health. This study probabilistically characterized the health risks associated with recreational swimming engendered by waterborne enterococci at 13 Taiwanese beaches by using quantitative microbial risk assessment. First, data on enterococci concentrations at coastal beaches monitored by the Taiwan Environmental Protection Administration were reproduced using nonparametric Monte Carlo simulation (MCS). The ingestion volumes of recreational swimming based on uniform and gamma distributions were subsequently determined using MCS. Finally, after the distribution combination of the two parameters, the beta-Poisson dose-response function was employed to quantitatively estimate health risks to recreational swimmers. Moreover, various levels of risk to recreational swimmers were classified and spatially mapped to explore feasible recreational and environmental management strategies at the beaches. The study results revealed that although the health risks associated with recreational swimming did not exceed an acceptable benchmark of 0.019 illnesses daily at all beaches, they approached to this benchmark at certain beaches. Beaches with relatively high risks are located in Northwestern Taiwan owing to the current movements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, G; Wang, L
Purpose: The unintended radiation dose to organs at risk (OAR) can be contributed from imaging guidance procedures as well as from leakage and scatter of therapeutic beams. This study compares the imaging dose with the unintended out-of-field therapeutic dose to patient sensitive organs. Methods: The Monte Carlo EGSnrc user codes, BEAMnrc and DOSXYZnrc, were used to simulate kV X-ray sources from imaging devices as well as the therapeutic IMRT/VMAT beams and to calculate doses to target and OARs on patient treatment planning CT images. The accuracy of the Monte Carlo simulations was benchmarked against measurements in phantoms. The dose-volume histogrammore » was utilized in analyzing the patient organ doses. Results: The dose resulting from Standard Head kV-CBCT scans to bone and soft tissues ranges from 0.7 to 1.1 cGy and from 0.03 to 0.3 cGy, respectively. The dose resulting from Thorax scans on the chest to bone and soft tissues ranges from 1.1 to 1.8 cGy and from 0.3 to 0.6 cGy, respectively. The dose resulting from Pelvis scans on the abdomen to bone and soft tissues range from 3.2 to 4.2 cGy and from 1.2 to 2.2 cGy, respectively. The out-of-field doses to OAR are sensitive to the distance between the treated target and the OAR. For a typical Head-and-Neck IMRT/VMAT treatment the out-of-field doses to eyes are 1–3% of the target dose, or 2–6 cGy per fraction. Conclusion: The imaging doses to OAR are predictable based on the imaging protocols used when OARs are within the imaged volume and can be estimated and accounted for by using tabulated values. The unintended out-of-field doses are proportional to the target dose, strongly depend on the distance between the treated target and OAR, and are generally higher comparing to the imaging dose. This work was partially supported by Varian research grant VUMC40590.« less
Review of the GMD Benchmark Event in TPL-007-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backhaus, Scott N.; Rivera, Michael Kelly
2015-07-21
Los Alamos National Laboratory (LANL) examined the approaches suggested in NERC Standard TPL-007-1 for defining the geo-electric field for the Benchmark Geomagnetic Disturbance (GMD) Event. Specifically; 1. Estimating 100-year exceedance geo-electric field magnitude; The scaling of the GMD Benchmark Event to geomagnetic latitudes below 60 degrees north; and 3. The effect of uncertainties in earth conductivity data on the conversion from geomagnetic field to geo-electric field. This document summarizes the review and presents recommendations for consideration
Strategies for Estimating Discrete Quantities.
ERIC Educational Resources Information Center
Crites, Terry W.
1993-01-01
Describes the benchmark and decomposition-recomposition estimation strategies and presents five techniques to develop students' estimation ability. Suggests situations involving quantities of candy and popcorn in which the teacher can model those strategies for the students. (MDH)
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2018-01-01
The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology. PMID:28079526
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
NASA Astrophysics Data System (ADS)
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2017-03-01
The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
Lachenmeier, Dirk W; Steffen, Christian; el-Atma, Oliver; Maixner, Sibylle; Löbell-Behrends, Sigrid; Kohl-Himmelseher, Matthias
2012-11-01
The decision criterion for the demarcation between foods and medicinal products in the EU is the significant "pharmacological action". Based on six examples of substances with ambivalent status, the benchmark dose (BMD) method is evaluated to provide a threshold for pharmacological action. Using significant dose-response models from literature clinical trial data or epidemiology, the BMD values were 63mg/day for caffeine, 5g/day for alcohol, 6mg/day for lovastatin, 769mg/day for glucosamine sulfate, 151mg/day for Ginkgo biloba extract, and 0.4mg/day for melatonin. The examples for caffeine and alcohol validate the approach because intake above BMD clearly exhibits pharmacological action. Nevertheless, due to uncertainties in dose-response modelling as well as the need for additional uncertainty factors to consider differences in sensitivity within the human population, a "borderline range" on the dose-response curve remains. "Pharmacological action" has proven to be not very well suited as binary decision criterion between foods and medicinal product. The European legislator should rethink the definition of medicinal products, as the current situation based on complicated case-by-case decisions on pharmacological action leads to an unregulated market flooded with potentially illegal food supplements. Copyright © 2012 Elsevier Inc. All rights reserved.
Melnick, Daniel; Baez, Juan Carlos; Montecino, Henry; Lagos, Nelson A.; Acuña, Emilio; Manzano, Mario; Camus, Patricio A.
2017-01-01
The April 1st 2014 Iquique earthquake (MW 8.1) occurred along the northern Chile margin where the Nazca plate is subducted below the South American continent. The last great megathrust earthquake here, in 1877 of Mw ~8.8 opened a seismic gap, which was only partly closed by the 2014 earthquake. Prior to the earthquake in 2013, and shortly after it we compared data from leveled benchmarks, deployed campaign GPS instruments, continuous GPS stations and estimated sea levels using the upper vertical level of rocky shore benthic organisms including algae, barnacles, and mussels. Land-level changes estimated from mean elevations of benchmarks indicate subsidence along a ~100-km stretch of coast, ranging from 3 to 9 cm at Corazones (18°30’S) to between 30 and 50 cm at Pisagua (19°30’S). About 15 cm of uplift was measured along the southern part of the rupture at Chanabaya (20°50’S). Land-level changes obtained from benchmarks and campaign GPS were similar at most sites (mean difference 3.7±3.2 cm). Higher differences however, were found between benchmarks and continuous GPS (mean difference 8.5±3.6 cm), possibly because sites were not collocated and separated by several kilometers. Subsidence estimated from the upper limits of intertidal fauna at Pisagua ranged between 40 to 60 cm, in general agreement with benchmarks and GPS. At Chanavaya, the magnitude and sense of displacement of the upper marine limit was variable across species, possibly due to species—dependent differences in ecology. Among the studied species, measurements on lithothamnioid calcareous algae most closely matched those made with benchmarks and GPS. When properly calibrated, rocky shore benthic species may be used to accurately measure land-level changes along coasts affected by subduction earthquakes. Our calibration of those methods will improve their accuracy when applied to coasts lacking pre-earthquake data and in estimating deformation during pre–instrumental earthquakes. PMID:28333998
Benchmark solutions for the galactic heavy-ion transport equations with energy and spatial coupling
NASA Technical Reports Server (NTRS)
Ganapol, Barry D.; Townsend, Lawrence W.; Lamkin, Stanley L.; Wilson, John W.
1991-01-01
Nontrivial benchmark solutions are developed for the galactic heavy ion transport equations in the straightahead approximation with energy and spatial coupling. Analytical representations of the ion fluxes are obtained for a variety of sources with the assumption that the nuclear interaction parameters are energy independent. The method utilizes an analytical LaPlace transform inversion to yield a closed form representation that is computationally efficient. The flux profiles are then used to predict ion dose profiles, which are important for shield design studies.
NASA Technical Reports Server (NTRS)
Ganapol, Barry D.; Townsend, Lawrence W.; Wilson, John W.
1989-01-01
Nontrivial benchmark solutions are developed for the galactic ion transport (GIT) equations in the straight-ahead approximation. These equations are used to predict potential radiation hazards in the upper atmosphere and in space. Two levels of difficulty are considered: (1) energy independent, and (2) spatially independent. The analysis emphasizes analytical methods never before applied to the GIT equations. Most of the representations derived have been numerically implemented and compared to more approximate calculations. Accurate ion fluxes are obtained (3 to 5 digits) for nontrivial sources. For monoenergetic beams, both accurate doses and fluxes are found. The benchmarks presented are useful in assessing the accuracy of transport algorithms designed to accommodate more complex radiation protection problems. In addition, these solutions can provide fast and accurate assessments of relatively simple shield configurations.
Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi
2008-12-01
The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
NASA Astrophysics Data System (ADS)
Lee, Yi-Kang
2017-09-01
Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimation of the neutron activation products distributed in the reactor structure materials impacts obviously on the decommissioning planning and the low-level radioactive waste management. Continuous energy Monte-Carlo radiation transport code TRIPOLI-4 has been applied on radiation protection and shielding analyses. To enhance the TRIPOLI-4 application in nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the knowledge of 3D neutron flux map and energy spectra must be first investigated. To perform this type of neutron deep penetration calculations with the Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimation. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark database, a simplified NAIADE 1 water shielding model was first proposed in this work in order to make the code validation easier. Determination of the fission neutron transport was performed in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked. Variance reduction options and their performance were discussed and compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, JM; Samei, E; Departments of Physics, Electrical and Computer Engineering, and Biomedical Engineering, and Medical Physics Graduate Program, Duke University, Durham, NC
2016-06-15
Purpose: Recent legislative and accreditation requirements have driven rapid development and implementation of CT radiation dose monitoring solutions. Institutions must determine how to improve quality, safety, and consistency of their clinical performance. The purpose of this work was to design a strategy and meaningful characterization of results from an in-house, clinically-deployed dose monitoring solution. Methods: A dose monitoring platform was designed by our imaging physics group that focused on extracting protocol parameters, dose metrics, and patient demographics and size. Compared to most commercial solutions, which focus on individual exam alerts and global thresholds, the program sought to characterize overall consistencymore » and targeted thresholds based on eight analytic interrogations. Those were based on explicit questions related to protocol application, national benchmarks, protocol and size-specific dose targets, operational consistency, outliers, temporal trends, intra-system variability, and consistent use of electronic protocols. Using historical data since the start of 2013, 95% and 99% intervals were used to establish yellow and amber parameterized dose alert thresholds, respectively, as a function of protocol, scanner, and size. Results: Quarterly reports have been generated for three hospitals for 3 quarters of 2015 totaling 27880, 28502, 30631 exams, respectively. Four adult and two pediatric protocols were higher than external institutional benchmarks. Four protocol dose levels were being inconsistently applied as a function of patient size. For the three hospitals, the minimum and maximum amber outlier percentages were [1.53%,2.28%], [0.76%,1.8%], [0.94%,1.17%], respectively. Compared with the electronic protocols, 10 protocols were found to be used with some inconsistency. Conclusion: Dose monitoring can satisfy requirements with global alert thresholds and patient dose records, but the real value is in optimizing patient-specific protocols, balancing image quality trade-offs that dose-reduction strategies promise, and improving the performance and consistency of a clinical operation. Data plots that capture patient demographics and scanner performance demonstrate that value.« less
Implementation and verification of global optimization benchmark problems
NASA Astrophysics Data System (ADS)
Posypkin, Mikhail; Usov, Alexander
2017-12-01
The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automate the process of generating the value of a function and its' gradient at a given point and the interval estimates of a function and its' gradient on a given box using a single description. Based on this functionality, we have developed a collection of tests for an automatic verification of the proposed benchmarks. The verification has shown that literary sources contain mistakes in the benchmarks description. The library and the test suite are available for download and can be used freely.
Savanna elephant numbers are only a quarter of their expected values
Robson, Ashley S.; Trimble, Morgan J.; Purdon, Andrew; Young-Overton, Kim D.; Pimm, Stuart L.; van Aarde, Rudi J.
2017-01-01
Savannas once constituted the range of many species that human encroachment has now reduced to a fraction of their former distribution. Many survive only in protected areas. Poaching reduces the savanna elephant, even where protected, likely to the detriment of savanna ecosystems. While resources go into estimating elephant populations, an ecological benchmark by which to assess counts is lacking. Knowing how many elephants there are and how many poachers kill is important, but on their own, such data lack context. We collated savanna elephant count data from 73 protected areas across the continent estimated to hold ~50% of Africa’s elephants and extracted densities from 18 broadly stable population time series. We modeled these densities using primary productivity, water availability, and an index of poaching as predictors. We then used the model to predict stable densities given current conditions and poaching for all 73 populations. Next, to generate ecological benchmarks, we predicted such densities for a scenario of zero poaching. Where historical data are available, they corroborate or exceed benchmarks. According to recent counts, collectively, the 73 savanna elephant populations are at 75% of the size predicted based on current conditions and poaching levels. However, populations are at <25% of ecological benchmarks given a scenario of zero poaching (~967,000)—a total deficit of ~730,000 elephants. Populations in 30% of the 73 protected areas were <5% of their benchmarks, and the median current density as a percentage of ecological benchmark across protected areas was just 13%. The ecological context provided by these benchmark values, in conjunction with ongoing census projects, allow efficient targeting of conservation efforts. PMID:28414784
Edler, Lutz; Hart, Andy; Greaves, Peter; Carthew, Philip; Coulet, Myriam; Boobis, Alan; Williams, Gary M; Smith, Benjamin
2014-08-01
This article addresses a number of concepts related to the selection and modelling of carcinogenicity data for the calculation of a Margin of Exposure. It follows up on the recommendations put forward by the International Life Sciences Institute - European branch in 2010 on the application of the Margin of Exposure (MoE) approach to substances in food that are genotoxic and carcinogenic. The aims are to provide practical guidance on the relevance of animal tumour data for human carcinogenic hazard assessment, appropriate selection of tumour data for Benchmark Dose Modelling, and approaches for dealing with the uncertainty associated with the selection of data for modelling and, consequently, the derived Point of Departure (PoD) used to calculate the MoE. Although the concepts outlined in this article are interrelated, the background expertise needed to address each topic varies. For instance, the expertise needed to make a judgement on biological relevance of a specific tumour type is clearly different to that needed to determine the statistical uncertainty around the data used for modelling a benchmark dose. As such, each topic is dealt with separately to allow those with specialised knowledge to target key areas of guidance and provide a more in-depth discussion on each subject for those new to the concept of the Margin of Exposure approach. Copyright © 2013 ILSI Europe. Published by Elsevier Ltd.. All rights reserved.
NASA Astrophysics Data System (ADS)
Bäumer, C.; Janson, M.; Timmermann, B.; Wulff, J.
2018-04-01
To assess if apertures shall be mounted upstream or downstream of a range shifting block if these field-shaping devices are combined with the pencil-beam scanning delivery technique (PBS). The lateral dose fall-off served as a benchmark parameter. Both options realizing PBS-with-apertures were compared to the uniform scanning mode. We also evaluated the difference regarding the out-of-field dose caused by interactions of protons in beam-shaping devices. The potential benefit of the downstream configuration over the upstream configuration was estimated analytically. Guided by this theoretical evaluation a mechanical adapter was developed which transforms the upstream configuration provided by the proton machine vendor to a downstream configuration. Transversal dose profiles were calculated with the Monte-Carlo based dose engine of the commercial treatment planning system RayStation 6. Two-dimensional dose planes were measured with an ionization chamber array and a scintillation detector at different depths and compared to the calculation. Additionally, a clinical example for the irradiation of the orbit was compared for both PBS options and a uniform scanning treatment plan. Assuming the same air gap the lateral dose fall-off at the field edge at a few centimeter depth is 20% smaller for the aperture-downstream configuration than for the upstream one. For both options of PBS-with-apertures the dose fall-off is larger than in uniform scanning delivery mode if the minimum accelerator energy is 100 MeV. The RayStation treatment planning system calculated the width of the lateral dose fall-off with an accuracy of typically 0.1 mm–0.3 mm. Although experiments and calculations indicate a ranking of the three delivery options regarding lateral dose fall-off, there seems to be a limited impact on a multi-field treatment plan.
Using chemical benchmarking to determine the persistence of chemicals in a Swedish lake.
Zou, Hongyan; Radke, Michael; Kierkegaard, Amelie; MacLeod, Matthew; McLachlan, Michael S
2015-02-03
It is challenging to measure the persistence of chemicals under field conditions. In this work, two approaches for measuring persistence in the field were compared: the chemical mass balance approach, and a novel chemical benchmarking approach. Ten pharmaceuticals, an X-ray contrast agent, and an artificial sweetener were studied in a Swedish lake. Acesulfame K was selected as a benchmark to quantify persistence using the chemical benchmarking approach. The 95% confidence intervals of the half-life for transformation in the lake system ranged from 780-5700 days for carbamazepine to <1-2 days for ketoprofen. The persistence estimates obtained using the benchmarking approach agreed well with those from the mass balance approach (1-21% difference), indicating that chemical benchmarking can be a valid and useful method to measure the persistence of chemicals under field conditions. Compared to the mass balance approach, the benchmarking approach partially or completely eliminates the need to quantify mass flow of chemicals, so it is particularly advantageous when the quantification of mass flow of chemicals is difficult. Furthermore, the benchmarking approach allows for ready comparison and ranking of the persistence of different chemicals.
Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data
NASA Astrophysics Data System (ADS)
Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki
2017-09-01
There are many benchmark experiments carried out so far with DT neutrons especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV. However, no precise studies exist now. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity for a discrepancy appearing in the benchmark analysis is "equally" due not only to contribution directly conveyed to the deterctor, but also due to indirect contribution of neutrons (named (A)) making neutrons conveying the contribution, indirect controbution of neutrons (B) making the neutrons (A) and so on. From this concept, it would become clear from a sensitivity analysis in advance how well and which energy nuclear data could be benchmarked with a benchmark experiment.
NASA Astrophysics Data System (ADS)
Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime
2017-09-01
After fission or fusion reactor shutdown the activated structure emits decay photons. For maintenance operations the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. These schemes are widely developed in fusion application and more precisely for the ITER tokamak. This paper presents the rigorous-two-steps scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out, results are in a good agreement with the other participant.
Mojżeszek, N; Farah, J; Kłodowska, M; Ploc, O; Stolarczyk, L; Waligórski, M P R; Olko, P
2017-02-01
To measure the environmental doses from stray neutrons in the vicinity of a solid slab phantom as a function of beam energy, field size and modulation width, using the proton pencil beam scanning (PBS) technique. Measurements were carried out using two extended range WENDI-II rem-counters and three tissue equivalent proportional counters. Detectors were suitably placed at different distances around the RW3 slab phantom. Beam irradiation parameters were varied to cover the clinical ranges of proton beam energies (100-220MeV), field sizes ((2×2)-(20×20)cm 2 ) and modulation widths (0-15cm). For pristine proton peak irradiations, large variations of neutron H ∗ (10)/D were observed with changes in beam energy and field size, while these were less dependent on modulation widths. H ∗ (10)/D for pristine proton pencil beams varied between 0.04μSvGy -1 at beam energy 100MeV and a (2×2)cm 2 field at 2.25m distance and 90° angle with respect to the beam axis, and 72.3μSvGy -1 at beam energy 200MeV and a (20×20) cm 2 field at 1m distance along the beam axis. The obtained results will be useful in benchmarking Monte Carlo calculations of proton radiotherapy in PBS mode and in estimating the exposure to stray radiation of the patient. Such estimates may be facilitated by the obtained best-fitted simple analytical formulae relating the stray neutron doses at points of interest with beam irradiation parameters. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA.
A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA. PMID:24892059
Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael
2014-05-01
Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.
BIOREL: the benchmark resource to estimate the relevance of the gene networks.
Antonov, Alexey V; Mewes, Hans W
2006-02-06
The progress of high-throughput methodologies in functional genomics has lead to the development of statistical procedures to infer gene networks from various types of high-throughput data. However, due to the lack of common standards, the biological significance of the results of the different studies is hard to compare. To overcome this problem we propose a benchmark procedure and have developed a web resource (BIOREL), which is useful for estimating the biological relevance of any genetic network by integrating different sources of biological information. The associations of each gene from the network are classified as biologically relevant or not. The proportion of genes in the network classified as "relevant" is used as the overall network relevance score. Employing synthetic data we demonstrated that such a score ranks the networks fairly in respect to the relevance level. Using BIOREL as the benchmark resource we compared the quality of experimental and theoretically predicted protein interaction data.
Solution of the neutronics code dynamic benchmark by finite element method
NASA Astrophysics Data System (ADS)
Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.
2016-10-01
The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by condensing calculation grid and varying degree of finite elements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowhurst, James A, E-mail: jimcrowhurst@hotmail.com; School of Medicine, University of Queensland, St. Lucia, Brisbane, Queensland; Whitby, Mark
Radiation dose to patients undergoing invasive coronary angiography (ICA) is relatively high. Guidelines suggest that a local benchmark or diagnostic reference level (DRL) be established for these procedures. This study sought to create a DRL for ICA procedures in Queensland public hospitals. Data were collected for all Cardiac Catheter Laboratories in Queensland public hospitals. Data were collected for diagnostic coronary angiography (CA) and single-vessel percutaneous intervention (PCI) procedures. Dose area product (P{sub KA}), skin surface entrance dose (K{sub AR}), fluoroscopy time (FT), and patient height and weight were collected for 3 months. The DRL was set from the 75th percentilemore » of the P{sub KA.} 2590 patients were included in the CA group where the median FT was 3.5 min (inter-quartile range = 2.3–6.1). Median K{sub AR} = 581 mGy (374–876). Median P{sub KA} = 3908 uGym{sup 2} (2489–5865) DRL = 5865 uGym{sup 2}. 947 patients were included in the PCI group where median FT was 11.2 min (7.7–17.4). Median K{sub AR} = 1501 mGy (928–2224). Median P{sub KA} = 8736 uGym{sup 2} (5449–12,900) DRL = 12,900 uGym{sup 2}. This study established a benchmark for radiation dose for diagnostic and interventional coronary angiography in Queensland public facilities.« less
Kawamoto, Taisuke; Ito, Yuichi; Morita, Osamu; Honda, Hiroshi
2017-01-01
Cholestasis is one of the major causes of drug-induced liver injury (DILI), which can result in withdrawal of approved drugs from the market. Early identification of cholestatic drugs is difficult due to the complex mechanisms involved. In order to develop a strategy for mechanism-based risk assessment of cholestatic drugs, we analyzed gene expression data obtained from the livers of rats that had been orally administered with 12 known cholestatic compounds repeatedly for 28 days at three dose levels. Qualitative analyses were performed using two statistical approaches (hierarchical clustering and principle component analysis), in addition to pathway analysis. The transcriptional benchmark dose (tBMD) and tBMD 95% lower limit (tBMDL) were used for quantitative analyses, which revealed three compound sub-groups that produced different types of differential gene expression; these groups of genes were mainly involved in inflammation, cholesterol biosynthesis, and oxidative stress. Furthermore, the tBMDL values for each test compound were in good agreement with the relevant no observed adverse effect level. These results indicate that our novel strategy for drug safety evaluation using mechanism-based classification and tBMDL would facilitate the application of toxicogenomics for risk assessment of cholestatic DILI.
Dose and Effect Thresholds for Early Key Events in a Mode of ...
ABSTRACT Strategies for predicting adverse health outcomes of environmental chemicals are centered on early key events in toxicity pathways. However, quantitative relationships between early molecular changes in a given pathway and later health effects are often poorly defined. The goal of this study was to evaluate short-term key event indicators using qualitative and quantitative methods in an established pathway of mouse liver tumorigenesis mediated by peroxisome proliferator-activated receptor-alpha (PPARα). Male B6C3F1 mice were exposed for 7 days to di(2-ethylhexyl) phthalate (DEHP), di-n-octyl phthalate (DNOP), and n-butyl benzyl phthalate (BBP), which vary in PPARα activity and liver tumorigenicity. Each phthalate increased expression of select PPARα target genes at 7 days, while only DEHP significantly increased liver cell proliferation labeling index (LI). Transcriptional benchmark dose (BMDT) estimates for dose-related genomic markers stratified phthalates according to hypothetical tumorigenic potencies, unlike BMDs for non-genomic endpoints (liver weights or proliferation). The 7-day BMDT values for Acot1 as a surrogate measure for PPARα activation were 29, 370, and 676 mg/kg-d for DEHP, DNOP, and BBP, respectively, distinguishing DEHP (liver tumor BMD of 35 mg/kg-d) from non-tumorigenic DNOP and BBP. Effect thresholds were generated using linear regression of DEHP effects at 7 days and 2-year tumor incidence values to anchor early response molec
Monte Carlo simulations for angular and spatial distributions in therapeutic-energy proton beams
NASA Astrophysics Data System (ADS)
Lin, Yi-Chun; Pan, C. Y.; Chiang, K. J.; Yuan, M. C.; Chu, C. H.; Tsai, Y. W.; Teng, P. K.; Lin, C. H.; Chao, T. C.; Lee, C. C.; Tung, C. J.; Chen, A. E.
2017-11-01
The purpose of this study is to compare the angular and spatial distributions of therapeutic-energy proton beams obtained from the FLUKA, GEANT4 and MCNP6 Monte Carlo codes. The Monte Carlo simulations of proton beams passing through two thin targets and a water phantom were investigated to compare the primary and secondary proton fluence distributions and dosimetric differences among these codes. The angular fluence distributions, central axis depth-dose profiles, and lateral distributions of the Bragg peak cross-field were calculated to compare the proton angular and spatial distributions and energy deposition. Benchmark verifications from three different Monte Carlo simulations could be used to evaluate the residual proton fluence for the mean range and to estimate the depth and lateral dose distributions and the characteristic depths and lengths along the central axis as the physical indices corresponding to the evaluation of treatment effectiveness. The results showed a general agreement among codes, except that some deviations were found in the penumbra region. These calculated results are also particularly helpful for understanding primary and secondary proton components for stray radiation calculation and reference proton standard determination, as well as for determining lateral dose distribution performance in proton small-field dosimetry. By demonstrating these calculations, this work could serve as a guide to the recent field of Monte Carlo methods for therapeutic-energy protons.
Polonium-210 in marine mussels (bivalve molluscs) inhabiting the southern coast of India.
Khan, M Feroz; Wesley, S Godwin; Rajan, M P
2014-12-01
The present study focused on the determination of the alpha-emitter, (210)Po, in two species of marine mussels (bivalve molluscs) commonly available in the southern coastal region of India. The brown mussel, Perna indica was collected from the west coast and the green mussel, Perna viridis from the east coast. The concentration of (210)Po was related to the allometry (length of shell, wet/dry weight of shell/soft tissue) of the mussels and significant results were found. The study period focused on three seasons namely, pre-monsoon, monsoon and post-monsoon for a 1-year period (2010-2011). The results revealed higher activity levels in smaller-sized mussels compared to larger ones. Marked variation in (210)Po activity concentration was noted in the whole-body soft tissues between seasons and sampling site (p < 0.05). The dose rate assessment for mussels was performed using the ERICA Assessment tool. The chronic exposure to mussels due to (210)Po was found to be lesser than the global benchmark dose rate of 10 μGy h(-1). The effective ingestion dose to adults who intake mussels was estimated to be in the range 5.1-34.9 μSv y(-1). The measurement contributes to the furthering of knowledge of (210)Po, since no data exist in this region. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, J.M.; Mole, M.L.; Chernoff, N.
1993-01-01
Pregnant CD-1 mice were exposed to 1,000, 2,000, 5,000, 7,500, 10,000, or 15,000 ppm on methanol for 7 hr/day on days 6-15 of gestation. On day 17 of gestation, remaining mice were weighed, killed and the gravid uterus was removed. Numbers of implantation sites, live and dead fetuses and resorptions were counted, and fetuses were examined externally and weighed as a litter. Significant increases in the incidence of exencephaly and cleft palate were observed at 5,000 ppm and above, increased postimplantation mortality at 7,500 ppm and above (including an increasing incidence of full-litter resorption), and reduced fetal weight at 10,000more » ppm and above. A dose-related increase in cervical ribs or ossification sites lateral to the seventh cervical vertebra was significant at 2,000 ppm and above. Thus, the NOAEL for the developmental toxicity in this study is 1,000 ppm. The results of this study indicate that inhaled methanol is developmentally toxic in the mouse at exposure levels which were not maternally toxic. Litters of pregnant mice gavaged orally with 4 g methanol/kg displayed developmental toxic effects similar to those seen in the 10,000 ppm methanol exposure group. (Copyright (c) 1993 Wiley-Liss, Inc.)« less
Contributions to Integral Nuclear Data in ICSBEP and IRPhEP since ND 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Briggs, J. Blair; Gulliford, Jim
2016-09-01
The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the international nuclear data community at ND2013. Since ND2013, integral benchmark data that are available for nuclear data testing has continued to increase. The status of the international benchmark efforts and the latest contributions to integral nuclear data for testing is discussed. Select benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2013 are highlighted. The 2015 edition of the ICSBEP Handbook now contains 567 evaluations with benchmark specifications for 4,874more » critical, near-critical, or subcritical configurations, 31 criticality alarm placement/shielding configuration with multiple dose points apiece, and 207 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. The 2015 edition of the IRPhEP Handbook contains data from 143 different experimental series that were performed at 50 different nuclear facilities. Currently 139 of the 143 evaluations are published as approved benchmarks with the remaining four evaluations published in draft format only. Measurements found in the IRPhEP Handbook include criticality, buckling and extrapolation length, spectral characteristics, reactivity effects, reactivity coefficients, kinetics, reaction-rate distributions, power distributions, isotopic compositions, and/or other miscellaneous types of measurements for various types of reactor systems. Annual technical review meetings for both projects were held in April 2016; additional approved benchmark evaluations will be included in the 2016 editions of these handbooks.« less
Benchmarking of MCNP for calculating dose rates at an interim storage facility for nuclear waste.
Heuel-Fabianek, Burkhard; Hille, Ralf
2005-01-01
During the operation of research facilities at Research Centre Jülich, Germany, nuclear waste is stored in drums and other vessels in an interim storage building on-site, which has a concrete shielding at the side walls. Owing to the lack of a well-defined source, measured gamma spectra were unfolded to determine the photon flux on the surface of the containers. The dose rate simulation, including the effects of skyshine, using the Monte Carlo transport code MCNP is compared with the measured dosimetric data at some locations in the vicinity of the interim storage building. The MCNP data for direct radiation confirm the data calculated using a point-kernel method. However, a comparison of the modelled dose rates for direct radiation and skyshine with the measured data demonstrate the need for a more precise definition of the source. Both the measured and the modelled dose rates verified the fact that the legal limits (<1 mSv a(-1)) are met in the area outside the perimeter fence of the storage building to which members of the public have access. Using container surface data (gamma spectra) to define the source may be a useful tool for practical calculations and additionally for benchmarking of computer codes if the discussed critical aspects with respect to the source can be addressed adequately.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-09
... transactions with foreign persons. In nonbenchmark years, the universe estimates covering these transactions... nonbenchmark years, the universe estimates covering these transactions would be derived from the sample data...
U.S. EPA Superfund Program's Policy for Risk and Dose Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Stuart
2008-01-15
The Environmental Protection Agency (EPA) Office of Superfund Remediation and Technology Innovation (OSRTI) has primary responsibility for implementing the long-term (non-emergency) portion of a key U.S. law regulating cleanup: the Comprehensive Environmental Response, Compensation and Liability Act, CERCLA, nicknamed 'Superfund'. The purpose of the Superfund program is to protect human health and the environment over the long term from releases or potential releases of hazardous substances from abandoned or uncontrolled hazardous waste sites. The focus of this paper is on risk and dose assessment policies and tools for addressing radioactively contaminated sites by the Superfund program. EPA has almost completedmore » two risk assessment tools that are particularly relevant to decommissioning activities conducted under CERCLA authority. These are the: 1. Building Preliminary Remediation Goals for Radionuclides (BPRG) electronic calculator, and 2. Radionuclide Outdoor Surfaces Preliminary Remediation Goals (SPRG) electronic calculator. EPA developed the BPRG calculator to help standardize the evaluation and cleanup of radiologically contaminated buildings at which risk is being assessed for occupancy. BPRGs are radionuclide concentrations in dust, air and building materials that correspond to a specified level of human cancer risk. The intent of SPRG calculator is to address hard outside surfaces such as building slabs, outside building walls, sidewalks and roads. SPRGs are radionuclide concentrations in dust and hard outside surface materials. EPA is also developing the 'Radionuclide Ecological Benchmark' calculator. This calculator provides biota concentration guides (BCGs), also known as ecological screening benchmarks, for use in ecological risk assessments at CERCLA sites. This calculator is intended to develop ecological benchmarks as part of the EPA guidance 'Ecological Risk Assessment Guidance for Superfund: Process for Designing and Conducting Ecological Risk Assessments'. The calculator develops ecological benchmarks for ionizing radiation based on cell death only.« less
NASA Astrophysics Data System (ADS)
Viereck, R. A.; Azeem, S. I.
2017-12-01
One of the goals of the National Space Weather Action Plan is to establish extreme event benchmarks. These benchmarks are estimates of environmental parameters that impact technologies and systems during extreme space weather events. Quantitative assessment of anticipated conditions during these extreme space weather event will enable operators and users of affected technologies to develop plans for mitigating space weather risks and improve preparedness. The ionosphere is one of the most important regions of space because so many applications either depend on ionospheric space weather for their operation (HF communication, over-the-horizon radars), or can be deleteriously affected by ionospheric conditions (e.g. GNSS navigation and timing, UHF satellite communications, synthetic aperture radar, HF communications). Since the processes that influence the ionosphere vary over time scales from seconds to years, it continues to be a challenge to adequately predict its behavior in many circumstances. Estimates with large uncertainties, in excess of 100%, may result in operators of impacted technologies over or under preparing for such events. The goal of the next phase of the benchmarking activity is to reduce these uncertainties. In this presentation, we will focus on the sources of uncertainty in the ionospheric response to extreme geomagnetic storms. We will then discuss various research efforts required to better understand the underlying processes of ionospheric variability and how the uncertainties in ionospheric response to extreme space weather could be reduced and the estimates improved.
Dean, Jeffry L; Zhao, Q Jay; Lambert, Jason C; Hawkins, Belinda S; Thomas, Russell S; Wesselkamper, Scott C
2017-05-01
The rate of new chemical development in commerce combined with a paucity of toxicity data for legacy chemicals presents a unique challenge for human health risk assessment. There is a clear need to develop new technologies and incorporate novel data streams to more efficiently inform derivation of toxicity values. One avenue of exploitation lies in the field of transcriptomics and the application of gene expression analysis to characterize biological responses to chemical exposures. In this context, gene set enrichment analysis (GSEA) was employed to evaluate tissue-specific, dose-response gene expression data generated following exposure to multiple chemicals for various durations. Patterns of transcriptional enrichment were evident across time and with increasing dose, and coordinated enrichment plausibly linked to the etiology of the biological responses was observed. GSEA was able to capture both transient and sustained transcriptional enrichment events facilitating differentiation between adaptive versus longer term molecular responses. When combined with benchmark dose (BMD) modeling of gene expression data from key drivers of biological enrichment, GSEA facilitated characterization of dose ranges required for enrichment of biologically relevant molecular signaling pathways, and promoted comparison of the activation dose ranges required for individual pathways. Median transcriptional BMD values were calculated for the most sensitive enriched pathway as well as the overall median BMD value for key gene members of significantly enriched pathways, and both were observed to be good estimates of the most sensitive apical endpoint BMD value. Together, these efforts support the application of GSEA to qualitative and quantitative human health risk assessment. Published by Oxford University Press on behalf of the Society of Toxicology 2017. This work is written by US Government employees and is in the public domain in the US.
A chronic oral reference dose for hexavalent chromium-induced intestinal cancer†
Thompson, Chad M; Kirman, Christopher R; Proctor, Deborah M; Haws, Laurie C; Suh, Mina; Hays, Sean M; Hixon, J Gregory; Harris, Mark A
2014-01-01
High concentrations of hexavalent chromium [Cr(VI)] in drinking water induce villous cytotoxicity and compensatory crypt hyperplasia in the small intestines of mice (but not rats). Lifetime exposure to such cytotoxic concentrations increases intestinal neoplasms in mice, suggesting that the mode of action for Cr(VI)-induced intestinal tumors involves chronic wounding and compensatory cell proliferation of the intestine. Therefore, we developed a chronic oral reference dose (RfD) designed to be protective of intestinal damage and thus intestinal cancer. A physiologically based pharmacokinetic model for chromium in mice was used to estimate the amount of Cr(VI) entering each intestinal tissue section (duodenum, jejunum and ileum) from the lumen per day (normalized to intestinal tissue weight). These internal dose metrics, together with corresponding incidences for diffuse hyperplasia, were used to derive points of departure using benchmark dose modeling and constrained nonlinear regression. Both modeling techniques resulted in similar points of departure, which were subsequently converted to human equivalent doses using a human physiologically based pharmacokinetic model. Applying appropriate uncertainty factors, an RfD of 0.006 mg kg–1 day–1 was derived for diffuse hyperplasia—an effect that precedes tumor formation. This RfD is protective of both noncancer and cancer effects in the small intestine and corresponds to a safe drinking water equivalent level of 210 µg l–1. This concentration is higher than the current federal maximum contaminant level for total Cr (100 µg l–1) and well above levels of Cr(VI) in US drinking water supplies (typically ≤ 5 µg l–1). © 2013 The Authors. Journal of Applied Toxicology published by John Wiley & Sons, Ltd. PMID:23943231
Practical examples of modeling choices and their consequences for risk assessment
Although benchmark dose (BMD) modeling has become the preferred approach to identifying a point of departure (POD) over the No Observed Adverse Effect Level, there remain challenges to its application in human health risk assessment. BMD modeling, as currently implemented by the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarabek, A.M.; Menache, M.G.; Overton, J.H. Jr.
1990-10-01
The U.S. Environmental Protection Agency (U.S. EPA) has advocated the establishment of general and scientific guidelines for the evaluation of toxicological data and their use in deriving benchmark values to protect exposed populations from adverse health effects. The Agency's reference dose (RfD) methodology for deriving benchmark values for noncancer toxicity originally addressed risk assessment of oral exposures. This paper presents a brief background on the development of the inhalation reference dose (RfDi) methodology, including concepts and issues related to addressing the dynamics of the respiratory system as the portal of entry. Different dosimetric adjustments are described that were incorporated intomore » the methodology to account for the nature of the inhaled agent (particle or gas) and the site of the observed toxic effects (respiratory or extra-respiratory). Impacts of these adjustments on the extrapolation of toxicity data of inhaled agents for human health risk assessment and future research directions are also discussed.« less
U. S. Environmental Protection Agency's inhalation RFD methodology: Risk assessment for air toxics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarabek, A.M.; Menache, M.G.; Overton, J.H.
1989-01-01
The U.S. Environmental Protection Agency (U.S. EPA) has advocated the establishment of general and scientific guidelines for the evaluation of toxicological data and their use in deriving benchmark values to protect exposed populations from adverse health effects. The Agency's reference dose (RfD) methodology for deriving benchmark values for noncancer toxicity originally addressed risk assessment of oral exposures. The paper presents a brief background on the development of the inhalation reference dose (RFDi) methodology, including concepts and issues related to addressing the dynamics of the respiratory system as the portal of entry. Different dosimetric adjustments are described that were incorporated intomore » the methodology to account for the nature of the inhaled agent (particle or gas) and the site of the observed toxic effects (respiratory or extrarespiratory). Impacts of these adjustments on the extrapolation of toxicity data of inhaled agents for human health risk assessment and future research directions are also discussed.« less
Lee, Kam L; Bernardo, Michael; Ireland, Timothy A
2016-06-01
This is part two of a two-part study in benchmarking system performance of fixed digital radiographic systems. The study compares the system performance of seven fixed digital radiography systems based on quantitative metrics like modulation transfer function (sMTF), normalised noise power spectrum (sNNPS), detective quantum efficiency (sDQE) and entrance surface air kerma (ESAK). It was found that the most efficient image receptors (greatest sDQE) were not necessarily operating at the lowest ESAK. In part one of this study, sMTF is shown to depend on system configuration while sNNPS is shown to be relatively consistent across systems. Systems are ranked on their signal-to-noise ratio efficiency (sDQE) and their ESAK. Systems using the same equipment configuration do not necessarily have the same system performance. This implies radiographic practice at the site will have an impact on the overall system performance. In general, systems are more dose efficient at low dose settings.
Faddegon, Bruce A.; Shin, Jungwook; Castenada, Carlos M.; Ramos-Méndez, José; Daftari, Inder K.
2015-01-01
Purpose: To measure depth dose curves for a 67.5 ± 0.1 MeV proton beam for benchmarking and validation of Monte Carlo simulation. Methods: Depth dose curves were measured in 2 beam lines. Protons in the raw beam line traversed a Ta scattering foil, 0.1016 or 0.381 mm thick, a secondary emission monitor comprised of thin Al foils, and a thin Kapton exit window. The beam energy and peak width and the composition and density of material traversed by the beam were known with sufficient accuracy to permit benchmark quality measurements. Diodes for charged particle dosimetry from two different manufacturers were used to scan the depth dose curves with 0.003 mm depth reproducibility in a water tank placed 300 mm from the exit window. Depth in water was determined with an uncertainty of 0.15 mm, including the uncertainty in the water equivalent depth of the sensitive volume of the detector. Parallel-plate chambers were used to verify the accuracy of the shape of the Bragg peak and the peak-to-plateau ratio measured with the diodes. The uncertainty in the measured peak-to-plateau ratio was 4%. Depth dose curves were also measured with a diode for a Bragg curve and treatment beam spread out Bragg peak (SOBP) on the beam line used for eye treatment. The measurements were compared to Monte Carlo simulation done with geant4 using topas. Results: The 80% dose at the distal side of the Bragg peak for the thinner foil was at 37.47 ± 0.11 mm (average of measurement with diodes from two different manufacturers), compared to the simulated value of 37.20 mm. The 80% dose for the thicker foil was at 35.08 ± 0.15 mm, compared to the simulated value of 34.90 mm. The measured peak-to-plateau ratio was within one standard deviation experimental uncertainty of the simulated result for the thinnest foil and two standard deviations for the thickest foil. It was necessary to include the collimation in the simulation, which had a more pronounced effect on the peak-to-plateau ratio for the thicker foil. The treatment beam, being unfocussed, had a broader Bragg peak than the raw beam. A 1.3 ± 0.1 MeV FWHM peak width in the energy distribution was used in the simulation to match the Bragg peak width. An additional 1.3–2.24 mm of water in the water column was required over the nominal values to match the measured depth penetration. Conclusions: The proton Bragg curve measured for the 0.1016 mm thick Ta foil provided the most accurate benchmark, having a low contribution of proton scatter from upstream of the water tank. The accuracy was 0.15% in measured beam energy and 0.3% in measured depth penetration at the Bragg peak. The depth of the distal edge of the Bragg peak in the simulation fell short of measurement, suggesting that the mean ionization potential of water is 2–5 eV higher than the 78 eV used in the stopping power calculation for the simulation. The eye treatment beam line depth dose curves provide validation of Monte Carlo simulation of a Bragg curve and SOBP with 4%/2 mm accuracy. PMID:26133619
Deep reinforcement learning for automated radiation adaptation in lung cancer.
Tseng, Huan-Hsin; Luo, Yi; Cui, Sunan; Chien, Jen-Tzung; Ten Haken, Randall K; Naqa, Issam El
2017-12-01
To investigate deep reinforcement learning (DRL) based on historical treatment plans for developing automated radiation adaptation protocols for nonsmall cell lung cancer (NSCLC) patients that aim to maximize tumor local control at reduced rates of radiation pneumonitis grade 2 (RP2). In a retrospective population of 114 NSCLC patients who received radiotherapy, a three-component neural networks framework was developed for deep reinforcement learning (DRL) of dose fractionation adaptation. Large-scale patient characteristics included clinical, genetic, and imaging radiomics features in addition to tumor and lung dosimetric variables. First, a generative adversarial network (GAN) was employed to learn patient population characteristics necessary for DRL training from a relatively limited sample size. Second, a radiotherapy artificial environment (RAE) was reconstructed by a deep neural network (DNN) utilizing both original and synthetic data (by GAN) to estimate the transition probabilities for adaptation of personalized radiotherapy patients' treatment courses. Third, a deep Q-network (DQN) was applied to the RAE for choosing the optimal dose in a response-adapted treatment setting. This multicomponent reinforcement learning approach was benchmarked against real clinical decisions that were applied in an adaptive dose escalation clinical protocol. In which, 34 patients were treated based on avid PET signal in the tumor and constrained by a 17.2% normal tissue complication probability (NTCP) limit for RP2. The uncomplicated cure probability (P+) was used as a baseline reward function in the DRL. Taking our adaptive dose escalation protocol as a blueprint for the proposed DRL (GAN + RAE + DQN) architecture, we obtained an automated dose adaptation estimate for use at ∼2/3 of the way into the radiotherapy treatment course. By letting the DQN component freely control the estimated adaptive dose per fraction (ranging from 1-5 Gy), the DRL automatically favored dose escalation/de-escalation between 1.5 and 3.8 Gy, a range similar to that used in the clinical protocol. The same DQN yielded two patterns of dose escalation for the 34 test patients, but with different reward variants. First, using the baseline P+ reward function, individual adaptive fraction doses of the DQN had similar tendencies to the clinical data with an RMSE = 0.76 Gy; but adaptations suggested by the DQN were generally lower in magnitude (less aggressive). Second, by adjusting the P+ reward function with higher emphasis on mitigating local failure, better matching of doses between the DQN and the clinical protocol was achieved with an RMSE = 0.5 Gy. Moreover, the decisions selected by the DQN seemed to have better concordance with patients eventual outcomes. In comparison, the traditional temporal difference (TD) algorithm for reinforcement learning yielded an RMSE = 3.3 Gy due to numerical instabilities and lack of sufficient learning. We demonstrated that automated dose adaptation by DRL is a feasible and a promising approach for achieving similar results to those chosen by clinicians. The process may require customization of the reward function if individual cases were to be considered. However, development of this framework into a fully credible autonomous system for clinical decision support would require further validation on larger multi-institutional datasets. © 2017 American Association of Physicists in Medicine.
Two-dimensional free-surface flow under gravity: A new benchmark case for SPH method
NASA Astrophysics Data System (ADS)
Wu, J. Z.; Fang, L.
2018-02-01
Currently there are few free-surface benchmark cases with analytical results for the Smoothed Particle Hydrodynamics (SPH) simulation. In the present contribution we introduce a two-dimensional free-surface flow under gravity, and obtain an analytical expression on the surface height difference and a theoretical estimation on the surface fractal dimension. They are preliminarily validated and supported by SPH calculations.
Integrated Sensing Processor, Phase 2
2005-12-01
performance analysis for several baseline classifiers including neural nets, linear classifiers, and kNN classifiers. Use of CCDR as a preprocessing step...below the level of the benchmark non-linear classifier for this problem ( kNN ). Furthermore, the CCDR preconditioned kNN achieved a 10% improvement over...the benchmark kNN without CCDR. Finally, we found an important connection between intrinsic dimension estimation via entropic graphs and the optimal
NASA Astrophysics Data System (ADS)
Park, E.; Jeong, J.
2017-12-01
A precise estimation of groundwater fluctuation is studied by considering delayed recharge flux (DRF) and unsaturated zone drainage (UZD). Both DRF and UZD are due to gravitational flow impeded in the unsaturated zone, which may nonnegligibly affect groundwater level changes. In the validation, a previous model without the consideration of unsaturated flow is benchmarked where the actual groundwater level and precipitation data are divided into three periods based on the climatic condition. The estimation capability of the new model is superior to the benchmarked model as indicated by the significantly improved representation of groundwater level with physically interpretable model parameters.
Comment on ‘egs_brachy: a versatile and fast Monte Carlo code for brachytherapy’
NASA Astrophysics Data System (ADS)
Yegin, Gultekin
2018-02-01
In a recent paper (Chamberland et al 2016 Phys. Med. Biol. 61 8214) develop a new Monte Carlo code called egs_brachy for brachytherapy treatments. It is based on EGSnrc, and written in the C++ programming language. In order to benchmark the egs_brachy code, the authors use it in various test case scenarios in which complex geometry conditions exist. Another EGSnrc based brachytherapy dose calculation engine, BrachyDose, is used for dose comparisons. The authors fail to prove that egs_brachy can produce reasonable dose values for brachytherapy sources in a given medium. The dose comparisons in the paper are erroneous and misleading. egs_brachy should not be used in any further research studies unless and until all the potential bugs are fixed in the code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Jeffrey F.
This paper briefly reviews the evolution of brachytherapy dosimetry from 1900 to the present. Dosimetric practices in brachytherapy fall into three distinct eras: During the era of biological dosimetry (1900-1938), radium pioneers could only specify Ra-226 and Rn-222 implants in terms of the mass of radium encapsulated within the implanted sources. Due to the high energy of its emitted gamma rays and the long range of its secondary electrons in air, free-air chambers could not be used to quantify the output of Ra-226 sources in terms of exposure. Biological dosimetry, most prominently the threshold erythema dose, gained currency as amore » means of intercomparing radium treatments with exposure-calibrated orthovoltage x-ray units. The classical dosimetry era (1940-1980) began with successful exposure standardization of Ra-226 sources by Bragg-Gray cavity chambers. Classical dose-computation algorithms, based upon 1-D buildup factor measurements and point-source superposition computational algorithms, were able to accommodate artificial radionuclides such as Co-60, Ir-192, and Cs-137. The quantitative dosimetry era (1980- ) arose in response to the increasing utilization of low energy K-capture radionuclides such as I-125 and Pd-103 for which classical approaches could not be expected to estimate accurate correct doses. This led to intensive development of both experimental (largely TLD-100 dosimetry) and Monte Carlo dosimetry techniques along with more accurate air-kerma strength standards. As a result of extensive benchmarking and intercomparison of these different methods, single-seed low-energy radionuclide dose distributions are now known with a total uncertainty of 3%-5%.« less
Tavakoli, Mohammad Bagher; Reiazi, Reza; Mohammadi, Mohammad Mehdi; Jabbari, Keyvan
2015-01-01
After proposing the idea of antiproton cancer treatment in 1984 many experiments were launched to investigate different aspects of physical and radiobiological properties of antiproton, which came from its annihilation reactions. One of these experiments has been done at the European Organization for Nuclear Research known as CERN using the antiproton decelerator. The ultimate goal of this experiment was to assess the dosimetric and radiobiological properties of beams of antiprotons in order to estimate the suitability of antiprotons for radiotherapy. One difficulty on this way was the unavailability of antiproton beam in CERN for a long time, so the verification of Monte Carlo codes to simulate antiproton depth dose could be useful. Among available simulation codes, Geant4 provides acceptable flexibility and extensibility, which progressively lead to the development of novel Geant4 applications in research domains, especially modeling the biological effects of ionizing radiation at the sub-cellular scale. In this study, the depth dose corresponding to CERN antiproton beam energy by Geant4 recruiting all the standard physics lists currently available and benchmarked for other use cases were calculated. Overall, none of the standard physics lists was able to draw the antiproton percentage depth dose. Although, with some models our results were promising, the Bragg peak level remained as the point of concern for our study. It is concluded that the Bertini model with high precision neutron tracking (QGSP_BERT_HP) is the best to match the experimental data though it is also the slowest model to simulate events among the physics lists.
Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca
2015-10-31
To evaluate the performance of a broad scope model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 previously treated patients in two different institutions, were selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients from the same institution and from another clinic not providing patients for the training phase. Comparison of the automated plans was done against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3 %) the reference plans failed to respect the constraints while the model-based plans succeeded. Only in 3 cases (<0.5 %) the reference plans passed the criteria while the model-based failed. In 5.3 % of the cases both groups of plans failed and in the remaining cases both passed the tests. Plans were optimised using a broad scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. Particularly the plans optimised for patients from the third centre, not participating to the training, resulted in superior quality. The data suggests that the new engine is reliable and could encourage its application to clinical practice.
Jaciw, Andrew P
2016-06-01
Various studies have examined bias in impact estimates from comparison group studies (CGSs) of job training programs, and in education, where results are benchmarked against experimental results. Such within-study comparison (WSC) approaches investigate levels of bias in CGS-based impact estimates, as well as the success of various design and analytic strategies for reducing bias. This article reviews past literature and summarizes conditions under which CGSs replicate experimental benchmark results. It extends the framework to, and develops the methodology for, situations where results from CGSs are generalized to untreated inference populations. Past research is summarized; methods are developed to examine bias in program impact estimates based on cross-site comparisons in a multisite trial that are evaluated against site-specific experimental benchmarks. Students in Grades K-3 in 79 schools in Tennessee; students in Grades 4-8 in 82 schools in Alabama. Grades K-3 Stanford Achievement Test (SAT) in reading and math scores; Grades 4-8 SAT10 reading scores. Past studies show that bias in CGS-based estimates can be limited through strong design, with local matching, and appropriate analysis involving pretest covariates and variables that represent selection processes. Extension of the methodology to investigate accuracy of generalized estimates from CGSs shows bias from confounders and effect moderators. CGS results, when extrapolated to untreated inference populations, may be biased due to variation in outcomes and impact. Accounting for effects of confounders or moderators may reduce bias. © The Author(s) 2016.
Benchmark map of forest carbon stocks in tropical regions across three continents.
Saatchi, Sassan S; Harris, Nancy L; Brown, Sandra; Lefsky, Michael; Mitchard, Edward T A; Salas, William; Zutta, Brian R; Buermann, Wolfgang; Lewis, Simon L; Hagen, Stephen; Petrova, Silvia; White, Lee; Silman, Miles; Morel, Alexandra
2011-06-14
Developing countries are required to produce robust estimates of forest carbon stocks for successful implementation of climate change mitigation policies related to reducing emissions from deforestation and degradation (REDD). Here we present a "benchmark" map of biomass carbon stocks over 2.5 billion ha of forests on three continents, encompassing all tropical forests, for the early 2000s, which will be invaluable for REDD assessments at both project and national scales. We mapped the total carbon stock in live biomass (above- and belowground), using a combination of data from 4,079 in situ inventory plots and satellite light detection and ranging (Lidar) samples of forest structure to estimate carbon storage, plus optical and microwave imagery (1-km resolution) to extrapolate over the landscape. The total biomass carbon stock of forests in the study region is estimated to be 247 Gt C, with 193 Gt C stored aboveground and 54 Gt C stored belowground in roots. Forests in Latin America, sub-Saharan Africa, and Southeast Asia accounted for 49%, 25%, and 26% of the total stock, respectively. By analyzing the errors propagated through the estimation process, uncertainty at the pixel level (100 ha) ranged from ± 6% to ± 53%, but was constrained at the typical project (10,000 ha) and national (>1,000,000 ha) scales at ca. ± 5% and ca. ± 1%, respectively. The benchmark map illustrates regional patterns and provides methodologically comparable estimates of carbon stocks for 75 developing countries where previous assessments were either poor or incomplete.
NASA Astrophysics Data System (ADS)
Berger, Thomas; Matthiä, Daniel; Koerner, Christine; George, Kerry; Rhone, Jordan; Cucinotta, Francis A.; Reitz, Guenther
The adequate knowledge of the radiation environment and the doses incurred during a space mission is essential for estimating an astronaut's health risk. The space radiation environment is complex and variable, and exposures inside the spacecraft and the astronaut's body are com-pounded by the interactions of the primary particles with the atoms of the structural materials and with the body itself. Astronauts' radiation exposures are measured by means of personal dosimetry, but there remains substantial uncertainty associated with the computational extrap-olation of skin dose to organ dose, which can lead to over-or under-estimation of the health risk. Comparisons of models to data showed that the astronaut's Effective dose (E) can be pre-dicted to within about a +10In the research experiment "Depth dose distribution study within a phantom torso" at the NASA Space Radiation Laboratory (NSRL) at BNL, Brookhaven, USA the large 1972 SPE spectrum was simulated using seven different proton energies from 50 up to 450 MeV. A phantom torso constructed of natural bones and realistic distributions of human tissue equivalent materials, which is comparable to the torso of the MATROSHKA phantom currently on the ISS, was equipped with a comprehensive set of thermoluminescence detectors and human cells. The detectors are applied to assess the depth dose distribution and radiation transport codes (e.g. GEANT4) are used to assess the radiation field and interactions of the radiation field with the phantom torso. Lymphocyte cells are strategically embedded at selected locations at the skin and internal organs and are processed after irradiation to assess the effects of shielding on the yield of chromosome damage. The first focus of the pre-sented experiment is to correlate biological results with physical dosimetry measurements in the phantom torso. Further on the results of the passive dosimetry using the anthropomorphic phantoms represent the best tool to generate reliable to benchmark computational radiation transport models in a radiation field of interest. The presentation will give first results of the physical dose distribution, the comparison with GEANT4 computer simulations, based on a Voxel model of the phantom, and a comparison with the data from the chromosome aberration study. The help and support of Adam Russek and Michael Sivertz of the NASA Space Radiation Laboratory (NSRL), Brookhaven, USA during the setup and the irradiation of the phantom are highly appreciated. The Voxel model describing the human phantom used for the GEANT4 simulations was kindly provided by Monika Puchalska (CHALMERS, Gothenburg, Sweden).
Severe iodine deficiency is known to cause adverse health outcomes and remains a benchmark for understanding the effects of hypothyroidism. However, the implications of marginal iodine deficiency on function of the thyroid axis remain less well known. The current study examined t...
CatReg Software for Categorical Regression Analysis (May 2016)
CatReg 3.0 is a Microsoft Windows enhanced version of the Agency’s categorical regression analysis (CatReg) program. CatReg complements EPA’s existing Benchmark Dose Software (BMDS) by greatly enhancing a risk assessor’s ability to determine whether data from separate toxicologic...
More, Simon J; Clegg, Tracy A; McCoy, Finola
2017-08-01
In this study, we used national-level data to describe trends in on-farm intramammary antimicrobial usage in Ireland from 2003 to 2015. We calculated actual sales of intramammary tubes and the quantity of active substance sold, by year, product type [lactation or dry cow therapy (DCT)], antimicrobial group, World Health Organization antimicrobial classification, and from 2009 to 2015, prescribing route. We also estimated on-farm usage of lactation and dry cow intramammary antimicrobials using defined daily dose (DDDvet) and defined course dose (DCDvet) calculations, and dry cow coverage. Sales of tubes of antimicrobial for DCT have increased, and the estimated national dry cow coverage in 2015 was 1,022 DCDvet per 1,000 cows per year. An increase has also occurred in sales of teat sealant (2015 sales: 66.7 tubes with teat sealant for every 100 tubes with antimicrobial for DCT). In contrast, the number of tubes of antimicrobial sold for lactation use has decreased to 1,398 DDDvet and 466 DCDvet per 1,000 animals per year. Sales in intramammary tubes with at least one critically important antimicrobial (CIA) have either risen since 2007 (DCT) or fallen (lactation therapy). Increases were observed in both the number of dry cow and lactation tubes containing CIA considered of highest priority for human health. Differences between prescribing routes with respect to CIA usage were observed. This study provides detailed insight into on-farm usage of intramammary antimicrobials in Ireland. It demonstrates positive national progress but also highlights areas for review. In particular, blanket dry cow treatment in Ireland should be reconsidered. It is not possible to investigate farm-level variation in antimicrobial usage from national sales data. In several countries, measurement and benchmarking have been critical to progress in reducing antimicrobial usage in farm animal production. Central collation of data on farm-level antimicrobial use is also needed in Ireland to allow objective measurement and benchmarking of on-farm usage. More generally, standardized indicators to quantify antimicrobial usage in farm animals are urgently needed to allow country-level comparisons. The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides
Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.
2016-01-01
Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics of sediment, and uncertainty in TEB values. Additional evaluations of benchmarks in relation to sediment chemistry and toxicity are ongoing.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-16
... need for periodic benchmarking of the M3 estimates to reflect the manufacturing universe. The Annual.... The Census Bureau will use these data to develop universe estimates of unfilled orders as of the end...
Che, W W; Frey, H Christopher; Lau, Alexis K H
2014-12-01
Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5 ) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m(3) for simulated children using the stratified population sampling method, and 12.2 μg/m(3) using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m(3) due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
Using Benchmarking To Strengthen the Assessment of Persistence.
McLachlan, Michael S; Zou, Hongyan; Gouin, Todd
2017-01-03
Chemical persistence is a key property for assessing chemical risk and chemical hazard. Current methods for evaluating persistence are based on laboratory tests. The relationship between the laboratory based estimates and persistence in the environment is often unclear, in which case the current methods for evaluating persistence can be questioned. Chemical benchmarking opens new possibilities to measure persistence in the field. In this paper we explore how the benchmarking approach can be applied in both the laboratory and the field to deepen our understanding of chemical persistence in the environment and create a firmer scientific basis for laboratory to field extrapolation of persistence test results.
Research on computer systems benchmarking
NASA Technical Reports Server (NTRS)
Smith, Alan Jay (Principal Investigator)
1996-01-01
This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
A Seafloor Benchmark for 3-dimensional Geodesy
NASA Astrophysics Data System (ADS)
Chadwell, C. D.; Webb, S. C.; Nooner, S. L.
2014-12-01
We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.
Validating vignette and conjoint survey experiments against real-world behavior
Hainmueller, Jens; Hangartner, Dominik; Yamamoto, Teppei
2015-01-01
Survey experiments, like vignette and conjoint analyses, are widely used in the social sciences to elicit stated preferences and study how humans make multidimensional choices. However, there is a paucity of research on the external validity of these methods that examines whether the determinants that explain hypothetical choices made by survey respondents match the determinants that explain what subjects actually do when making similar choices in real-world situations. This study compares results from conjoint and vignette analyses on which immigrant attributes generate support for naturalization with closely corresponding behavioral data from a natural experiment in Switzerland, where some municipalities used referendums to decide on the citizenship applications of foreign residents. Using a representative sample from the same population and the official descriptions of applicant characteristics that voters received before each referendum as a behavioral benchmark, we find that the effects of the applicant attributes estimated from the survey experiments perform remarkably well in recovering the effects of the same attributes in the behavioral benchmark. We also find important differences in the relative performances of the different designs. Overall, the paired conjoint design, where respondents evaluate two immigrants side by side, comes closest to the behavioral benchmark; on average, its estimates are within 2% percentage points of the effects in the behavioral benchmark. PMID:25646415
Jacob, S A; Ng, W L; Do, V
2015-02-01
There is wide variation in the proportion of newly diagnosed cancer patients who receive chemotherapy, indicating the need for a benchmark rate of chemotherapy utilisation. This study describes an evidence-based model that estimates the proportion of new cancer patients in whom chemotherapy is indicated at least once (defined as the optimal chemotherapy utilisation rate). The optimal chemotherapy utilisation rate can act as a benchmark for measuring and improving the quality of care. Models of optimal chemotherapy utilisation were constructed for each cancer site based on indications for chemotherapy identified from evidence-based treatment guidelines. Data on the proportion of patient- and tumour-related attributes for which chemotherapy was indicated were obtained, using population-based data where possible. Treatment indications and epidemiological data were merged to calculate the optimal chemotherapy utilisation rate. Monte Carlo simulations and sensitivity analyses were used to assess the effect of controversial chemotherapy indications and variations in epidemiological data on our model. Chemotherapy is indicated at least once in 49.1% (95% confidence interval 48.8-49.6%) of all new cancer patients in Australia. The optimal chemotherapy utilisation rates for individual tumour sites ranged from a low of 13% in thyroid cancers to a high of 94% in myeloma. The optimal chemotherapy utilisation rate can serve as a benchmark for planning chemotherapy services on a population basis. The model can be used to evaluate service delivery by comparing the benchmark rate with patterns of care data. The overall estimate for other countries can be obtained by substituting the relevant distribution of cancer types. It can also be used to predict future chemotherapy workload and can be easily modified to take into account future changes in cancer incidence, presentation stage or chemotherapy indications. Copyright © 2014 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Ciesielski, Bartlomiej; Marciniak, Agnieszka; Zientek, Agnieszka; Krefft, Karolina; Cieszyński, Mateusz; Boguś, Piotr; Prawdzik-Dampc, Anita
2016-12-01
This study is about the accuracy of EPR dosimetry in bones based on deconvolution of the experimental spectra into the background (BG) and the radiation-induced signal (RIS) components. The model RIS's were represented by EPR spectra from irradiated enamel or bone powder; the model BG signals by EPR spectra of unirradiated bone samples or by simulated spectra. Samples of compact and trabecular bones were irradiated in the 30-270 Gy range and the intensities of their RIS's were calculated using various combinations of those benchmark spectra. The relationships between the dose and the RIS were linear (R 2 > 0.995), with practically no difference between results obtained when using signals from irradiated enamel or bone as the model RIS. Use of different experimental spectra for the model BG resulted in variations in intercepts of the dose-RIS calibration lines, leading to systematic errors in reconstructed doses, in particular for high- BG samples of trabecular bone. These errors were reduced when simulated spectra instead of the experimental ones were used as the benchmark BG signal in the applied deconvolution procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Benchmark Dose Software Development and Maintenance Ten Berge Cxt Models
This report is intended to provide an overview of beta version 1.0 of the implementation of a concentration-time (CxT) model originally programmed and provided by Wil ten Berge (referred to hereafter as the ten Berge model). The recoding and development described here represent ...
Mille, Matthew M; Jung, Jae Won; Lee, Choonik; Kuzmin, Gleb A; Lee, Choonsik
2018-06-01
Radiation dosimetry is an essential input for epidemiological studies of radiotherapy patients aimed at quantifying the dose-response relationship of late-term morbidity and mortality. Individualised organ dose must be estimated for all tissues of interest located in-field, near-field, or out-of-field. Whereas conventional measurement approaches are limited to points in water or anthropomorphic phantoms, computational approaches using patient images or human phantoms offer greater flexibility and can provide more detailed three-dimensional dose information. In the current study, we systematically compared four different dose calculation algorithms so that dosimetrists and epidemiologists can better understand the advantages and limitations of the various approaches at their disposal. The four dose calculations algorithms considered were as follows: the (1) Analytical Anisotropic Algorithm (AAA) and (2) Acuros XB algorithm (Acuros XB), as implemented in the Eclipse treatment planning system (TPS); (3) a Monte Carlo radiation transport code, EGSnrc; and (4) an accelerated Monte Carlo code, the x-ray Voxel Monte Carlo (XVMC). The four algorithms were compared in terms of their accuracy and appropriateness in the context of dose reconstruction for epidemiological investigations. Accuracy in peripheral dose was evaluated first by benchmarking the calculated dose profiles against measurements in a homogeneous water phantom. Additional simulations in a heterogeneous cylinder phantom evaluated the performance of the algorithms in the presence of tissue heterogeneity. In general, we found that the algorithms contained within the commercial TPS (AAA and Acuros XB) were fast and accurate in-field or near-field, but not acceptable out-of-field. Therefore, the TPS is best suited for epidemiological studies involving large cohorts and where the organs of interest are located in-field or partially in-field. The EGSnrc and XVMC codes showed excellent agreement with measurements both in-field and out-of-field. The EGSnrc code was the most accurate dosimetry approach, but was too slow to be used for large-scale epidemiological cohorts. The XVMC code showed similar accuracy to EGSnrc, but was significantly faster, and thus epidemiological applications seem feasible, especially when the organs of interest reside far away from the field edge.
[Evaluation of Organ Dose Estimation from Indices of CT Dose Using Dose Index Registry].
Iriuchijima, Akiko; Fukushima, Yasuhiro; Ogura, Akio
Direct measurement of each patient organ dose from computed tomography (CT) is not possible. Most methods to estimate patient organ dose is using Monte Carlo simulation with dedicated software. However, dedicated software is too expensive for small scale hospitals. Not every hospital can estimate organ dose with dedicated software. The purpose of this study was to evaluate the simple method of organ dose estimation using some common indices of CT dose. The Monte Carlo simulation software Radimetrics (Bayer) was used for calculating organ dose and analysis relationship between indices of CT dose and organ dose. Multidetector CT scanners were compared with those from two manufactures (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare). Using stored patient data from Radimetrics, the relationships between indices of CT dose and organ dose were indicated as each formula for estimating organ dose. The accuracy of estimation method of organ dose was compared with the results of Monte Carlo simulation using the Bland-Altman plots. In the results, SSDE was the feasible index for estimation organ dose in almost organs because it reflected each patient size. The differences of organ dose between estimation and simulation were within 23%. In conclusion, our estimation method of organ dose using indices of CT dose is convenient for clinical with accuracy.
The current state of knowledge on the use of the benchmark dose concept in risk assessment.
Sand, Salomon; Victorin, Katarina; Filipsson, Agneta Falk
2008-05-01
This review deals with the current state of knowledge on the use of the benchmark dose (BMD) concept in health risk assessment of chemicals. The BMD method is an alternative to the traditional no-observed-adverse-effect level (NOAEL) and has been presented as a methodological improvement in the field of risk assessment. The BMD method has mostly been employed in the USA but is presently given higher attention also in Europe. The review presents a number of arguments in favor of the BMD, relative to the NOAEL. In addition, it gives a detailed overview of the several procedures that have been suggested and applied for BMD analysis, for quantal as well as continuous data. For quantal data the BMD is generally defined as corresponding to an additional or extra risk of 5% or 10%. For continuous endpoints it is suggested that the BMD is defined as corresponding to a percentage change in response relative to background or relative to the dynamic range of response. Under such definitions, a 5% or 10% change can be considered as default. Besides how to define the BMD and its lower bound, the BMDL, the question of how to select the dose-response model to be used in the BMD and BMDL determination is highlighted. Issues of study design and comparison of dose-response curves and BMDs are also covered. Copyright (c) 2007 John Wiley & Sons, Ltd.
Sahuquillo, I; Lagarda, M J; Silvestre, M D; Farré, R
2007-08-01
The mercury content of 25 samples of fish and seafood products most frequently consumed in Spain was determined. A simple method comprising cold vapour and atomic absorption spectrometry was used to determine separately inorganic and organic mercury. In all samples inorganic mercury content was below 50 microg kg(-1). There was wide variability, among not only the mercury levels of different fish species, but also for different samples of the same species - with the methylmercury content ranging from below 54 to 662 microg kg(-1). The highest mean methylmercury content was found in fresh tuna. Based on an average total fish consumption of 363 g/person week(-1), the methylmercury intake was estimated to be 46.2 microg/person week(-1). Therefore, the mercury intake of Spanish people with a body weight < or = 60 kg is lower than the Joint FAO/WHO Expert Committee on Food Additives (JECFA) provisional tolerable weekly intake (PTWI) of 1.6 microg kg(-1) body weight, but exceeds the US National Research Council (NRC) limit of 0.7 microg kg(-1) body weight week(-1) based on a benchmark dose.
Al-Hallaq, Hania A; Chmura, Steven; Salama, Joseph K; Winter, Kathryn A; Robinson, Clifford G; Pisansky, Thomas M; Borges, Virginia; Lowenstein, Jessica R; McNulty, Susan; Galvin, James M; Followill, David S; Timmerman, Robert D; White, Julia R; Xiao, Ying; Matuszak, Martha M
In 2014, the NRG Oncology Group initiated the first National Cancer Institute-sponsored, phase 1 clinical trial of stereotactic body radiation therapy (SBRT) for the treatment of multiple metastases in multiple organ sites (BR001; NCT02206334). The primary endpoint is to test the safety of SBRT for the treatment of 2 to 4 multiple lesions in several anatomic sites in a multi-institutional setting. Because of the technical challenges inherent to treating multiple lesions as their spatial separation decreases, we present the technical requirements for NRG-BR001 and the rationale for their selection. Patients with controlled primary tumors of breast, non-small cell lung, or prostate are eligible if they have 2 to 4 metastases distributed among 7 extracranial anatomic locations throughout the body. Prescription and organ-at-risk doses were determined by expert consensus. Credentialing requirements include (1) irradiation of the Imaging and Radiation Oncology Core phantom with SBRT, (2) submitting image guided radiation therapy case studies, and (3) planning the benchmark. Guidelines for navigating challenging planning cases including assessing composite dose are discussed. Dosimetric planning to multiple lesions receiving differing doses (45-50 Gy) and fractionation (3-5) while irradiating the same organs at risk is discussed, particularly for metastases in close proximity (≤5 cm). The benchmark case was selected to demonstrate the planning tradeoffs required to satisfy protocol requirements for 2 nearby lesions. Examples of passing benchmark plans exhibited a large variability in plan conformity. NRG-BR001 was developed using expert consensus on multiple issues from the dose fractionation regimen to the minimum image guided radiation therapy guidelines. Credentialing was tied to the task rather than the anatomic site to reduce its burden. Every effort was made to include a variety of delivery methods to reflect current SBRT technology. Although some simplifications were adopted, the successful completion of this trial will inform future designs of both national and institutional trials and would allow immediate clinical adoption of SBRT trials for oligometastases. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Probabilistic risk assessment of exposure to leucomalachite green residues from fish products.
Chu, Yung-Lin; Chimeddulam, Dalaijamts; Sheen, Lee-Yan; Wu, Kuen-Yuh
2013-12-01
To assess the potential risk of human exposure to carcinogenic leucomalachite green (LMG) due to fish consumption, the probabilistic risk assessment was conducted for adolescent, adult and senior adult consumers in Taiwan. The residues of LMG with the mean concentration of 13.378±20.56 μg kg(-1) (BFDA, 2009) in fish was converted into dose, considering fish intake reported for three consumer groups by NAHSIT (1993-1996) and body weight of an average individual of the group. The lifetime average and high 95th percentile dietary intakes of LMG from fish consumption for Taiwanese consumers were estimated at up to 0.0135 and 0.0451 μg kg-bw(-1) day(-1), respectively. Human equivalent dose (HED) of 2.875 mg kg-bw(-1) day(-1) obtained from a lower-bound benchmark dose (BMDL10) in mice by interspecies extrapolation was linearly extrapolated to oral cancer slope factor (CSF) of 0.035 (mgkg-bw(-1)day(-1))(-1) for humans. Although, the assumptions and methods are different, the results of lifetime cancer risk varying from 3×10(-7) to 1.6×10(-6) were comparable to those of margin of exposures (MOEs) varying from 410,000 to 4,800,000. In conclusions, Taiwanese fish consumers with the 95th percentile LADD of LMG have greater risk of liver cancer and need to an action of risk management in Taiwan. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egan, A; Laub, W
2014-06-15
Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high risk plans. Once a problematic plan is identified, the treatment can be recalculated with more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources dosimetric error in the AAA algorithm. First, due to a combination of discrepancies inmore » smallfield beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can overestimated. Second, due the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions very low density (< 0.2 g/cm{sup 3}). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the lowdensity index (LDI) respectively. Results: Error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. Algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking, this method can be developed into a clinical tool that may be able to help estimate AAA dose calculation errors and when it might be advisable to use Monte Carlo calculations.« less
Bayesian Dose-Response Modeling in Sparse Data
NASA Astrophysics Data System (ADS)
Kim, Steven B.
This book discusses Bayesian dose-response modeling in small samples applied to two different settings. The first setting is early phase clinical trials, and the second setting is toxicology studies in cancer risk assessment. In early phase clinical trials, experimental units are humans who are actual patients. Prior to a clinical trial, opinions from multiple subject area experts are generally more informative than the opinion of a single expert, but we may face a dilemma when they have disagreeing prior opinions. In this regard, we consider compromising the disagreement and compare two different approaches for making a decision. In addition to combining multiple opinions, we also address balancing two levels of ethics in early phase clinical trials. The first level is individual-level ethics which reflects the perspective of trial participants. The second level is population-level ethics which reflects the perspective of future patients. We extensively compare two existing statistical methods which focus on each perspective and propose a new method which balances the two conflicting perspectives. In toxicology studies, experimental units are living animals. Here we focus on a potential non-monotonic dose-response relationship which is known as hormesis. Briefly, hormesis is a phenomenon which can be characterized by a beneficial effect at low doses and a harmful effect at high doses. In cancer risk assessments, the estimation of a parameter, which is known as a benchmark dose, can be highly sensitive to a class of assumptions, monotonicity or hormesis. In this regard, we propose a robust approach which considers both monotonicity and hormesis as a possibility. In addition, We discuss statistical hypothesis testing for hormesis and consider various experimental designs for detecting hormesis based on Bayesian decision theory. Past experiments have not been optimally designed for testing for hormesis, and some Bayesian optimal designs may not be optimal under a wrong parametric assumption. In this regard, we consider a robust experimental design which does not require any parametric assumption.
Dong, Nianbo; Lipsey, Mark W
2017-01-01
It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data-Four studies with samples of pre-K children each provided data on two math achievement outcome measures with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research Design and Data Analysis-A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control and the effects were reestimated using PSA. The correspondence was evaluated using multiple criteria. The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, but those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.
NASA Astrophysics Data System (ADS)
Tanaka, Ken-ichi
2016-06-01
We performed benchmark calculation for radioactivity activated in a Primary Containment Vessel (PCV) of a Boiling Water Reactor (BWR) by using MAXS library, which was developed by collapsing with neutron energy spectra in the PCV of the BWR. Radioactivities due to neutron irradiation were measured by using activation foil detector of Gold (Au) and Nickel (Ni) at thirty locations in the PCV. We performed activation calculations of the foils with SCALE5.1/ORIGEN-S code with irradiation conditions of each foil location as the benchmark calculation. We compared calculations and measurements to estimate an effectiveness of MAXS library.
Extensions to the integral line-beam method for gamma-ray skyshine analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.
1995-08-01
A computationally simple method for estimating gamma-ray skyshine dose rates has been developed on the basis of the line-beam response function. Both Monte Carlo and pointkernel calculations that account for both annihilation and bremsstrahlung were used in the generation of line beam response functions (LBRF) for gamma-ray energies between 10 and 100 MeV. The LBRF is approximated by a three-parameter formula. By combining results with those obtained in an earlier study for gamma energies below 10 MeV, LBRF values are readily and accurately evaluated for source energies between 0.02 and 100 MeV, for source-to-detector distances between 1 and 3000 m,more » and beam angles as great as 180 degrees. Tables of the parameters for the approximate LBRF are presented. The new response functions are then applied to three simple skyshine geometries, an open silo geometry, an infinite wall, and a rectangular four-wall building. Results are compared to those of previous calculations and to benchmark measurements. A new approach is introduced to account for overhead shielding of the skyshine source and compared to the simplistic exponential-attenuation method used in earlier studies. The effect of the air-ground interface, usually neglected in gamma skyshine studies, is also examined and an empirical correction factor is introduced. Finally, a revised code based on the improved LBRF approximations and the treatment of the overhead shielding is presented, and results shown for several benchmark problems.« less
Vehicle Sprung Mass Estimation for Rough Terrain
2011-03-01
distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas (1999...developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung...mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, extended
Webster, A. Francina; Chepelev, Nikolai; Gagné, Rémi; Kuo, Byron; Recio, Leslie; Williams, Andrew; Yauk, Carole L.
2015-01-01
Many regulatory agencies are exploring ways to integrate toxicogenomic data into their chemical risk assessments. The major challenge lies in determining how to distill the complex data produced by high-content, multi-dose gene expression studies into quantitative information. It has been proposed that benchmark dose (BMD) values derived from toxicogenomics data be used as point of departure (PoD) values in chemical risk assessments. However, there is limited information regarding which genomics platforms are most suitable and how to select appropriate PoD values. In this study, we compared BMD values modeled from RNA sequencing-, microarray-, and qPCR-derived gene expression data from a single study, and explored multiple approaches for selecting a single PoD from these data. The strategies evaluated include several that do not require prior mechanistic knowledge of the compound for selection of the PoD, thus providing approaches for assessing data-poor chemicals. We used RNA extracted from the livers of female mice exposed to non-carcinogenic (0, 2 mg/kg/day, mkd) and carcinogenic (4, 8 mkd) doses of furan for 21 days. We show that transcriptional BMD values were consistent across technologies and highly predictive of the two-year cancer bioassay-based PoD. We also demonstrate that filtering data based on statistically significant changes in gene expression prior to BMD modeling creates more conservative BMD values. Taken together, this case study on mice exposed to furan demonstrates that high-content toxicogenomics studies produce robust data for BMD modelling that are minimally affected by inter-technology variability and highly predictive of cancer-based PoD doses. PMID:26313361
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sample, B.E. Opresko, D.M. Suter, G.W.
Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less thanmore » these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk, osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water)]. The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only. Exposure through inhalation and/or direct dermal exposure are not considered in this report.« less
2012-01-01
Background In the absence of current cumulative dietary exposure assessments, this analysis was conducted to estimate exposure to multiple dietary contaminants for children, who are more vulnerable to toxic exposure than adults. Methods We estimated exposure to multiple food contaminants based on dietary data from preschool-age children (2–4 years, n=207), school-age children (5–7 years, n=157), parents of young children (n=446), and older adults (n=149). We compared exposure estimates for eleven toxic compounds (acrylamide, arsenic, lead, mercury, chlorpyrifos, permethrin, endosulfan, dieldrin, chlordane, DDE, and dioxin) based on self-reported food frequency data by age group. To determine if cancer and non-cancer benchmark levels were exceeded, chemical levels in food were derived from publicly available databases including the Total Diet Study. Results Cancer benchmark levels were exceeded by all children (100%) for arsenic, dieldrin, DDE, and dioxins. Non-cancer benchmarks were exceeded by >95% of preschool-age children for acrylamide and by 10% of preschool-age children for mercury. Preschool-age children had significantly higher estimated intakes of 6 of 11 compounds compared to school-age children (p<0.0001 to p=0.02). Based on self-reported dietary data, the greatest exposure to pesticides from foods included in this analysis were tomatoes, peaches, apples, peppers, grapes, lettuce, broccoli, strawberries, spinach, dairy, pears, green beans, and celery. Conclusions Dietary strategies to reduce exposure to toxic compounds for which cancer and non-cancer benchmarks are exceeded by children vary by compound. These strategies include consuming organically produced dairy and selected fruits and vegetables to reduce pesticide intake, consuming less animal foods (meat, dairy, and fish) to reduce intake of persistent organic pollutants and metals, and consuming lower quantities of chips, cereal, crackers, and other processed carbohydrate foods to reduce acrylamide intake. PMID:23140444
Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy
Testa, M.; Schümann, J.; Lu, H.-M.; Shin, J.; Faddegon, B.; Perl, J.; Paganetti, H.
2013-01-01
Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all the institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program in our institution as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness and symmetry and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS’ capabilities to simulate time dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field. Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the nuclear interaction models used in the simulations. Results: SOBPs’ range and modulation width were reproduced, on average, with an accuracy of +1, −2 and ±3 mm, respectively. OF simulations reproduced measured data within ±3%. Simulated 2D dose-profiles show field flatness and average field radius within ±3% of measured profiles. The field symmetry resulted, on average in ±3% agreement with commissioned profiles. TOPAS accuracy in reproducing measured dose profiles downstream the half beam shifter is better than 2%. Dose rate function simulation reproduced the measurements within ∼2% showing that the four-dimensional modeling of the passively modulation system was implement correctly and millimeter accuracy can be achieved in reproducing measured data. For MLFCs simulations, 2% agreement was found between TOPAS and both sets of experimental measurements. The overall results show that TOPAS simulations are within the clinical accepted tolerances for all QA measurements performed at our institution. Conclusions: Our Monte Carlo simulations reproduced accurately the experimental data acquired through all the measurements performed in this study. Thus, TOPAS can reliably be applied to quality assurance for proton therapy and also as an input for commissioning of commercial treatment planning systems. This work also provides the basis for routine clinical dose calculations in patients for all passive scattering proton therapy centers using TOPAS. PMID:24320505
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heilemann, G., E-mail: gerd.heilemann@meduniwien.ac.at; Kostiukhina, N.; Nesvacil, N.
2015-10-15
Purpose: The purpose of this study was to establish a method to perform multidimensional radiochromic film measurements of {sup 106}Ru plaques and to benchmark the resulting dose distributions against Monte Carlo simulations (MC), microdiamond, and diode measurements. Methods: Absolute dose rates and relative dose distributions in multiple planes were determined for three different plaque models (CCB, CCA, and COB), and three different plaques per model, using EBT3 films in an in-house developed polystyrene phantom and the MCNP6 MC code. Dose difference maps were generated to analyze interplaque variations for a specific type, and for comparing measurements against MC simulations. Furthermore,more » dose distributions were validated against values specified by the manufacturer (BEBIG) and microdiamond and diode measurements in a water scanning phantom. Radial profiles were assessed and used to estimate dosimetric margins for a given combination of representative tumor geometry and plaque size. Results: Absolute dose rates at a reference depth of 2 mm on the central axis of the plaque show an agreement better than 5% (10%) when comparing film measurements (MCNP6) to the manufacturer’s data. The reproducibility of depth-dose profile measurements was <7% (2 SD) for all investigated detectors and plaque types. Dose difference maps revealed minor interplaque deviations for a specific plaque type due to inhomogeneities of the active layer. The evaluation of dosimetric margins showed that for a majority of the investigated cases, the tumor was not completely covered by the 100% isodose prescribed to the tumor apex if the difference between geometrical plaque size and tumor base ≤4 mm. Conclusions: EBT3 film dosimetry in an in-house developed phantom was successfully used to characterize the dosimetric properties of different {sup 106}Ru plaque models. The film measurements were validated against MC calculations and other experimental methods and showed a good agreement with data from BEBIG well within published tolerances. The dosimetric information as well as interplaque comparison can be used for comprehensive quality assurance and for considerations in the treatment planning of ophthalmic brachytherapy.« less
Benchmarking passive transfer of immunity and growth in dairy calves.
Atkinson, D J; von Keyserlingk, M A G; Weary, D M
2017-05-01
Poor health and growth in young dairy calves can have lasting effects on their development and future production. This study benchmarked calf-rearing outcomes in a cohort of Canadian dairy farms, reported these findings back to producers and their veterinarians, and documented the results. A total of 18 Holstein dairy farms were recruited, all in British Columbia. Blood samples were collected from calves aged 1 to 7 d. We estimated serum total protein levels using digital refractometry, and failure of passive transfer (FPT) was defined as values below 5.2 g/dL. We estimated average daily gain (ADG) for preweaned heifers (1 to 70 d old) using heart-girth tape measurements, and analyzed early (≤35 d) and late (>35 d) growth separately. At first assessment, the average farm FPT rate was 16%. Overall, ADG was 0.68 kg/d, with early and late growth rates of 0.51 and 0.90 kg/d, respectively. Following delivery of the benchmark reports, all participants volunteered to undergo a second assessment. The majority (83%) made at least 1 change in their colostrum-management or milk-feeding practices, including increased colostrum at first feeding, reduced time to first colostrum, and increased initial and maximum daily milk allowances. The farms that made these changes experienced improved outcomes. On the 11 farms that made changes to improve colostrum feeding, the rate of FPT declined from 21 ± 10% before benchmarking to 11 ± 10% after making the changes. On the 10 farms that made changes to improve calf growth, ADG improved from 0.66 ± 0.09 kg/d before benchmarking to 0.72 ± 0.08 kg/d after making the management changes. Increases in ADG were greatest in the early milk-feeding period, averaging 0.13 kg/d higher than pre-benchmarking values for calves ≤35 d of age. Benchmarking specific outcomes associated with calf rearing can motivate producer engagement in calf care, leading to improved outcomes for calves on farms that apply relevant management changes. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Basin-scale estimates of oceanic primary production by remote sensing - The North Atlantic
NASA Technical Reports Server (NTRS)
Platt, Trevor; Caverhill, Carla; Sathyendranath, Shubha
1991-01-01
The monthly averaged CZCS data for 1979 are used to estimate annual primary production at ocean basin scales in the North Atlantic. The principal supplementary data used were 873 vertical profiles of chlorophyll and 248 sets of parameters derived from photosynthesis-light experiments. Four different procedures were tested for calculation of primary production. The spectral model with nonuniform biomass was considered as the benchmark for comparison against the other three models. The less complete models gave results that differed by as much as 50 percent from the benchmark. Vertically uniform models tended to underestimate primary production by about 20 percent compared to the nonuniform models. At horizontal scale, the differences between spectral and nonspectral models were negligible. The linear correlation between biomass and estimated production was poor outside the tropics, suggesting caution against the indiscriminate use of biomass as a proxy variable for primary production.
Benford, Diane J
2016-05-01
Genotoxic substances are generally not permitted for deliberate use in food production. However, an appreciable number of known or suspected genotoxic substances occur unavoidably in food, e.g. from natural occurrence, environmental contamination and generation during cooking and processing. Over the past decade a margin of exposure (MOE) approach has increasingly been used in assessing the exposure to substances in food that are genotoxic and carcinogenic. The MOE is defined as a reference point on the dose-response curve (e.g. a benchmark dose lower confidences limit derived from a rodent carcinogenicity study) divided by the estimated human intake. A small MOE indicates a higher concern than a very large MOE. Whilst the MOE cannot be directly equated to risk, it supports prioritisation of substances for further research or for possible regulatory action, and provides a basis for communicating to the public. So far, the MOE approach has been confined to substances for which carcinogenicity data are available. In the absence of carcinogenicity data, evidence of genotoxicity is used only in hazard identification. The challenge to the genetic toxicology community is to develop approaches for characterising risk to human health based on data from genotoxicity studies. In order to achieve wide acceptance, it would be important to further address the issues that have been discussed in the context of dose-response modelling of carcinogenicity data in order to assign levels of concern to particular MOE values, and also whether it is possible to make generic conclusions on how potency in genotoxicity assays relates to carcinogenic potency. © Crown copyright 2015.
Etard, Cécile; Bigand, Emeline; Salvat, Cécile; Vidal, Vincent; Beregi, Jean Paul; Hornbeck, Amaury; Greffier, Joël
2017-10-01
A national retrospective survey on patient doses was performed by the French Society of Medical physicists to assess reference levels (RLs) in interventional radiology as required by the European Directive 2013/59/Euratom. Fifteen interventional procedures in neuroradiology, vascular radiology and osteoarticular procedures were analysed. Kerma area product (KAP), fluoroscopy time (FT), reference air kerma and number of images were recorded for 10 to 30 patients per procedure. RLs were calculated as the 3rd quartiles of the distributions. Results on 4600 procedures from 36 departments confirmed the large variability in patient dose for the same procedure. RLs were proposed for the four dosimetric estimators and the 15 procedures. RLs in terms of KAP and FT were 90 Gm.cm 2 and 11 mins for cerebral angiography, 35 Gy.cm 2 and 16 mins for biliary drainage, 75 Gy.cm 2 and 6 mins for lower limbs arteriography and 70 Gy.cm 2 and 11 mins for vertebroplasty. For these four procedures, RLs were defined according to the complexity of the procedure. For all the procedures, the results were lower than most of those already published. This study reports RLs in interventional radiology based on a national survey. Continual evolution of practices and technologies requires regular updates of RLs. • Delivered dose in interventional radiology depends on procedure, practice and patient. • National RLs are proposed for 15 interventional procedures. • Reference levels (RLs) are useful to benchmark practices and optimize protocols. • RLs are proposed for kerma area product, air kerma, fluoroscopy time and number of images. • RLs should be adapted to the procedure complexity and updated regularly.
Boos, J; Meineke, A; Rubbert, C; Heusch, P; Lanzman, R S; Aissa, J; Antoch, G; Kröpil, P
2016-03-01
To implement automated CT dose data monitoring using the DICOM-Structured Report (DICOM-SR) in order to monitor dose-related CT data in regard to national diagnostic reference levels (DRLs). We used a novel in-house co-developed software tool based on the DICOM-SR to automatically monitor dose-related data from CT examinations. The DICOM-SR for each CT examination performed between 09/2011 and 03/2015 was automatically anonymized and sent from the CT scanners to a cloud server. Data was automatically analyzed in accordance with body region, patient age and corresponding DRL for volumetric computed tomography dose index (CTDIvol) and dose length product (DLP). Data of 36,523 examinations (131,527 scan series) performed on three different CT scanners and one PET/CT were analyzed. The overall mean CTDIvol and DLP were 51.3% and 52.8% of the national DRLs, respectively. CTDIvol and DLP reached 43.8% and 43.1% for abdominal CT (n=10,590), 66.6% and 69.6% for cranial CT (n=16,098) and 37.8% and 44.0% for chest CT (n=10,387) of the compared national DRLs, respectively. Overall, the CTDIvol exceeded national DRLs in 1.9% of the examinations, while the DLP exceeded national DRLs in 2.9% of the examinations. Between different CT protocols of the same body region, radiation exposure varied up to 50% of the DRLs. The implemented cloud-based CT dose monitoring based on the DICOM-SR enables automated benchmarking in regard to national DRLs. Overall the local dose exposure from CT reached approximately 50% of these DRLs indicating that DRL actualization as well as protocol-specific DRLs are desirable. The cloud-based approach enables multi-center dose monitoring and offers great potential to further optimize radiation exposure in radiological departments. • The newly developed software based on the DICOM-Structured Report enables large-scale cloud-based CT dose monitoring • The implemented software solution enables automated benchmarking in regard to national DRLs • The local radiation exposure from CT reached approximately 50 % of the national DRLs • The cloud-based approach offers great potential for multi-center dose analysis. © Georg Thieme Verlag KG Stuttgart · New York.
Nowell, Lisa H.; Crawford, Charles G.; Gilliom, Robert J.; Nakagaki, Naomi; Stone, Wesley W.; Thelin, Gail; Wolock, David M.
2009-01-01
Empirical regression models were developed for estimating concentrations of dieldrin, total chlordane, and total DDT in whole fish from U.S. streams. Models were based on pesticide concentrations measured in whole fish at 648 stream sites nationwide (1992-2001) as part of the U.S. Geological Survey's National Water Quality Assessment Program. Explanatory variables included fish lipid content, estimates (or surrogates) representing historical agricultural and urban sources, watershed characteristics, and geographic location. Models were developed using Tobit regression methods appropriate for data with censoring. Typically, the models explain approximately 50 to 70% of the variability in pesticide concentrations measured in whole fish. The models were used to predict pesticide concentrations in whole fish for streams nationwide using the U.S. Environmental Protection Agency's River Reach File 1 and to estimate the probability that whole-fish concentrations exceed benchmarks for protection of fish-eating wildlife. Predicted concentrations were highest for dieldrin in the Corn Belt, Texas, and scattered urban areas; for total chlordane in the Corn Belt, Texas, the Southeast, and urbanized Northeast; and for total DDT in the Southeast, Texas, California, and urban areas nationwide. The probability of exceeding wildlife benchmarks for dieldrin and chlordane was predicted to be low for most U.S. streams. The probability of exceeding wildlife benchmarks for total DDT is higher but varies depending on the fish taxon and on the benchmark used. Because the models in the present study are based on fish data collected during the 1990s and organochlorine pesticide residues in the environment continue to decline decades after their uses were discontinued, these models may overestimate present-day pesticide concentrations in fish. ?? 2009 SETAC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT based rst principles methods have the potential to provide the accuracy and computational e ciency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quanti cation of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures atmore » thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC based force estimators and use them to gain insights into how well the local liquid structure is captured by di erent density functionals. We nd that TPSS, BLYP and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative di erences exhibited by the major classes of functionals, and estimate the magnitudes of these e ects when possible.« less
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.; ...
2016-01-19
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT based rst principles methods have the potential to provide the accuracy and computational e ciency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quanti cation of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures atmore » thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC based force estimators and use them to gain insights into how well the local liquid structure is captured by di erent density functionals. We nd that TPSS, BLYP and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative di erences exhibited by the major classes of functionals, and estimate the magnitudes of these e ects when possible.« less
The Costs of an Enhanced Employee Assistance Program (EAP) Intervention.
ERIC Educational Resources Information Center
French, Michael T.; Dunlap, Laura J.; Zarkin, Gary A.; Karuntzos, Georgia T.
1998-01-01
This study estimates the economic costs of an enhanced Employee Assistance Program (EAP) intervention at a large midwestern EAP that serves 90 worksites. Results specify developmental and implementation costs and provide benchmark cost estimated for other EAPs that may be considering enhanced services. (SLD)
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on structured support vector machine (SVM) performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy of object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker comprised of a DLSSVM model and a scale correlation filter obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
SU-G-BRC-17: Using Generalized Mean for Equivalent Square Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, S; Fan, Q; Lei, Y
Purpose: Equivalent Square (ES) is a widely used concept in radiotherapy. It enables us to determine many important quantities for a rectangular treatment field, without measurement, based on the corresponding values from its ES field. In this study, we propose a Generalized Mean (GM) type ES formula and compare it with other established formulae using benchmark datasets. Methods: Our GM approach is expressed as ES=(w•fx^α+(1-w)•fy^α)^(1/α), where fx, fy, α, and w represent field sizes, power index, and a weighting factor, respectively. When α=−1 it reduces to well-known Sterling type ES formulae. In our study, α and w are determined throughmore » least-square-fitting. Akaike Information Criterion (AIC) was used to benchmark the performance of each formula. BJR (Supplement 17) ES field table for X-ray PDDs and open field output factor tables in Varian TrueBeam representative dataset were used for validation. Results: Switching from α=−1 to α=−1.25, a 20% reduction in standard deviation of residual error in ES estimation was achieved for the BJR dataset. The maximum relative residual error was reduced from ∼3% (in Sterling formula) or ∼2% (in Vadash/Bjarngard formula) down to ∼1% in GM formula for open fields of all energies and at rectangular field sizes from 3cm to 40cm in the Varian dataset. The improvement of the GM over the Sterling type ES formulae is particularly noticeable for very elongated rectangular fields with short width. AIC analysis confirmed the superior performance of the GM formula after taking into account the expanded parameter space. Conclusion: The GM significantly outperforms Sterling type formulae at slightly increased computational cost. The GM calculation may nullify the requirement of data measurement for many rectangular fields and hence shorten the Linac commissioning process. Improved dose calculation accuracy is also expected by adopting the GM formula into treatment planning and secondary MU check systems.« less
Benchmarking worker nodes using LHCb productions and comparing with HEPSpec06
NASA Astrophysics Data System (ADS)
Charpentier, P.
2017-10-01
In order to estimate the capabilities of a computing slot with limited processing time, it is necessary to know with a rather good precision its “power”. This allows for example pilot jobs to match a task for which the required CPU-work is known, or to define the number of events to be processed knowing the CPU-work per event. Otherwise one always has the risk that the task is aborted because it exceeds the CPU capabilities of the resource. It also allows a better accounting of the consumed resources. The traditional way the CPU power is estimated in WLCG since 2007 is using the HEP-Spec06 benchmark (HS06) suite that was verified at the time to scale properly with a set of typical HEP applications. However, the hardware architecture of processors has evolved, all WLCG experiments moved to using 64-bit applications and use different compilation flags from those advertised for running HS06. It is therefore interesting to check the scaling of HS06 with the HEP applications. For this purpose, we have been using CPU intensive massive simulation productions from the LHCb experiment and compared their event throughput to the HS06 rating of the worker nodes. We also compared it with a much faster benchmark script that is used by the DIRAC framework used by LHCb for evaluating at run time the performance of the worker nodes. This contribution reports on the finding of these comparisons: the main observation is that the scaling with HS06 is no longer fulfilled, while the fast benchmarks have a better scaling but are less precise. One can also clearly see that some hardware or software features when enabled on the worker nodes may enhance their performance beyond expectation from either benchmark, depending on external factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Fujimoto, R
Purpose: The purpose was to demonstrate a developed acceleration technique of dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary of the two parts is varied depending on the beam energy and water equivalent depth by utilizing the beam size as a singular threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas dosesmore » from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique to the optimization process in the TPS and investigated the dependence on the target volume of the speedup effect and applicability to the worst-case optimization (WCO) in benchmarks. Results: We created irradiation plans for various cubic targets and measured the optimization time varying the target volume. The speedup effect was improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm3 target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target’s prescribed dose and OAR’s Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS’s optimization. The technique was effective particularly for large target cases.« less
A Meta-Analysis of Reliability Coefficients in Second Language Research
ERIC Educational Resources Information Center
Plonsky, Luke; Derrick, Deirdre J.
2016-01-01
Ensuring internal validity in quantitative research requires, among other conditions, reliable instrumentation. Unfortunately, however, second language (L2) researchers often fail to report and even more often fail to interpret reliability estimates beyond generic benchmarks for acceptability. As a means to guide interpretations of such estimates,…
Test suite for image-based motion estimation of the brain and tongue
NASA Astrophysics Data System (ADS)
Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.
2017-03-01
Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an "image synthesis" test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head- brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that inconsistent motion can yield "ghost" shear strains, which are a function of slice acquisition viability as opposed to a true physical deformation.
Test Suite for Image-Based Motion Estimation of the Brain and Tongue
Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.
2017-01-01
Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an “image synthesis” test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that inconsistent motion can yield “ghost” shear strains, which are a function of slice acquisition viability as opposed to a true physical deformation. PMID:28781414
Methodology and Data Sources for Assessing Extreme Charging Events within the Earth's Magnetosphere
NASA Astrophysics Data System (ADS)
Parker, L. N.; Minow, J. I.; Talaat, E. R.
2016-12-01
Spacecraft surface and internal charging is a potential threat to space technologies because electrostatic discharges on, or within, charged spacecraft materials can result in a number of adverse impacts to spacecraft systems. The Space Weather Action Plan (SWAP) ionizing radiation benchmark team recognized that spacecraft charging will need to be considered to complete the ionizing radiation benchmarks in order to evaluate the threat of charging to critical space infrastructure operating within the near-Earth ionizing radiation environments. However, the team chose to defer work on the lower energy charging environments and focus the initial benchmark efforts on the higher energy galactic cosmic ray, solar energetic particle, and trapped radiation belt particle environments of concern for radiation dose and single event effects in humans and hardware. Therefore, an initial set of 1 in 100 year spacecraft charging environment benchmarks remains to be defined to meet the SWAP goals. This presentation will discuss the available data sources and a methodology to assess the 1 in 100 year extreme space weather events that drive surface and internal charging threats to spacecraft. Environments to be considered are the hot plasmas in the outer magnetosphere during geomagnetic storms, relativistic electrons in the outer radiation belt, and energetic auroral electrons in low Earth orbit at high latitudes.
ABSTRACT Results of global gene expression profiling after short-term exposures can be used to inform tumorigenic potency and chemical mode of action (MOA) and thus serve as a strategy to prioritize future or data-poor chemicals for further evaluation. This compilation of cas...
Dose-additivity has been the default assumption in risk assessments of pesticides with a common mechanism of action but it has been suspected that there could be non-additive effects. Inhibition of plasma cholinesterase (ChE) activity and hypothermia were used as benchmarks of e...
Neutron skyshine calculations with the integral line-beam method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gui, A.A.; Shultis, J.K.; Faw, R.E.
1997-10-01
Recently developed line- and conical-beam response functions are used to calculate neutron skyshine doses for four idealized source geometries. These calculations, which can serve as benchmarks, are compared with MCNP calculations, and the excellent agreement indicates that the integral conical- and line-beam method is an effective alternative to more computationally expensive transport calculations.
Jelin, Benjamin A; Sun, Wenjie; Kravets, Alexandra; Naboka, Maryna; Stepanova, Eugenia I; Vdovenko, Vitaliy Y; Karmaus, Wilfried J; Lichosherstov, Alex; Svendsen, Erik R
2016-11-01
The Chernobyl Nuclear Power Plant (CNPP) accident represents one of the most significant civilian releases of 137 Cesium ( 137 Cs, radiocesium) in human history. In the Chernobyl-affected region, radiocesium is considered to be the greatest on-going environmental hazard to human health by radiobiologists and public health scientists. The goal of this study was to characterize dosimetric patterns and predictive factors for whole-body count (WBC)-derived radiocesium internal dose estimations in a CNPP-affected children's cohort, and cross-validate these estimations with a soil-based ecological dose estimation model. WBC data were used to estimate the internal effective dose using the International Commission on Radiological Protection (ICRP) 67 dose conversion coefficient for 137 Cs and MONDAL Version 3.01 software. Geometric mean dose estimates from each model were compared utilizing paired t-tests and intra-class correlation coefficients. Additionally, we developed predictive models for WBC-derived dose estimation in order to determine the appropriateness of EMARC to estimate dose for this population. The two WBC-derived dose predictive models identified 137 Cs soil concentration (P<0.0001) as the strongest predictor of annual internal effective dose from radiocesium validating the use of the soil-based EMARC model. The geometric mean internal effective dose estimate of the EMARC model (0.183 mSv/y) was the highest followed by the ICRP 67 dose estimates (0.165 mSv/y) and the MONDAL model estimates (0.149 mSv/y). All three models yielded significantly different geometric mean dose (P<0.05) estimates for this cohort when stratified by sex, age at time of exam and season of exam, except for the mean MONDAL and EMARC estimates for 15- and 16-year olds and mean ICRP and MONDAL estimates for children examined in Winter. Further prospective and retrospective radio-epidemiological studies utilizing refined WBC measurements and ecological model dose estimations, in conjunction with findings from animal toxicological studies, should help elucidate possible deterministic radiogenic health effects associated with chronic low-dose internal exposure to 137 Cs.
NASA Astrophysics Data System (ADS)
Carpentieri, C.; Schwarz, C.; Ludwig, J.; Ashfaq, A.; Fiederle, M.
2002-07-01
High precision concerning the dose calibration of X-ray sources is required when counting and integrating methods are compared. The dose calibration for a dental X-ray tube was executed with special dose calibration equipment (dosimeter) as function of exposure time and rate. Results were compared with a benchmark spectrum and agree within ±1.5%. Dead time investigations with the Medipix1 photon-counting chip (PCC) have been performed by rate variations. Two different types of dead time, paralysable and non-paralysable will be discussed. The dead time depends on settings of the front-end electronics and is a function of signal height, which might lead to systematic defects of systems. Dead time losses in excess of 30% have been found for the PCC at 200 kHz absorbed photons per pixel.
Kocher, David C; Apostoaei, A Iulian; Hoffman, F Owen; Trabalka, John R
2018-06-01
This paper presents an analysis to develop a subjective state-of-knowledge probability distribution of a dose and dose-rate effectiveness factor for use in estimating risks of solid cancers from exposure to low linear energy transfer radiation (photons or electrons) whenever linear dose responses from acute and chronic exposure are assumed. A dose and dose-rate effectiveness factor represents an assumption that the risk of a solid cancer per Gy at low acute doses or low dose rates of low linear energy transfer radiation, RL, differs from the risk per Gy at higher acute doses, RH; RL is estimated as RH divided by a dose and dose-rate effectiveness factor, where RH is estimated from analyses of dose responses in Japanese atomic-bomb survivors. A probability distribution to represent uncertainty in a dose and dose-rate effectiveness factor for solid cancers was developed from analyses of epidemiologic data on risks of incidence or mortality from all solid cancers as a group or all cancers excluding leukemias, including (1) analyses of possible nonlinearities in dose responses in atomic-bomb survivors, which give estimates of a low-dose effectiveness factor, and (2) comparisons of risks in radiation workers or members of the public from chronic exposure to low linear energy transfer radiation at low dose rates with risks in atomic-bomb survivors, which give estimates of a dose-rate effectiveness factor. Probability distributions of uncertain low-dose effectiveness factors and dose-rate effectiveness factors for solid cancer incidence and mortality were combined using assumptions about the relative weight that should be assigned to each estimate to represent its relevance to estimation of a dose and dose-rate effectiveness factor. The probability distribution of a dose and dose-rate effectiveness factor for solid cancers developed in this study has a median (50th percentile) and 90% subjective confidence interval of 1.3 (0.47, 3.6). The harmonic mean is 1.1, which implies that the arithmetic mean of an uncertain estimate of the risk of a solid cancer per Gy at low acute doses or low dose rates of low linear energy transfer radiation is only about 10% less than the mean risk per Gy at higher acute doses. Data were also evaluated to define a low acute dose or low dose rate of low linear energy transfer radiation, i.e., a dose or dose rate below which a dose and dose-rate effectiveness factor should be applied in estimating risks of solid cancers.
Talibov, Madar; Salmelin, Raili; Lehtinen-Jacks, Susanna; Auvinen, Anssi
2017-04-01
Job-exposure matrices (JEM) are used for exposure assessment in occupational studies, but they can involve errors. We assessed agreement between the Nordic Occupational Cancer Studies JEM (NOCCA-JEM) and aggregate and individual dose estimates for cosmic radiation exposure among Finnish airline personnel. Cumulative cosmic radiation exposure for 5,022 airline crew members was compared between a JEM and aggregate and individual dose estimates. The NOCCA-JEM underestimated individual doses. Intraclass correlation coefficient was 0.37, proportion of agreement 64%, kappa 0.46 compared with individual doses. Higher agreement was achieved with aggregate dose estimates, that is annual medians of individual doses and estimates adjusted for heliocentric potentials. The substantial disagreement between NOCCA-JEM and individual dose estimates of cosmic radiation may lead to exposure misclassification and biased risk estimates in epidemiological studies. Using aggregate data may provide improved estimates. Am. J. Ind. Med. 60:386-393, 2017. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
Determining a pre-mining radiological baseline from historic airborne gamma surveys: a case study.
Bollhöfer, Andreas; Beraldo, Annamarie; Pfitzner, Kirrilly; Esparon, Andrew; Doering, Che
2014-01-15
Knowing the baseline level of radioactivity in areas naturally enriched in radionuclides is important in the uranium mining context to assess radiation doses to humans and the environment both during and after mining. This information is particularly useful in rehabilitation planning and developing closure criteria for uranium mines as only radiation doses additional to the natural background are usually considered 'controllable' for radiation protection purposes. In this case study we have tested whether the method of contemporary groundtruthing of a historic airborne gamma survey could be used to determine the pre-mining radiological conditions at the Ranger mine in northern Australia. The airborne gamma survey was flown in 1976 before mining started and groundtruthed using ground gamma dose rate measurements made between 2007 and 2009 at an undisturbed area naturally enriched in uranium (Anomaly 2) located nearby the Ranger mine. Measurements of (226)Ra soil activity concentration and (222)Rn exhalation flux density at Anomaly 2 were made concurrent with the ground gamma dose rate measurements. Algorithms were developed to upscale the ground gamma data to the same spatial resolution as the historic airborne gamma survey data using a geographic information system, allowing comparison of the datasets. Linear correlation models were developed to estimate the pre-mining gamma dose rates, (226)Ra soil activity concentrations, and (222)Rn exhalation flux densities at selected areas in the greater Ranger region. The modelled levels agreed with measurements made at the Ranger Orebodies 1 and 3 before mining started, and at environmental sites in the region. The conclusion is that our approach can be used to determine baseline radiation levels, and provide a benchmark for rehabilitation of uranium mines or industrial sites where historical airborne gamma survey data are available and an undisturbed radiological analogue exists to groundtruth the data. © 2013.
White, Paul A; Johnson, George E
2016-05-01
Applied genetic toxicology is undergoing a transition from qualitative hazard identification to quantitative dose-response analysis and risk assessment. To facilitate this change, the Health and Environmental Sciences Institute (HESI) Genetic Toxicology Technical Committee (GTTC) sponsored a workshop held in Lancaster, UK on July 10-11, 2014. The event included invited speakers from several institutions and the contents was divided into three themes-1: Point-of-departure Metrics for Quantitative Dose-Response Analysis in Genetic Toxicology; 2: Measurement and Estimation of Exposures for Better Extrapolation to Humans and 3: The Use of Quantitative Approaches in Genetic Toxicology for human health risk assessment (HHRA). A host of pertinent issues were discussed relating to the use of in vitro and in vivo dose-response data, the development of methods for in vitro to in vivo extrapolation and approaches to use in vivo dose-response data to determine human exposure limits for regulatory evaluations and decision-making. This Special Issue, which was inspired by the workshop, contains a series of papers that collectively address topics related to the aforementioned themes. The Issue includes contributions that collectively evaluate, describe and discuss in silico, in vitro, in vivo and statistical approaches that are facilitating the shift from qualitative hazard evaluation to quantitative risk assessment. The use and application of the benchmark dose approach was a central theme in many of the workshop presentations and discussions, and the Special Issue includes several contributions that outline novel applications for the analysis and interpretation of genetic toxicity data. Although the contents of the Special Issue constitutes an important step towards the adoption of quantitative methods for regulatory assessment of genetic toxicity, formal acceptance of quantitative methods for HHRA and regulatory decision-making will require consensus regarding the relationships between genetic damage and disease, and the concomitant ability to use genetic toxicity results per se. © Her Majesty the Queen in Right of Canada 2016. Reproduced with the permission of the Minister of Health.
Kirman, C. R.; Gargas, M. L.; Collins, J. J.; Rowlands, J. C.
2012-01-01
A screening-level risk assessment was conducted for styrene-acrylonitrile (SAN) Trimer detected at the Reich Farm Superfund site in Toms River, NJ. Consistent with a screening-level approach, on-site and off-site exposure scenarios were evaluated using assumptions that are expected to overestimate actual exposures and hazards at the site. Environmental sampling data collected for soil and groundwater were used to estimate exposure point concentrations. Several exposure scenarios were evaluated to assess potential on-site and off-site exposures, using parameter values for exposures to soil (oral, inhalation of particulates, and dermal contact) and groundwater (oral, dermal contact) to reflect central tendency exposure (CTE) and reasonable maximum exposure (RME) conditions. Three reference dose (RfD) values were derived for SAN Trimer for short-term, subchronic, and chronic exposures, based upon its effects on the liver in exposed rats. Benchmark (BMD) methods were used to assess the relationship between exposure and response, and to characterize appropriate points of departure (POD) for each RfD. An uncertainty factor of 300 was applied to each POD to yield RfD values of 0.1, 0.04, and 0.03 mg/kg-d for short-term, subchronic, and chronic exposures, respectively. Because a chronic cancer bioassay for SAN Trimer in rats (NTP 2011a) does not provide evidence of carcinogenicity, a cancer risk assessment is not appropriate for this chemical. Potential health hazards to human health were assessed using a hazard index (HI) approach, which considers the ratio of exposure dose (i.e., average daily dose, mg/kg-d) to toxicity dose (RfD, mg/kg-d) for each scenario. All CTE and RME HI values are well below 1 (where the average daily dose is equivalent to the RfD), indicating that there is no concern for potential noncancer effects in exposed populations even under the conservative assumptions of this screening-level assessment. PMID:23030654
Precise Ages for the Benchmark Brown Dwarfs HD 19467 B and HD 4747 B
NASA Astrophysics Data System (ADS)
Wood, Charlotte; Boyajian, Tabetha; Crepp, Justin; von Braun, Kaspar; Brewer, John; Schaefer, Gail; Adams, Arthur; White, Tim
2018-01-01
Large uncertainty in the age of brown dwarfs, stemming from a mass-age degeneracy, makes it difficult to constrain substellar evolutionary models. To break the degeneracy, we need ''benchmark" brown dwarfs (found in binary systems) whose ages can be determined independent of their masses. HD~19467~B and HD~4747~B are two benchmark brown dwarfs detected through the TRENDS (TaRgeting bENchmark objects with Doppler Spectroscopy) high-contrast imaging program for which we have dynamical mass measurements. To constrain their ages independently through isochronal analysis, we measured the radii of the host stars with interferometry using the Center for High Angular Resolution Astronomy (CHARA) Array. Assuming the brown dwarfs have the same ages as their host stars, we use these results to distinguish between several substellar evolutionary models. In this poster, we present new age estimates for HD~19467 and HD~4747 that are more accurate and precise and show our preliminary comparisons to cooling models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, Timothy P.; Martz, Roger L.; Kiedrowski, Brian C.
New unstructured mesh capabilities in MCNP6 (developmental version during summer 2012) show potential for conducting multi-physics analyses by coupling MCNP to a finite element solver such as Abaqus/CAE[2]. Before these new capabilities can be utilized, the ability of MCNP to accurately estimate eigenvalues and pin powers using an unstructured mesh must first be verified. Previous work to verify the unstructured mesh capabilities in MCNP was accomplished using the Godiva sphere [1], and this work attempts to build on that. To accomplish this, a criticality benchmark and a fuel assembly benchmark were used for calculations in MCNP using both the Constructivemore » Solid Geometry (CSG) native to MCNP and the unstructured mesh geometry generated using Abaqus/CAE. The Big Ten criticality benchmark [3] was modeled due to its geometry being similar to that of a reactor fuel pin. The C5G7 3-D Mixed Oxide (MOX) Fuel Assembly Benchmark [4] was modeled to test the unstructured mesh capabilities on a reactor-type problem.« less
Lee, Eunsol; Goo, Hyun Woo; Lee, Jae-Yeong
2015-08-01
It is necessary to develop a mechanism to estimate and analyze cumulative radiation risks from multiple CT exams in various clinical scenarios in children. To identify major contributors to high cumulative CT dose estimates using actual dose-length product values collected for 5 years in children. Between August 2006 and July 2011 we reviewed 26,937 CT exams in 13,803 children. Among them, we included 931 children (median age 3.5 years, age range 0 days-15 years; M:F = 533:398) who had 5,339 CT exams. Each child underwent at least three CT scans and had accessible radiation dose reports. Dose-length product values were automatically extracted from DICOM files and we used recently updated conversion factors for age, gender, anatomical region and tube voltage to estimate CT radiation dose. We tracked the calculated CT dose estimates to obtain a 5-year cumulative value for each child. The study population was divided into three groups according to the cumulative CT dose estimates: high, ≥30 mSv; moderate, 10-30 mSv; and low, <10 mSv. We reviewed clinical data and CT protocols to identify major contributors to high and moderate cumulative CT dose estimates. Median cumulative CT dose estimate was 5.4 mSv (range 0.5-71.1 mSv), and median number of CT scans was 4 (range 3-36). High cumulative CT dose estimates were most common in children with malignant tumors (57.9%, 11/19). High frequency of CT scans was attributed to high cumulative CT dose estimates in children with ventriculoperitoneal shunt (35 in 1 child) and malignant tumors (range 18-49). Moreover, high-dose CT protocols, such as multiphase abdomen CT (median 4.7 mSv) contributed to high cumulative CT dose estimates even in children with a low number of CT scans. Disease group, number of CT scans, and high-dose CT protocols are major contributors to higher cumulative CT dose estimates in children.
Estimating organ doses from tube current modulated CT examinations using a generalized linear model.
Bostani, Maryam; McMillan, Kyle; Lu, Peiyun; Kim, Grace Hyun J; Cody, Dianna; Arbique, Gary; Greenberg, S Bruce; DeMarco, John J; Cagnon, Chris H; McNitt-Gray, Michael F
2017-04-01
Currently, available Computed Tomography dose metrics are mostly based on fixed tube current Monte Carlo (MC) simulations and/or physical measurements such as the size specific dose estimate (SSDE). In addition to not being able to account for Tube Current Modulation (TCM), these dose metrics do not represent actual patient dose. The purpose of this study was to generate and evaluate a dose estimation model based on the Generalized Linear Model (GLM), which extends the ability to estimate organ dose from tube current modulated examinations by incorporating regional descriptors of patient size, scanner output, and other scan-specific variables as needed. The collection of a total of 332 patient CT scans at four different institutions was approved by each institution's IRB and used to generate and test organ dose estimation models. The patient population consisted of pediatric and adult patients and included thoracic and abdomen/pelvis scans. The scans were performed on three different CT scanner systems. Manual segmentation of organs, depending on the examined anatomy, was performed on each patient's image series. In addition to the collected images, detailed TCM data were collected for all patients scanned on Siemens CT scanners, while for all GE and Toshiba patients, data representing z-axis-only TCM, extracted from the DICOM header of the images, were used for TCM simulations. A validated MC dosimetry package was used to perform detailed simulation of CT examinations on all 332 patient models to estimate dose to each segmented organ (lungs, breasts, liver, spleen, and kidneys), denoted as reference organ dose values. Approximately 60% of the data were used to train a dose estimation model, while the remaining 40% was used to evaluate performance. Two different methodologies were explored using GLM to generate a dose estimation model: (a) using the conventional exponential relationship between normalized organ dose and size with regional water equivalent diameter (WED) and regional CTDI vol as variables and (b) using the same exponential relationship with the addition of categorical variables such as scanner model and organ to provide a more complete estimate of factors that may affect organ dose. Finally, estimates from generated models were compared to those obtained from SSDE and ImPACT. The Generalized Linear Model yielded organ dose estimates that were significantly closer to the MC reference organ dose values than were organ doses estimated via SSDE or ImPACT. Moreover, the GLM estimates were better than those of SSDE or ImPACT irrespective of whether or not categorical variables were used in the model. While the improvement associated with a categorical variable was substantial in estimating breast dose, the improvement was minor for other organs. The GLM approach extends the current CT dose estimation methods by allowing the use of additional variables to more accurately estimate organ dose from TCM scans. Thus, this approach may be able to overcome the limitations of current CT dose metrics to provide more accurate estimates of patient dose, in particular, dose to organs with considerable variability across the population. © 2017 American Association of Physicists in Medicine.
Development of an agricultural job-exposure matrix for British Columbia, Canada.
Wood, David; Astrakianakis, George; Lang, Barbara; Le, Nhu; Bert, Joel
2002-09-01
Farmers in British Columbia (BC), Canada have been shown to have unexplained elevated proportional mortality rates for several cancers. Because agricultural exposures have never been documented systematically in BC, a quantitative agricultural Job-exposure matrix (JEM) was developed containing exposure assessments from 1950 to 1998. This JEM was developed to document historical exposures and to facilitate future epidemiological studies. Available information regarding BC farming practices was compiled and checklists of potential exposures were produced for each crop. Exposures identified included chemical, biological, and physical agents. Interviews with farmers and agricultural experts were conducted using the checklists as a starting point. This allowed the creation of an initial or 'potential' JEM based on three axes: exposure agent, 'type of work' and time. The 'type of work' axis was determined by combining several variables: region, crop, job title and task. This allowed for a complete description of exposures. Exposure assessments were made quantitatively, where data allowed, or by a dichotomous variable (exposed/unexposed). Quantitative calculations were divided into re-entry and application scenarios. 'Re-entry' exposures were quantified using a standard exposure model with some modification while application exposure estimates were derived using data from the North American Pesticide Handlers Exposure Database (PHED). As expected, exposures differed between crops and job titles both quantitatively and qualitatively. Of the 290 agents included in the exposure axis; 180 were pesticides. Over 3000 estimates of exposure were conducted; 50% of these were quantitative. Each quantitative estimate was at the daily absorbed dose level. Exposure estimates were then rated as high, medium, or low based on comparing them with their respective oral chemical reference dose (RfD) or Acceptable Daily Intake (ADI). This data was mainly obtained from the US Environmental Protection Agency (EPA) Integrated Risk Information System database. Of the quantitative estimates, 74% were rated as low (< 100%) and only 10% were rated as high (>500%). The JEM resulting from this study fills a void concerning exposures for BC farmers and farm workers. While only limited validation of assessments were possible, this JEM can serve as a benchmark for future studies. Preliminary analysis at the BC Cancer Agency (BCCA) using the JEM with prostate cancer records from a large cancer and occupation study/survey has already shown promising results. Development of this JEM provides a useful model for developing historical quantitative exposure estimates where is very little documented information available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moirano, J
Purpose: An accurate dose estimate is necessary for effective patient management after a fetal exposure. In the case of a high-dose exposure, it is critical to use all resources available in order to make the most accurate assessment of the fetal dose. This work will demonstrate a methodology for accurate fetal dose estimation using tools that have recently become available in many clinics, and show examples of best practices for collecting data and performing the fetal dose calculation. Methods: A fetal dose estimate calculation was performed using modern data collection tools to determine parameters for the calculation. The reference pointmore » air kerma as displayed by the fluoroscopic system was checked for accuracy. A cumulative dose incidence map and DICOM header mining were used to determine the displayed reference point air kerma. Corrections for attenuation caused by the patient table and pad were measured and applied in order to determine the peak skin dose. The position and depth of the fetus was determined by ultrasound imaging and consultation with a radiologist. The data collected was used to determine a normalized uterus dose from Monte Carlo simulation data. Fetal dose values from this process were compared to other accepted calculation methods. Results: An accurate high-dose fetal dose estimate was made. Comparison to accepted legacy methods were were within 35% of estimated values. Conclusion: Modern data collection and reporting methods ease the process for estimation of fetal dose from interventional fluoroscopy exposures. Many aspects of the calculation can now be quantified rather than estimated, which should allow for a more accurate estimation of fetal dose.« less
Children's Lead Exposure: A Multimedia Modeling Analysis to Guide Public Health Decision-Making.
Zartarian, Valerie; Xue, Jianping; Tornero-Velez, Rogelio; Brown, James
2017-09-12
Drinking water and other sources for lead are the subject of public health concerns around the Flint, Michigan, drinking water and East Chicago, Indiana, lead in soil crises. In 2015, the U.S. Environmental Protection Agency (EPA)'s National Drinking Water Advisory Council (NDWAC) recommended establishment of a "health-based, household action level" for lead in drinking water based on children's exposure. The primary objective was to develop a coupled exposure-dose modeling approach that can be used to determine what drinking water lead concentrations keep children's blood lead levels (BLLs) below specified values, considering exposures from water, soil, dust, food, and air. Related objectives were to evaluate the coupled model estimates using real-world blood lead data, to quantify relative contributions by the various media, and to identify key model inputs. A modeling approach using the EPA's Stochastic Human Exposure and Dose Simulation (SHEDS)-Multimedia and Integrated Exposure Uptake and Biokinetic (IEUBK) models was developed using available data. This analysis for the U.S. population of young children probabilistically simulated multimedia exposures and estimated relative contributions of media to BLLs across all population percentiles for several age groups. Modeled BLLs compared well with nationally representative BLLs (0-23% relative error). Analyses revealed relative importance of soil and dust ingestion exposure pathways and associated Pb intake rates; water ingestion was also a main pathway, especially for infants. This methodology advances scientific understanding of the relationship between lead concentrations in drinking water and BLLs in children. It can guide national health-based benchmarks for lead and related community public health decisions. https://doi.org/10.1289/EHP1605.
Children’s Lead Exposure: A Multimedia Modeling Analysis to Guide Public Health Decision-Making
Xue, Jianping; Tornero-Velez, Rogelio; Brown, James
2017-01-01
Background: Drinking water and other sources for lead are the subject of public health concerns around the Flint, Michigan, drinking water and East Chicago, Indiana, lead in soil crises. In 2015, the U.S. Environmental Protection Agency (EPA)’s National Drinking Water Advisory Council (NDWAC) recommended establishment of a “health-based, household action level” for lead in drinking water based on children’s exposure. Objectives: The primary objective was to develop a coupled exposure–dose modeling approach that can be used to determine what drinking water lead concentrations keep children’s blood lead levels (BLLs) below specified values, considering exposures from water, soil, dust, food, and air. Related objectives were to evaluate the coupled model estimates using real-world blood lead data, to quantify relative contributions by the various media, and to identify key model inputs. Methods: A modeling approach using the EPA’s Stochastic Human Exposure and Dose Simulation (SHEDS)-Multimedia and Integrated Exposure Uptake and Biokinetic (IEUBK) models was developed using available data. This analysis for the U.S. population of young children probabilistically simulated multimedia exposures and estimated relative contributions of media to BLLs across all population percentiles for several age groups. Results: Modeled BLLs compared well with nationally representative BLLs (0–23% relative error). Analyses revealed relative importance of soil and dust ingestion exposure pathways and associated Pb intake rates; water ingestion was also a main pathway, especially for infants. Conclusions: This methodology advances scientific understanding of the relationship between lead concentrations in drinking water and BLLs in children. It can guide national health-based benchmarks for lead and related community public health decisions. https://doi.org/10.1289/EHP1605 PMID:28934096
Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis
Ollenschläger, Malte; Roth, Nils; Klucken, Jochen
2017-01-01
Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis. PMID:28832511
Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf
2016-07-01
Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dose strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate co-substrates COD for the Benchmark Simulation Model No. 2 (BSM2) was developed. This procedure is also applicable for the Anaerobic Digestion Model No. 1 (ADM1). Long chain fatty acid inhibition was included in the ADM1 model to allow for realistic modelling of lipid rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested on bio-methane potential (BMP) tests on three substrates, each rich on carbohydrates, proteins or lipids with good predictive capability in all three cases. This model was then applied to a plant-wide simulation study which confirmed the positive effects of co-digestion on methane production and total operational cost. Simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid rich substrates without prolonged disturbances. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clewell, H.J., E-mail: hclewell@thehamner.org; Efremenko, A.; Campbell, J.L.
Male and female Fischer 344 rats were exposed to naphthalene vapors at 0 (controls), 0.1, 1, 10, and 30 ppm for 6 h/d, 5 d/wk, over a 90-day period. Following exposure, the respiratory epithelium and olfactory epithelium from the nasal cavity were dissected separately, RNA was isolated, and gene expression microarray analysis was conducted. Only a few significant gene expression changes were observed in the olfactory or respiratory epithelium of either gender at the lowest concentration (0.1 ppm). At the 1.0 ppm concentration there was limited evidence of an oxidative stress response in the respiratory epithelium, but not in themore » olfactory epithelium. In contrast, a large number of significantly enriched cellular pathway responses were observed in both tissues at the two highest concentrations (10 and 30 ppm, which correspond to tumorigenic concentrations in the NTP bioassay). The nature of these responses supports a mode of action involving oxidative stress, inflammation and proliferation. These results are consistent with a dose-dependent transition in the mode of action for naphthalene toxicity/carcinogenicity between 1.0 and 10 ppm in the rat. In the female olfactory epithelium (the gender/site with the highest incidences of neuroblastomas in the NTP bioassay), the lowest concentration at which any signaling pathway was significantly affected, as characterized by the median pathway benchmark dose (BMD) or its 95% lower bound (BMDL) was 6.0 or 3.7 ppm, respectively, while the lowest female olfactory BMD values for pathways related to glutathione homeostasis, inflammation, and proliferation were 16.1, 11.1, and 8.4 ppm, respectively. In the male respiratory epithelium (the gender/site with the highest incidences of adenomas in the NTP bioassay), the lowest pathway BMD and BMDL were 0.4 and 0.3 ppm, respectively, and the lowest male respiratory BMD values for pathways related to glutathione homeostasis, inflammation, and proliferation were 0.5, 0.7, and 0.9 ppm, respectively. Using a published physiologically based pharmacokinetic (PBPK) model to estimate target tissue dose relevant to the proposed mode of action (total naphthalene metabolism per gram nasal tissue), the lowest transcriptional BMDLs from this analysis equate to human continuous naphthalene exposure at approximately 0.3 ppm. It is unlikely that significant effects of naphthalene or its metabolites will occur at exposures below this concentration. - Highlights: • We investigated mode of action for carcinogenicity of inhaled naphthalene in rats. • Gene expression changes were measured in rat nasal tissues after 90 day exposures. • Support a non-linear mode of action (oxidative stress, inflammation, and proliferation) • Suggest a dose-dependent transition in the mode of action between 1.0 and 10 ppm • Transcriptional benchmark doses could inform point of departure for risk assessment.« less
GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Blair Briggs; John D. Bess; Jim Gulliford
2011-09-01
Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated on the ICSBEP in 2007. Now, there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments1' have increased from 442 evaluations (38000 pages), containing benchmark specifications for 3955 critical ormore » subcritical configurations to 516 evaluations (nearly 55000 pages), containing benchmark specifications for 4405 critical or subcritical configurations in the 2010 Edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments2' have increased from 16 different experimental series that were performed at 12 different reactor facilities to 53 experimental series that were performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project) and verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the ICSBEP and the IRPhEP will be discussed in the full paper, selected benchmarks that have been added to the ICSBEP Handbook will be highlighted, and a preview of the new benchmarks that will appear in the September 2011 edition of the Handbook will be provided. Accomplishments of the IRPhEP will also be highlighted and the future of both projects will be discussed. REFERENCES (1) International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03/I-IX, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), September 2010 Edition, ISBN 978-92-64-99140-8. (2) International Handbook of Evaluated Reactor Physics Benchmark Experiments, NEA/NSC/DOC(2006)1, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), March 2011 Edition, ISBN 978-92-64-99141-5.« less
Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.
2015-01-01
The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysismore » that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.« less
NASA Astrophysics Data System (ADS)
Mendoza, Sergio; Rothenberger, Michael; Hake, Alison; Fathy, Hosam
2016-03-01
This article presents a framework for optimizing the thermal cycle to estimate a battery cell's entropy coefficient at 20% state of charge (SOC). Our goal is to maximize Fisher identifiability: a measure of the accuracy with which a parameter can be estimated. Existing protocols in the literature for estimating entropy coefficients demand excessive laboratory time. Identifiability optimization makes it possible to achieve comparable accuracy levels in a fraction of the time. This article demonstrates this result for a set of lithium iron phosphate (LFP) cells. We conduct a 24-h experiment to obtain benchmark measurements of their entropy coefficients. We optimize a thermal cycle to maximize parameter identifiability for these cells. This optimization proceeds with respect to the coefficients of a Fourier discretization of this thermal cycle. Finally, we compare the estimated parameters using (i) the benchmark test, (ii) the optimized protocol, and (iii) a 15-h test from the literature (by Forgez et al.). The results are encouraging for two reasons. First, they confirm the simulation-based prediction that the optimized experiment can produce accurate parameter estimates in 2 h, compared to 15-24. Second, the optimized experiment also estimates a thermal time constant representing the effects of thermal capacitance and convection heat transfer.
An open source framework for tracking and state estimation ('Stone Soup')
NASA Astrophysics Data System (ADS)
Thomas, Paul A.; Barr, Jordi; Balaji, Bhashyam; White, Kruger
2017-05-01
The ability to detect and unambiguously follow all moving entities in a state-space is important in multiple domains both in defence (e.g. air surveillance, maritime situational awareness, ground moving target indication) and the civil sphere (e.g. astronomy, biology, epidemiology, dispersion modelling). However, tracking and state estimation researchers and practitioners have difficulties recreating state-of-the-art algorithms in order to benchmark their own work. Furthermore, system developers need to assess which algorithms meet operational requirements objectively and exhaustively rather than intuitively or driven by personal favourites. We have therefore commenced the development of a collaborative initiative to create an open source framework for production, demonstration and evaluation of Tracking and State Estimation algorithms. The initiative will develop a (MIT-licensed) software platform for researchers and practitioners to test, verify and benchmark a variety of multi-sensor and multi-object state estimation algorithms. The initiative is supported by four defence laboratories, who will contribute to the development effort for the framework. The tracking and state estimation community will derive significant benefits from this work, including: access to repositories of verified and validated tracking and state estimation algorithms, a framework for the evaluation of multiple algorithms, standardisation of interfaces and access to challenging data sets. Keywords: Tracking,
Kassanjee, Reshma; De Angelis, Daniela; Farah, Marian; Hanson, Debra; Labuschagne, Jan Phillipus Lourens; Laeyendecker, Oliver; Le Vu, Stéphane; Tom, Brian; Wang, Rui; Welte, Alex
2017-03-01
The application of biomarkers for 'recent' infection in cross-sectional HIV incidence surveillance requires the estimation of critical biomarker characteristics. Various approaches have been employed for using longitudinal data to estimate the Mean Duration of Recent Infection (MDRI) - the average time in the 'recent' state. In this systematic benchmarking of MDRI estimation approaches, a simulation platform was used to measure accuracy and precision of over twenty approaches, in thirty scenarios capturing various study designs, subject behaviors and test dynamics that may be encountered in practice. Results highlight that assuming a single continuous sojourn in the 'recent' state can produce substantial bias. Simple interpolation provides useful MDRI estimates provided subjects are tested at regular intervals. Regression performs the best - while 'random effects' describe the subject-clustering in the data, regression models without random effects proved easy to implement, stable, and of similar accuracy in scenarios considered; robustness to parametric assumptions was improved by regressing 'recent'/'non-recent' classifications rather than continuous biomarker readings. All approaches were vulnerable to incorrect assumptions about subjects' (unobserved) infection times. Results provided show the relationships between MDRI estimation performance and the number of subjects, inter-visit intervals, missed visits, loss to follow-up, and aspects of biomarker signal and noise.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.; Fung, Jimmy
1998-01-01
This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data. Unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the transfer function parameters. An analysis of the behavior of the actuators over time to assess the effects of wear and aerodynamic load by using the transfer function models is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
NASA Astrophysics Data System (ADS)
Golnik, C.; Bemmerer, D.; Enghardt, W.; Fiedler, F.; Hueso-González, F.; Pausch, G.; Römer, K.; Rohling, H.; Schöne, S.; Wagner, L.; Kormoll, T.
2016-06-01
The finite range of a proton beam in tissue opens new vistas for the delivery of a highly conformal dose distribution in radiotherapy. However, the actual particle range, and therefore the accurate dose deposition, is sensitive to the tissue composition in the proton path. Range uncertainties, resulting from limited knowledge of this tissue composition or positioning errors, are accounted for in the form of safety margins. Thus, the unverified particle range constrains the principle benefit of proton therapy. Detecting prompt γ-rays, a side product of proton-tissue interaction, aims at an on-line and non-invasive monitoring of the particle range, and therefore towards exploiting the potential of proton therapy. Compton imaging of the spatial prompt γ-ray emission is a promising measurement approach. Prompt γ-rays exhibit emission energies of several MeV. Hence, common radioactive sources cannot provide the energy range a prompt γ-ray imaging device must be designed for. In this work a benchmark measurement-setup for the production of a localized, monoenergetic 4.44 MeV γ-ray source is introduced. At the Tandetron accelerator at the HZDR, the proton-capture resonance reaction 15N(p,α γ4.439)12C is utilized. This reaction provides the same nuclear de-excitation (and γ-ray emission) occurrent as an intense prompt γ-ray line in proton therapy. The emission yield is quantitatively described. A two-stage Compton imaging device, dedicated for prompt γ-ray imaging, is tested at the setup exemplarily. Besides successful imaging tests, the detection efficiency of the prototype at 4.44 MeV is derived from the measured data. Combining this efficiency with the emission yield for prompt γ-rays, the number of valid Compton events, induced by γ-rays in the energy region around 4.44 MeV, is estimated for the prototype being implemented in a therapeutic treatment scenario. As a consequence, the detection efficiency turns out to be a key parameter for prompt γ-rays Compton imaging limiting the applicability of the prototype in its current realization.
Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.
Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A
2004-02-07
The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a 30 cm diameter and 20 cm length cylinder. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst +/- 5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately +/- 2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.
Geras'kin, S A; Oudalova, A A; Dikarev, V G; Dikareva, N S; Mozolin, E M; Hinton, T; Spiridonov, S I; Copplestone, D; Garnier-Laplace, J
2012-02-01
Morphological and cytogenetic abnormalities were examined in crested hairgrass (Koeleria gracilis Pers.) populations inhabiting the Semipalatinsk nuclear test site (STS), Kazakhstan. Sampling of biological material and soil was carried out during 3 years (2005-2007) at 4 sites within the STS. Activity concentrations of 10 radionuclides and 8 heavy metals content in soils were measured. Doses absorbed by plants were estimated and varied, depending on the plot, from 4 up to 265 mGy/y. The frequency of cytogenetic alterations in apical meristem of germinated seeds from the highly contaminated plot significantly exceeded the level observed at other plots with lower levels of radioactive contamination during all three years of the study. A significant excess of chromosome aberrations, typical for radiation exposure, as well as a dependence of the frequency of these types of mutations on dose absorbed by plants were revealed. The results indicate the role radioactive contamination plays in the occurrence of cytogenetic effects. However, no radiation-dependent morphological alterations were detected in the progeny of the exposed populations. Given that the crested hairgrass populations have occupied the radioactively contaminated plots for some 50 years, adaptation to the radiation stress was not evident. The findings obtained were in agreement with the benchmark values proposed in the FASSET and ERICA projects to restrict radiation impacts on biota. Copyright © 2011 Elsevier Ltd. All rights reserved.
Christie, David; Dear, Keith; Le, Thai; Barton, Michael; Wirth, Andrew; Porter, David; Roos, Daniel; Pratt, Gary
2011-07-15
To establish benchmark outcomes for combined modality treatment to be used in future prospective studies of osteolymphoma (primary bone lymphoma). In 1999, the Trans-Tasman Radiation Oncology Group (TROG) invited the Australasian Leukemia and Lymphoma Group (ALLG) to collaborate on a prospective study of limited chemotherapy and radiotherapy for osteolymphoma. The treatment was designed to maintain efficacy but limit the risk of subsequent pathological fractures. Patient assessment included both functional imaging and isotope bone scanning. Treatment included three cycles of CHOP chemotherapy and radiation to a dose of 45 Gy in 25 fractions using a shrinking field technique. The trial closed because of slow accrual after 33 patients had been entered. Accrual was noted to slow down after Rituximab became readily available in Australia. After a median follow-up of 4.3 years, the five-year overall survival and local control rates are estimated at 90% and 72% respectively. Three patients had fractures at presentation that persisted after treatment, one with recurrent lymphoma. Relatively high rates of survival were achieved but the number of local failures suggests that the dose of radiotherapy should remain higher than it is for other types of lymphoma. Disability after treatment due to pathological fracture was not seen. Copyright © 2011 Elsevier Inc. All rights reserved.
A Review of Flood Loss Models as Basis for Harmonization and Benchmarking
Kreibich, Heidi; Franco, Guillermo; Marechal, David
2016-01-01
Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models as it affects prioritization and investment decision in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked if the models are informed by existing data and knowledge and if the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss–or flood vulnerability–relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper exemplarily presents an approach for a quantitative comparison of disparate models via the reduction to the joint input variables of all models. Harmonization of models for benchmarking and comparison requires profound insight into the model structures, mechanisms and underlying assumptions. Possibilities and challenges are discussed that exist in model harmonization and the application of the inventory in a benchmarking framework. PMID:27454604
A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.
Gerl, Tina; Kreibich, Heidi; Franco, Guillermo; Marechal, David; Schröter, Kai
2016-01-01
Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models as it affects prioritization and investment decision in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked if the models are informed by existing data and knowledge and if the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss-or flood vulnerability-relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper exemplarily presents an approach for a quantitative comparison of disparate models via the reduction to the joint input variables of all models. Harmonization of models for benchmarking and comparison requires profound insight into the model structures, mechanisms and underlying assumptions. Possibilities and challenges are discussed that exist in model harmonization and the application of the inventory in a benchmarking framework.
Formative usability evaluation of a fixed-dose pen-injector platform device
Lange, Jakob; Nemeth, Tobias
2018-01-01
Background This article for the first time presents a formative usability study of a fixed-dose pen injector platform device used for the subcutaneous delivery of biopharmaceuticals, primarily for self-administration by the patient. The study was conducted with a user population of both naïve and experienced users across a range of ages. The goals of the study were to evaluate whether users could use the devices safely and effectively relying on the instructions for use (IFU) for guidance, as well as to benchmark the device against another similar injector established in the market. Further objectives were to capture any usability issues and obtain participants’ subjective ratings on the properties and performance of both devices. Methods A total of 20 participants in three groups studied the IFU and performed simulated injections into an injection pad. Results All participants were able to use the device successfully. The device was well appreciated by all users with, maximum usability feedback scores reported by 90% or more on handling forces and device feedback, and by 85% or more on fit and grip of the device. The presence of clear audible and visible feedbacks upon successful loading of a dose and completion of injection was seen to be a significant improvement over the benchmark injector. Conclusion The observation that the platform device can be safely and efficiently used by all user groups provides confidence that the device and IFU in their current form will pass future summative testing in specific applications. PMID:29670411
Estimation of the Dose and Dose Rate Effectiveness Factor
NASA Technical Reports Server (NTRS)
Chappell, L.; Cucinotta, F. A.
2013-01-01
Current models to estimate radiation risk use the Life Span Study (LSS) cohort that received high doses and high dose rates of radiation. Transferring risks from these high dose rates to the low doses and dose rates received by astronauts in space is a source of uncertainty in our risk calculations. The solid cancer models recommended by BEIR VII [1], UNSCEAR [2], and Preston et al [3] is fitted adequately by a linear dose response model, which implies that low doses and dose rates would be estimated the same as high doses and dose rates. However animal and cell experiments imply there should be curvature in the dose response curve for tumor induction. Furthermore animal experiments that directly compare acute to chronic exposures show lower increases in tumor induction than acute exposures. A dose and dose rate effectiveness factor (DDREF) has been estimated and applied to transfer risks from the high doses and dose rates of the LSS cohort to low doses and dose rates such as from missions in space. The BEIR VII committee [1] combined DDREF estimates using the LSS cohort and animal experiments using Bayesian methods for their recommendation for a DDREF value of 1.5 with uncertainty. We reexamined the animal data considered by BEIR VII and included more animal data and human chromosome aberration data to improve the estimate for DDREF. Several experiments chosen by BEIR VII were deemed inappropriate for application to human risk models of solid cancer risk. Animal tumor experiments performed by Ullrich et al [4], Alpen et al [5], and Grahn et al [6] were analyzed to estimate the DDREF. Human chromosome aberration experiments performed on a sample of astronauts within NASA were also available to estimate the DDREF. The LSS cohort results reported by BEIR VII were combined with the new radiobiology results using Bayesian methods.
Acoustic Source Bearing Estimation (ASBE) computer program development
NASA Technical Reports Server (NTRS)
Wiese, Michael R.
1987-01-01
A new bearing estimation algorithm (Acoustic Source Analysis Technique - ASAT) and an acoustic analysis computer program (Acoustic Source Bearing Estimation - ASBE) are described, which were developed by Computer Sciences Corporation for NASA Langley Research Center. The ASBE program is used by the Acoustics Division/Applied Acoustics Branch and the Instrument Research Division/Electro-Mechanical Instrumentation Branch to analyze acoustic data and estimate the azimuths from which the source signals radiated. Included are the input and output from a benchmark test case.
Blanck, Oliver; Wang, Lei; Baus, Wolfgang; Grimm, Jimm; Lacornerie, Thomas; Nilsson, Joakim; Luchkovskyi, Sergii; Cano, Isabel Palazon; Shou, Zhenyu; Ayadi, Myriam; Treuer, Harald; Viard, Romain; Siebert, Frank-Andre; Chan, Mark K H; Hildebrandt, Guido; Dunst, Jürgen; Imhoff, Detlef; Wurster, Stefan; Wolff, Robert; Romanelli, Pantaleo; Lartigau, Eric; Semrau, Robert; Soltys, Scott G; Schweikard, Achim
2016-05-08
Stereotactic radiosurgery (SRS) is the accurate, conformal delivery of high-dose radiation to well-defined targets while minimizing normal structure doses via steep dose gradients. While inverse treatment planning (ITP) with computerized optimization algorithms are routine, many aspects of the planning process remain user-dependent. We performed an international, multi-institutional benchmark trial to study planning variability and to analyze preferable ITP practice for spinal robotic radiosurgery. 10 SRS treatment plans were generated for a complex-shaped spinal metastasis with 21 Gy in 3 fractions and tight constraints for spinal cord (V14Gy < 2 cc, V18Gy < 0.1 cc) and target (coverage > 95%). The resulting plans were rated on a scale from 1 to 4 (excellent-poor) in five categories (constraint compliance, optimization goals, low-dose regions, ITP complexity, and clinical acceptability) by a blinded review panel. Additionally, the plans were mathemati-cally rated based on plan indices (critical structure and target doses, conformity, monitor units, normal tissue complication probability, and treatment time) and compared to the human rankings. The treatment plans and the reviewers' rankings varied substantially among the participating centers. The average mean overall rank was 2.4 (1.2-4.0) and 8/10 plans were rated excellent in at least one category by at least one reviewer. The mathematical rankings agreed with the mean overall human rankings in 9/10 cases pointing toward the possibility for sole mathematical plan quality comparison. The final rankings revealed that a plan with a well-balanced trade-off among all planning objectives was preferred for treatment by most par-ticipants, reviewers, and the mathematical ranking system. Furthermore, this plan was generated with simple planning techniques. Our multi-institutional planning study found wide variability in ITP approaches for spinal robotic radiosurgery. The participants', reviewers', and mathematical match on preferable treatment plans and ITP techniques indicate that agreement on treatment planning and plan quality can be reached for spinal robotic radiosurgery.
Pecquet, Alison M; Martinez, Jeanelle M; Vincent, Melissa; Erraguntla, Neeraja; Dourson, Michael
2018-06-01
A no-significant-risk-level of 20 mg day -1 was derived for tetrabromobisphenol A (TBBPA). Uterine tumors (adenomas, adenocarcinomas, and malignant mixed Müllerian) observed in female Wistar Han rats from a National Toxicology Program 2-year cancer bioassay were identified as the critical effect. Studies suggest that TBBPA is acting through a non-mutagenic mode of action. Thus, the most appropriate approach to derivation of a cancer risk value based on US Environmental Protection Agency guidelines is a threshold approach, akin to a cancer safe dose (RfD cancer ). Using the National Toxicology Program data, we utilized Benchmark dose software to derive a benchmark dose lower limit (BMDL 10 ) as the point of departure (POD) of 103 mg kg -1 day -1 . The POD was adjusted to a human equivalent dose of 25.6 mg kg -1 day -1 using allometric scaling. We applied a composite adjustment factor of 100 to the POD to derive an RfD cancer of 0.26 mg kg -1 day -1 . Based on a human body weight of 70 kg, the RfD cancer was adjusted to a no-significant-risk-level of 20 mg day -1 . This was compared to other available non-cancer and cancer risk values, and aligns well with our understanding of the underlying biology based on the toxicology data. Overall, the weight of evidence from animal studies indicates that TBBPA has low toxicity and suggests that high doses over long exposure durations are needed to induce uterine tumor formation. Future research needs include a thorough and detailed vetting of the proposed adverse outcome pathway, including further support for key events leading to uterine tumor formation and a quantitative weight of evidence analysis. Copyright © 2018 John Wiley & Sons, Ltd.
SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, S; Mehta, V
Purpose: The “SMART” (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25Gy over 5 days followed by surgical resection within 7. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Due to the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses were monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics or PQMs™ were created to calculate an objective scoringmore » function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan Quality metrics utilized in scoring included doses to OAR and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, these will be easy to investigate and incorporate into the metrics. This will improve the safe delivery of large doses for these patients.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Y; Lacroix, F; Lavallee, M
Purpose: To evaluate the commercially released Collapsed Cone convolution-based(CCC) dose calculation module of the Elekta OncentraBrachy(OcB) treatment planning system(TPS). Methods: An allwater phantom was used to perform TG43 benchmarks with single source and seventeen sources, separately. Furthermore, four real-patient heterogeneous geometries (chestwall, lung, breast and prostate) were used. They were selected based on their clinical representativity of a class of clinical anatomies that pose clear challenges. The plans were used as is(no modification). For each case, TG43 and CCC calculations were performed in the OcB TPS, with TG186-recommended materials properly assigned to ROIs. For comparison, Monte Carlo simulation was runmore » for each case with the same material scheme and grid mesh as TPS calculations. Both modes of CCC (standard and high quality) were tested. Results: For the benchmark case, the CCC dose, when divided by that of TG43, yields hot-n-cold spots in a radial pattern. The pattern of the high mode is denser than that of the standard mode and is representative of angular dicretization. The total deviation ((hot-cold)/TG43) is 18% for standard mode and 11% for high mode. Seventeen dwell positions help to reduce “ray-effect”, with the total deviation to 6% (standard) and 5% (high), respectively. For the four patient cases, CCC produces, as expected, more realistic dose distributions than TG43. A close agreement was observed between CCC and MC for all isodose lines, from 20% and up; the 10% isodose line of CCC appears shifted compared to that of MC. The DVH plots show dose deviations of CCC from MC in small volume, high dose regions (>100% isodose). For patient cases, the difference between standard and high modes is almost undiscernable. Conclusion: OncentraBrachy CCC algorithm marks a significant dosimetry improvement relative to TG43 in real-patient cases. Further researches are recommended regarding the clinical implications of the above observations. Support provided by a CIHR grant and CCC system provided by Elekta-Nucletron.« less
NASA Astrophysics Data System (ADS)
Angel, Erin
Advances in Computed Tomography (CT) technology have led to an increase in the modality's diagnostic capabilities and therefore its utilization, which has in turn led to an increase in radiation exposure to the patient population. As a result, CT imaging currently constitutes approximately half of the collective exposure to ionizing radiation from medical procedures. In order to understand the radiation risk, it is necessary to estimate the radiation doses absorbed by patients undergoing CT imaging. The most widely accepted risk models are based on radiosensitive organ dose as opposed to whole body dose. In this research, radiosensitive organ dose was estimated using Monte Carlo based simulations incorporating detailed multidetector CT (MDCT) scanner models, specific scan protocols, and using patient models based on accurate patient anatomy and representing a range of patient sizes. Organ dose estimates were estimated for clinical MDCT exam protocols which pose a specific concern for radiosensitive organs or regions. These dose estimates include estimation of fetal dose for pregnant patients undergoing abdomen pelvis CT exams or undergoing exams to diagnose pulmonary embolism and venous thromboembolism. Breast and lung dose were estimated for patients undergoing coronary CTA imaging, conventional fixed tube current chest CT, and conventional tube current modulated (TCM) chest CT exams. The correlation of organ dose with patient size was quantified for pregnant patients undergoing abdomen/pelvis exams and for all breast and lung dose estimates presented. Novel dose reduction techniques were developed that incorporate organ location and are specifically designed to reduce close to radiosensitive organs during CT acquisition. A generalizable model was created for simulating conventional and novel attenuation-based TCM algorithms which can be used in simulations estimating organ dose for any patient model. The generalizable model is a significant contribution of this work as it lays the foundation for the future of simulating TCM using Monte Carlo methods. As a result of this research organ dose can be estimated for individual patients undergoing specific conventional MDCT exams. This research also brings understanding to conventional and novel close reduction techniques in CT and their effect on organ dose.
Lambrecht, Marie; Melidis, Christos; Sonke, Jan-Jakob; Adebahr, Sonja; Boellaard, Ronald; Verheij, Marcel; Guckenberger, Matthias; Nestle, Ursula; Hurkmans, Coen
2016-01-20
The EORTC has launched a phase II trial to assess safety and efficacy of SBRT for centrally located NSCLC: The EORTC 22113-08113-Lungtech trial. Due to neighbouring critical structures, these tumours remain challenging to treat. To guarantee accordance to protocol and treatment safety, an RTQA procedure has been implemented within the frame of the EORTC RTQA levels. These levels are here expanded to include innovative tools beyond protocol compliance verification: the actual dose delivered to each patient will be estimated and linked to trial outcomes to enable better understanding of dose related response and toxicity. For trial participation, institutions must provide a completed facility questionnaire and beam output audit results. To insure ability to comply with protocol specifications a benchmark case is sent to all centres. After approval, institutions are allowed to recruit patients. Nonetheless, each treatment plan will be prospectively reviewed insuring trial compliance consistency over time. As new features, patient's CBCT images and applied positioning corrections will be saved for dose recalculation on patient's daily geometry. To assess RTQA along the treatment chain, institutions will be visited once during the time of the trial. Over the course of this visit, end-to-end tests will be performed using the 008ACIRS-breathing platform with two separate bodies. The first body carries EBT3 films and an ionization chamber. The other body newly developed for PET- CT evaluation is fillable with a solution of high activity. 3D or 4D PET-CT and 4D-CT scanning techniques will be evaluated to assess the impact of motion artefacts on target volume accuracy. Finally, a dosimetric evaluation in static and dynamic conditions will be performed. Previous data on mediastinal toxicity are scarce and source of cautiousness for setting-up SBRT treatments for centrally located NSCLC. Thanks to the combination of documented patient related outcomes and CBCT based dose recalculation we expect to provide improved models for dose response and dose related toxicity. We have developed a comprehensive RTQA model for trials involving modern radiotherapy. These procedures could also serve as examples of extended RTQA for future radiotherapy trials involving quantitative use of PET and tumour motion.
Levelized cost of energy for a Backward Bent Duct Buoy
Bull, Diana; Jenne, D. Scott; Smith, Christopher S.; ...
2016-07-18
The Reference Model Project, supported by the U.S. Department of Energy, was developed to provide publicly available technical and economic benchmarks for a variety of marine energy converters. The methodology to achieve these benchmarks is to develop public domain designs that incorporate power performance estimates, structural models, anchor and mooring designs, power conversion chain designs, and estimates of the operations and maintenance, installation, and environmental permitting required. The reference model designs are intended to be conservative, robust, and experimentally verified. The Backward Bent Duct Buoy (BBDB) presented in this paper is one of three wave energy conversion devices studied withinmore » the Reference Model Project. Furthermore, comprehensive modeling of the BBDB in a Northern California climate has enabled a full levelized cost of energy (LCOE) analysis to be completed on this device.« less
Levelized cost of energy for a Backward Bent Duct Buoy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bull, Diana; Jenne, D. Scott; Smith, Christopher S.
2016-12-01
The Reference Model Project, supported by the U.S. Department of Energy, was developed to provide publically available technical and economic benchmarks for a variety of marine energy converters. The methodology to achieve these benchmarks is to develop public domain designs that incorporate power performance estimates, structural models, anchor and mooring designs, power conversion chain designs, and estimates of the operations and maintenance, installation, and environmental permitting required. The reference model designs are intended to be conservative, robust, and experimentally verified. The Backward Bent Duct Buoy (BBDB) presented in this paper is one of three wave energy conversion devices studied withinmore » the Reference Model Project. Comprehensive modeling of the BBDB in a Northern California climate has enabled a full levelized cost of energy (LCOE) analysis to be completed on this device.« less
BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.
Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R
2015-02-20
Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
Wildlife toxicity extrapolations: NOAEL versus LOAEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fairbrother, A.; Berg, M. van den
1995-12-31
Ecotoxicological assessments must rely on the extrapolation of toxicity data from a few indicator species to many species of concern. Data are available from laboratory studies (e.g., quail, mallards, rainbow trout, fathead minnow) and some planned or serendipitous field studies of a broader, but by no means comprehensive, suite of species. Yet all ecological risk assessments begin with an estimate of risk based on information gleaned from the literature. One is then confronted with the necessity of extrapolating toxicity information from a limited number of indicator species to all organisms of interest. This is a particularly acute problem when tryingmore » to estimate hazards to wildlife in terrestrial systems as there is an extreme paucity of data for most chemicals in all but a handful of species. This section continues the debate by six panelists of the ``correct`` approach for determining wildlife toxicity thresholds by debating which toxicity value should be used for setting threshold criteria. Should the lowest observable effect level (LOAEL) be used or is it more appropriate to use the no observable effect level (NOAEL)? What are the short-comings of using either of these point estimates? Should a ``benchmark`` approach, similar to that proposed for human health risk assessments, be used instead, where an EC{sub 5} or EC{sub 10} and associated confidence limits are determined and then divided by a safety factor? How should knowledge of the slope of the dose-response curve be incorporated into determination of toxicity threshold values?« less
Pencil-beam redefinition algorithm dose calculations for electron therapy treatment planning
NASA Astrophysics Data System (ADS)
Boyd, Robert Arthur
2001-08-01
The electron pencil-beam redefinition algorithm (PBRA) of Shiu and Hogstrom has been developed for use in radiotherapy treatment planning (RTP). Earlier studies of Boyd and Hogstrom showed that the PBRA lacked an adequate incident beam model, that PBRA might require improved electron physics, and that no data existed which allowed adequate assessment of the PBRA-calculated dose accuracy in a heterogeneous medium such as one presented by patient anatomy. The hypothesis of this research was that by addressing the above issues the PBRA-calculated dose would be accurate to within 4% or 2 mm in regions of high dose gradients. A secondary electron source was added to the PBRA to account for collimation-scattered electrons in the incident beam. Parameters of the dual-source model were determined from a minimal data set to allow ease of beam commissioning. Comparisons with measured data showed 3% or better dose accuracy in water within the field for cases where 4% accuracy was not previously achievable. A measured data set was developed that allowed an evaluation of PBRA in regions distal to localized heterogeneities. Geometries in the data set included irregular surfaces and high- and low-density internal heterogeneities. The data was estimated to have 1% precision and 2% agreement with accurate, benchmarked Monte Carlo (MC) code. PBRA electron transport was enhanced by modeling local pencil beam divergence. This required fundamental changes to the mathematics of electron transport (divPBRA). Evaluation of divPBRA with the measured data set showed marginal improvement in dose accuracy when compared to PBRA; however, 4% or 2mm accuracy was not achieved by either PBRA version for all data points. Finally, PBRA was evaluated clinically by comparing PBRA- and MC-calculated dose distributions using site-specific patient RTP data. Results show PBRA did not agree with MC to within 4% or 2mm in a small fraction (<3%) of the irradiated volume. Although the hypothesis of the research was shown to be false, the minor dose inaccuracies should have little or no impact on RTP decisions or patient outcome. Therefore, given ease of beam commissioning, documentation of accuracy, and calculational speed, the PBRA should be considered a practical tool for clinical use.
Diller, Thomas; Kelly, J William; Blackhurst, Dawn; Steed, Connie; Boeker, Sue; McElveen, Danielle C
2014-06-01
We previously published a formula to estimate the number of hand hygiene opportunities (HHOs) per patient-day using the World Health Organization's "Five Moments for Hand Hygiene" methodology (HOW2 Benchmark Study). HHOs can be used as a denominator for calculating hand hygiene compliance rates when product utilization data are available. This study validates the previously derived HHO estimate using 24-hour video surveillance of health care worker hand hygiene activity. The validation study utilized 24-hour video surveillance recordings of 26 patients' hospital stays to measure the actual number of HHOs per patient-day on a medicine ward in a large teaching hospital. Statistical methods were used to compare these results to those obtained by episodic observation of patient activity in the original derivation study. Total hours of data collection were 81.3 and 1,510.8, resulting in 1,740 and 4,522 HHOs in the derivation and validation studies, respectively. Comparisons of the mean and median HHOs per 24-hour period did not differ significantly. HHOs were 71.6 (95% confidence interval: 64.9-78.3) and 73.9 (95% confidence interval: 69.1-84.1), respectively. This study validates the HOW2 Benchmark Study and confirms that expected numbers of HHOs can be estimated from the unit's patient census and patient-to-nurse ratio. These data can be used as denominators in calculations of hand hygiene compliance rates from electronic monitoring using the "Five Moments for Hand Hygiene" methodology. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
Creation of problem-dependent Doppler-broadened cross sections in the KENO Monte Carlo code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Shane W. D.; Celik, Cihangir; Maldonado, G. Ivan
2015-11-06
In this paper, we introduce a quick method for improving the accuracy of Monte Carlo simulations by generating one- and two-dimensional cross sections at a user-defined temperature before performing transport calculations. A finite difference method is used to Doppler-broaden cross sections to the desired temperature, and unit-base interpolation is done to generate the probability distributions for double differential two-dimensional thermal moderator cross sections at any arbitrarily user-defined temperature. The accuracy of these methods is tested using a variety of contrived problems. In addition, various benchmarks at elevated temperatures are modeled, and results are compared with benchmark results. Lastly, the problem-dependentmore » cross sections are observed to produce eigenvalue estimates that are closer to the benchmark results than those without the problem-dependent cross sections.« less
Development of a silicon diode detector for skin dosimetry in radiotherapy.
Vicoroski, Nikolina; Espinoza, Anthony; Duncan, Mitchell; Oborn, Bradley M; Carolan, Martin; Metcalfe, Peter; Menichelli, David; Perevertaylo, Vladimir L; Lerch, Michael L F; Rosenfeld, Anatoly B; Petasecca, Marco
2017-10-01
The aim of in vivo skin dosimetry was to measure the absorbed dose to the skin during radiotherapy, when treatment planning calculations cannot be relied on. It is of particularly importance in hypo-fractionated stereotactic modalities, where excessive dose can lead to severe skin toxicity. Currently, commercial diodes for such applications are with water equivalent depths ranging from 0.5 to 0.8 mm. In this study, we investigate a new detector for skin dosimetry based on a silicon epitaxial diode, referred to as the skin diode. The skin diode is manufactured on a thin epitaxial layer and packaged using the "drop-in" technology. It was characterized in terms of percentage depth dose, dose linearity, and dose rate dependence, and benchmarked against the Attix ionization chamber. The response of the skin diode in the build-up region of the percentage depth dose (PDD) curve of a 6 MV clinical photon beam was investigated. Geant4 radiation transport simulations were used to model the PDD in order to estimate the water equivalent measurement depth (WED) of the skin diode. Measured output factors using the skin diode were compared with the MOSkin detector and EBT3 film at 10 cm depth and at surface at isocenter of a water equivalent phantom. The intrinsic angular response of the skin diode was also quantified in charge particle equilibrium conditions (CPE) and at the surface of a solid water phantom. Finally, the radiation hardness of the skin diode up to an accumulated dose of 80 kGy using photons from a Co-60 gamma source was evaluated. The PDD curve measured with the skin diode was within 0.5% agreement of the equivalent Geant4 simulated curve. When placed at the phantom surface, the WED of the skin diode was estimated to be 0.075 ± 0.005 mm from Geant4 simulations and was confirmed using the response of a corrected Attix ionization chamber placed at water equivalent depth of 0.075 mm, with the measurement agreement to within 0.3%. The output factor measurements at 10 cm depth were within 2% of those measured with film and the MOSkin detector down to a field size of 2 × 2 cm 2 . The dose-response for all detector samples was linear and with a repeatability within 0.2%. The skin diode intrinsic angular response showed a maximum deviation of 8% at 90 degrees and from 0 to 60 degree is less than 5%. The radiation sensitivity reduced by 25% after an accumulated dose of 20 kGy but after was found to stabilize. At 60 kGy total accumulated dose the response was within 2% of that measured at 20 kGy total accumulated dose. This work characterizes an innovative detector for in vivo and real-time skin dose measurements that is based on an epitaxial silicon diode combined with the Centre for Medical Radiation Physics (CMRP) "drop-in" packaging technology. The skin diode proved to have a water equivalent depth of measurement of 0.075 ± 0.005 mm and the ability to measure doses accurately relative to reference detectors. © 2017 American Association of Physicists in Medicine.
TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking
NASA Astrophysics Data System (ADS)
Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.
2014-06-01
The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with complex neutron source and geometry. In the past, numerous benchmarks were conducted for TRIPOLI-4® assessment on fusion applications. Experiments (KANT, OKTAVIAN, FNG) analysis and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In this previous ITER benchmark, nevertheless, only the neutron wall loading was analyzed, its main purpose was to present MCAM (the FDS Team CAD import tool) extension for TRIPOLI-4®. Starting from this work a more extended benchmark has been performed about the estimation of neutron flux, nuclear heating in the shielding blankets and tritium production rate in the European TBMs (HCLL and HCPB) and it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial-port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show a good agreement between the two codes. Discrepancies are mainly included in the Monte Carlo codes statistical error.
Energy benchmarking of commercial buildings: a low-cost pathway toward urban sustainability
NASA Astrophysics Data System (ADS)
Cox, Matt; Brown, Marilyn A.; Sun, Xiaojing
2013-09-01
US cities are beginning to experiment with a regulatory approach to address information failures in the real estate market by mandating the energy benchmarking of commercial buildings. Understanding how a commercial building uses energy has many benefits; for example, it helps building owners and tenants identify poor-performing buildings and subsystems and it enables high-performing buildings to achieve greater occupancy rates, rents, and property values. This paper estimates the possible impacts of a national energy benchmarking mandate through analysis chiefly utilizing the Georgia Tech version of the National Energy Modeling System (GT-NEMS). Correcting input discount rates results in a 4.0% reduction in projected energy consumption for seven major classes of equipment relative to the reference case forecast in 2020, rising to 8.7% in 2035. Thus, the official US energy forecasts appear to overestimate future energy consumption by underestimating investments in energy-efficient equipment. Further discount rate reductions spurred by benchmarking policies yield another 1.3-1.4% in energy savings in 2020, increasing to 2.2-2.4% in 2035. Benchmarking would increase the purchase of energy-efficient equipment, reducing energy bills, CO2 emissions, and conventional air pollution. Achieving comparable CO2 savings would require more than tripling existing US solar capacity. Our analysis suggests that nearly 90% of the energy saved by a national benchmarking policy would benefit metropolitan areas, and the policy’s benefits would outweigh its costs, both to the private sector and society broadly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahbaee, Pooyan, E-mail: psahbae@ncsu.edu; Segars, W. Paul; Samei, Ehsan
2014-07-15
Purpose: This study aimed to provide a comprehensive patient-specific organ dose estimation across a multiplicity of computed tomography (CT) examination protocols. Methods: A validated Monte Carlo program was employed to model a common CT system (LightSpeed VCT, GE Healthcare). The organ and effective doses were estimated from 13 commonly used body and neurological CT examination. The dose estimation was performed on 58 adult computational extended cardiac-torso phantoms (35 male, 23 female, mean age 51.5 years, mean weight 80.2 kg). The organ dose normalized by CTDI{sub vol} (h factor) and effective dose normalized by the dose length product (DLP) (k factor)more » were calculated from the results. A mathematical model was derived for the correlation between the h and k factors with the patient size across the protocols. Based on this mathematical model, a dose estimation iPhone operating system application was designed and developed to be used as a tool to estimate dose to the patients for a variety of routinely used CT examinations. Results: The organ dose results across all the protocols showed an exponential decrease with patient body size. The correlation was generally strong for the organs which were fully or partially located inside the scan coverage (Pearson sample correlation coefficient (r) of 0.49). The correlation was weaker for organs outside the scan coverage for which distance between the organ and the irradiation area was a stronger predictor of dose to the organ. For body protocols, the effective dose before and after normalization by DLP decreased exponentially with increasing patient's body diameter (r > 0.85). The exponential relationship between effective dose and patient's body diameter was significantly weaker for neurological protocols (r < 0.41), where the trunk length was a slightly stronger predictor of effective dose (0.15 < r < 0.46). Conclusions: While the most accurate estimation of a patient dose requires specific modeling of the patient anatomy, a first order approximation of organ and effective doses from routine CT scan protocols can be reasonably estimated using size specific factors. Estimation accuracy is generally poor for organ outside the scan range and for neurological protocols. The dose calculator designed in this study can be used to conveniently estimate and report the dose values for a patient across a multiplicity of CT scan protocols.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, C.; James, T. L.; Margolis, R.
The price of photovoltaic (PV) systems in the United States (i.e., the cost to the system owner) has dropped precipitously in recent years, led by substantial reductions in global PV module prices. This report provides a Q4 2013 update for residential PV systems, based on an objective methodology that closely approximates the book value of a PV system. Several cases are benchmarked to represent common variation in business models, labor rates, and module choice. We estimate a weighted-average cash purchase price of $3.29/W for modeled standard-efficiency, polycrystalline-silicon residential PV systems installed in the United States. This is a 46% declinemore » from the 2013-dollar-adjusted price reported in the Q4 2010 benchmark report. In addition, this report frames the cash purchase price in the context of key price metrics relevant to the continually evolving landscape of third-party-owned PV systems by benchmarking the minimum sustainable lease price and the fair market value of residential PV systems.« less
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
NASA Astrophysics Data System (ADS)
Hutchinson, C. F.; van Leeuwen, W.; Doorn, B.; Drake, S.; Haithcoat, T.; Kaupp, V.; Likholetov, V.; Sheffner, E.; Tralli, D.
2008-12-01
The Office of Global Analysis/ International Production Assessment Branch (IGA/IPA; formerly the Production Estimates and Crop Assessment Division (PECAD)), of the United States Department of Agriculture - Foreign Agricultural Service (USDA-FAS) has been assimilates data and information products from the National Aeronautics and Space Administration (NASA) into its operational decision support system (DSS). The intent is to improve monthly estimates of global production of selected agricultural commodities that are provided to the World Agricultural Outlook Board (WAOB). This research builds on the intermittent collaboration between USDA and NASA in remote sensing of agriculture since 1974. The goal of the research was to develop an approach to measure changes in system performance after the assimilation of NASA products. An important first step was to develop a baseline characterization of the DSS, the working environment and its constraints including the identification of issues and potential solutions. Both qualitative and quantitative information were gathered to benchmark IGA/IPA's DSS using data from questionnaires and interviews. An interactive risk management tool developed for NASA mission architecture design (DDP - Defect Detection and Prevention) was used to evaluate the effectiveness of various Mitigation options against potential Risks, with quantified attainment of Objectives being the most important benchmarking indicator to examine the effectiveness of the assimilation of NASA products into IGA/IPA's DSS. The collaborative benchmarking activities provided not only feedback about the benefits of DSS enhancement to USDA/FAS and NASA, but facilitated communication among DSS users, developers, and USDA management that helped to suggest future avenues for system development as well as improved intra- and interagency collaboration. From this research emerged a model for benchmarking DSSs that (1) promotes continuity and synergy within and between agencies, (2) accommodates scientific, operational and architectural dynamics, and (3) facilitates transfer of knowledge among researchers, management, and decision makers, as well as among decision making agencies.
Convolution-based estimation of organ dose in tube current modulated CT
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan
2016-05-01
Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients ({{h}\\text{Organ}} ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate {{≤ft(\\text{CTD}{{\\text{I}}\\text{vol}}\\right)}\\text{organ, \\text{convolution}}} values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying {{≤ft(\\text{CTD}{{\\text{I}}\\text{vol}}\\right)}\\text{organ, \\text{convolution}}} with the organ dose coefficients ({{h}\\text{Organ}} ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and dose simulated using TCM Monte Carlo program was quantified. We further compared the convolution-based organ dose estimation method with two other strategies with different approaches of quantifying the irradiation field. The proposed convolution-based estimation method showed good accuracy with the organ dose simulated using the TCM Monte Carlo simulation. The average percentage error (normalized by CTDIvol) was generally within 10% across all organs and modulation profiles, except for organs located in the pelvic and shoulder regions. This study developed an improved method that accurately quantifies the irradiation field under TCM scans. The results suggested that organ dose could be estimated in real-time both prospectively (with the localizer information only) and retrospectively (with acquired CT data).
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-06
... NOAA Fisheries Southeast Regional Office, Highly Migratory Species Management Division, and Southeast... describes the fisheries, evaluates the status of the stock, estimates biological benchmarks, projects future...
SU-E-T-86: A Systematic Method for GammaKnife SRS Fetal Dose Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geneser, S; Paulsson, A; Sneed, P
Purpose: Estimating fetal dose is critical to the decision-making process when radiation treatment is indicated during pregnancy. Fetal doses less than 5cGy confer no measurable non-cancer developmental risks but can produce a threefold increase in developing childhood cancer. In this study, we estimate fetal dose for a patient receiving Gamma Knife stereotactic radiosurgery (GKSRS) treatment and develop a method to estimate dose directly from plan details. Methods: A patient underwent GKSRS on a Perfexion unit for eight brain metastases (two infratentorial and one brainstem). Dose measurements were performed using a CC13, head phantom, and solid water. Superficial doses to themore » thyroid, sternum, and pelvis were measured using MOSFETs during treatment. Because the fetal dose was too low to accurately measure, we obtained measurements proximally to the isocenter, fitted to an exponential function, and extrapolated dose to the fundus of the uterus, uterine midpoint, and pubic synthesis for both the preliminary and delivered plans. Results: The R-squared fit for the delivered doses was 0.995. The estimated fetal doses for the 72 minute preliminary and 138 minute delivered plans range from 0.0014 to 0.028cGy and 0.07 to 0.38cGy, respectively. MOSFET readings during treatment were just above background for the thyroid and negligible for all inferior positions. The method for estimating fetal dose from plan shot information was within 0.2cGy of the measured values at 14cm cranial to the fetal location. Conclusion: Estimated fetal doses for both the preliminary and delivered plan were well below the 5cGy recommended limit. Due to Pefexion shielding, internal dose is primarily governed by attenuation and drops off exponentially. This is the first work that reports fetal dose for a GK Perfexion unit. Although multiple lesions were treated and the duration of treatment was long, the estimated fetal dose remained very low.« less
Initial characterization, dosimetric benchmark and performance validation of Dynamic Wave Arc.
Burghelea, Manuela; Verellen, Dirk; Poels, Kenneth; Hung, Cecilia; Nakamura, Mitsuhiro; Dhont, Jennifer; Gevaert, Thierry; Van den Begin, Robbe; Collen, Christine; Matsuo, Yukinori; Kishi, Takahiro; Simon, Viorica; Hiraoka, Masahiro; de Ridder, Mark
2016-04-29
Dynamic Wave Arc (DWA) is a clinical approach designed to maximize the versatility of Vero SBRT system by synchronizing the gantry-ring noncoplanar movement with D-MLC optimization. The purpose of this study was to verify the delivery accuracy of DWA approach and to evaluate the potential dosimetric benefits. DWA is an extended form of VMAT with a continuous varying ring position. The main difference in the optimization modules of VMAT and DWA is during the angular spacing, where the DWA algorithm does not consider the gantry spacing, but only the Euclidian norm of the ring and gantry angle. A preclinical version of RayStation v4.6 (RaySearch Laboratories, Sweden) was used to create patient specific wave arc trajectories for 31 patients with various anatomical tumor regions (prostate, oligometatstatic cases, centrally-located non-small cell lung cancer (NSCLC) and locally advanced pancreatic cancer-LAPC). DWA was benchmarked against the current clinical approaches and coplanar VMAT. Each plan was evaluated with regards to dose distribution, modulation complexity (MCS), monitor units and treatment time efficiency. The delivery accuracy was evaluated using a 2D diode array that takes in consideration the multi-dimensionality of DWA during dose reconstruction. In centrally-located NSCLC cases, DWA improved the low dose spillage with 20 %, while the target coverage was increased with 17 % compared to 3D CRT. The structures that significantly benefited from using DWA were proximal bronchus and esophagus, with the maximal dose being reduced by 17 % and 24 %, respectively. For prostate and LAPC, neither technique seemed clearly superior to the other; however, DWA reduced with more than 65 % of the delivery time over IMRT. A steeper dose gradient outside the target was observed for all treatment sites (p < 0.01) with DWA. Except the oligometastatic cases, where the DWA-MCSs indicate a higher modulation, both DWA and VMAT modalities provide plans of similar complexity. The average ɣ (3 % /3 mm) passing rate for DWA plans was 99.2 ± 1 % (range from 96.8 to 100 %). DWA proven to be a fully functional treatment technique, allowing additional flexibility in dose shaping, while preserving dosimetrically robust delivery and treatment times comparable with coplanar VMAT.
Wollum, Alexandra; Burstein, Roy; Fullman, Nancy; Dwyer-Lindgren, Laura; Gakidou, Emmanuela
2015-09-02
Nigeria has made notable gains in improving childhood survival but the country still accounts for a large portion of the world's overall disease burden, particularly among women and children. To date, no systematic analyses have comprehensively assessed trends for health outcomes and interventions across states in Nigeria. We extracted data from 19 surveys to generate estimates for 20 key maternal and child health (MCH) interventions and outcomes for 36 states and the Federal Capital Territory from 2000 to 2013. Source-specific estimates were generated for each indicator, after which a two-step statistical model was applied using a mixed-effects model followed by Gaussian process regression to produce state-level trends. National estimates were calculated by population-weighting state values. Under-5 mortality decreased in all states from 2000 to 2013, but a large gap remained across them. Malaria intervention coverage stayed low despite increases between 2009 and 2013, largely driven by rising rates of insecticide-treated net ownership. Overall, vaccination coverage improved, with notable increases in the coverage of three-dose oral polio vaccine. Nevertheless, immunization coverage remained low for most vaccines, including measles. Coverage of other MCH interventions, such as antenatal care and skilled birth attendance, generally stagnated and even declined in many states, and the range between the lowest- and highest-performing states remained wide in 2013. Countrywide, a measure of overall intervention coverage increased from 33% in 2000 to 47% in 2013 with considerable variation across states, ranging from 21% in Sokoto to 66% in Ekiti. We found that Nigeria made notable gains for a subset of MCH indicators between 2000 and 2013, but also experienced stalled progress and even declines for others. Despite progress for a subset of indicators, Nigeria's absolute levels of intervention coverage remained quite low. As Nigeria rolls out its National Health Bill and seeks to strengthen its delivery of health services, continued monitoring of local health trends will help policymakers track successes and promptly address challenges as they arise. Subnational benchmarking ought to occur regularly in Nigeria and throughout sub-Saharan Africa to inform local decision-making and bolster health system performance.
Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning
2008-01-01
active learning framework for SVM-based and boosting-based rank learning. Our approach suggests sampling based on maximizing the estimated loss differential over unlabeled data. Experimental results on two benchmark corpora show that the proposed model substantially reduces the labeling effort, and achieves superior performance rapidly with as much as 30% relative improvement over the margin-based sampling
Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions
Burke, Timothy P.; Kiedrowski, Brian C.
2017-12-11
Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface—which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivitiesmore » of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history–based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.« less
Statistics based sampling for controller and estimator design
NASA Astrophysics Data System (ADS)
Tenne, Dirk
The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold and addresses the aforementioned topics nonlinear estimation, target tracking and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher order accuracy. The so-called unscented transformation has been extended to capture higher order moments. Furthermore, higher order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing the Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight line maneuvers. The third part of this dissertation addresses the design of controller which include knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points which are calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance of the plant over the domain of uncertainty consisting of a combination of the mean and variance. The proposed technique is illustrated on three benchmark problems. The first relates to the design of prefilters for a linear and nonlinear spring-mass-dashpot system and the second applies a feedback controller to a hovering helicopter. Lastly, the statistical robust controller design is devoted to a concurrent feed-forward/feedback controller structure for a high-speed low tension tape drive.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackillop, William J., E-mail: william.mackillop@krcc.on.ca; Department of Public Health Sciences, Queen's University, Kingston, Ontario; Department of Oncology, Queen's University, Kingston, Ontario
Purpose: Palliative radiation therapy (PRT) benefits many patients with incurable cancer, but the overall need for PRT is unknown. Our primary objective was to estimate the appropriate rate of use of PRT in Ontario. Methods and Materials: The Ontario Cancer Registry identified patients who died of cancer in Ontario between 2006 and 2010. Comprehensive RT records were linked to the registry. Multivariate analysis identified social and health system-related factors affecting the use of PRT, enabling us to define a benchmark population of patients with unimpeded access to PRT. The proportion of cases treated at any time (PRT{sub lifetime}), the proportionmore » of cases treated in the last 2 years of life (PRT{sub 2y}), and number of courses of PRT per thousand cancer deaths were measured in the benchmark population. These benchmarks were standardized to the characteristics of the overall population, and province-wide PRT rates were then compared to benchmarks. Results: Cases diagnosed at hospitals with no RT on-site and residents of poorer communities and those who lived farther from an RT center, were significantly less likely than others to receive PRT. However, availability of RT at the diagnosing hospital was the dominant factor. Neither socioeconomic status nor distance from home to nearest RT center had a significant effect on the use of PRT in patients diagnosed at a hospital with RT facilities. The benchmark population therefore consisted of patients diagnosed at a hospital with RT facilities. The standardized benchmark for PRT{sub lifetime} was 33.9%, and the corresponding province-wide rate was 28.5%. The standardized benchmark for PRT{sub 2y} was 32.4%, and the corresponding province-wide rate was 27.0%. The standardized benchmark for the number of courses of PRT per thousand cancer deaths was 652, and the corresponding province-wide rate was 542. Conclusions: Approximately one-third of patients who die of cancer in Ontario need PRT, but many of them are never treated.« less
Benchmarking on Tsunami Currents with ComMIT
NASA Astrophysics Data System (ADS)
Sharghi vand, N.; Kanoglu, U.
2015-12-01
There were no standards for the validation and verification of tsunami numerical models before 2004 Indian Ocean tsunami. Even, number of numerical models has been used for inundation mapping effort, evaluation of critical structures, etc. without validation and verification. After 2004, NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which will be used evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed to estimate maximum runup and accepted widely by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrated toward validation and verification of tsunami numerical models on tsunami currents. Three of the benchmark problems were: current measurement of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), which is a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) model and is developed by NCTR. The modeling results are compared with the required benchmark data, providing good agreements and results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)
Bahadori, Amir A; Sato, Tatsuhiko; Slaba, Tony C; Shavers, Mark R; Semones, Edward J; Van Baalen, Mary; Bolch, Wesley E
2013-10-21
NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.
NASA Astrophysics Data System (ADS)
Bahadori, Amir A.; Sato, Tatsuhiko; Slaba, Tony C.; Shavers, Mark R.; Semones, Edward J.; Van Baalen, Mary; Bolch, Wesley E.
2013-10-01
NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.; Cohen, M.O.
1975-02-01
The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on-and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basementmore » detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions by additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained. (auth)« less
Phan, Kongkea; Sthiannopkao, Suthipong; Heng, Savoeun; Phan, Samrach; Huoy, Laingshun; Wong, Ming Hung; Kim, Kyoung-Woong
2013-11-15
In the present study, we investigated the potential arsenic exposure of Cambodian residents from their daily food consumption. Environmental and ecological samples such as paddy soils, paddy rice (unhusked), staple rice (uncooked and cooked), fish and vegetables were collected from Kandal, Kratie and Kampong Cham provinces in the Mekong River basin of Cambodia. After acid-digestion, digestates were chemically analyzed by inductively coupled plasma mass spectrometry. Results revealed that the means of total arsenic concentration ([As]tot) in paddy soils and paddy rice from Kandal were significantly higher than those from Kampong Cham province (t-test, p<0.05). Moreover, a significant positive correlation between the [As]tot in paddy soils and paddy rice was found (r(14) = 0.826, p<0.01). Calculations of arsenic intake from food consumption indicated that the upper end of the range of the daily dose of inorganic arsenic for Kandal residents (0.089-8.386 μg d(-1) kg(-1) body wt.) was greater than the lower limits on the benchmark dose for a 0.5% increased incidence of lung cancer (BMDL0.5 is equal to 3.0 μg d(-1) kg(-1) body wt.). The present study suggests that the residents in Kandal are at risk of arsenic intake from their daily food consumption. However, the residents in Kratie and Kampong Cham provinces are less likely to be exposed to arsenic through their daily dietary intake. To the best of our knowledge, this is the first report estimating the daily intake and daily dose of inorganic arsenic from food consumption in the Mekong River basin of Cambodia. Copyright © 2012 Elsevier B.V. All rights reserved.
Januzis, Natalie; Belley, Matthew D; Nguyen, Giao; Toncheva, Greta; Lowry, Carolyn; Miller, Michael J; Smith, Tony P; Yoshizumi, Terry T
2014-05-01
The purpose of this study was three-fold: (1) to measure the transmission properties of various lead shielding materials, (2) to benchmark the accuracy of commercial film badge readings, and (3) to compare the accuracy of effective dose (ED) conversion factors (CF) of the U.S. Nuclear Regulatory Commission methods to the MOSFET method. The transmission properties of lead aprons and the accuracy of film badges were studied using an ion chamber and monitor. ED was determined using an adult male anthropomorphic phantom that was loaded with 20 diagnostic MOSFET detectors and scanned with a whole body CT protocol at 80, 100, and 120 kVp. One commercial film badge was placed at the collar and one at the waist. Individual organ doses and waist badge readings were corrected for lead apron attenuation. ED was computed using ICRP 103 tissue weighting factors, and ED CFs were calculated by taking the ratio of ED and badge reading. The measured single badge CFs were 0.01 (±14.9%), 0.02 (±9.49%), and 0.04 (±15.7%) for 80, 100, and 120 kVp, respectively. Current regulatory ED CF for the single badge method is 0.3; for the double-badge system, they are 0.04 (collar) and 1.5 (under lead apron at the waist). The double-badge system provides a better coefficient for the collar at 0.04; however, exposure readings under the apron are usually negligible to zero. Based on these findings, the authors recommend the use of ED CF of 0.01 for the single badge system from 80 kVp (effective energy 50.4 keV) data.
Percentiles of the product of uncertainty factors for establishing probabilistic reference doses.
Gaylor, D W; Kodell, R L
2000-04-01
Exposure guidelines for potentially toxic substances are often based on a reference dose (RfD) that is determined by dividing a no-observed-adverse-effect-level (NOAEL), lowest-observed-adverse-effect-level (LOAEL), or benchmark dose (BD) corresponding to a low level of risk, by a product of uncertainty factors. The uncertainty factors for animal to human extrapolation, variable sensitivities among humans, extrapolation from measured subchronic effects to unknown results for chronic exposures, and extrapolation from a LOAEL to a NOAEL can be thought of as random variables that vary from chemical to chemical. Selected databases are examined that provide distributions across chemicals of inter- and intraspecies effects, ratios of LOAELs to NOAELs, and differences in acute and chronic effects, to illustrate the determination of percentiles for uncertainty factors. The distributions of uncertainty factors tend to be approximately lognormally distributed. The logarithm of the product of independent uncertainty factors is approximately distributed as the sum of normally distributed variables, making it possible to estimate percentiles for the product. Hence, the size of the products of uncertainty factors can be selected to provide adequate safety for a large percentage (e.g., approximately 95%) of RfDs. For the databases used to describe the distributions of uncertainty factors, using values of 10 appear to be reasonable and conservative. For the databases examined the following simple "Rule of 3s" is suggested that exceeds the estimated 95th percentile of the product of uncertainty factors: If only a single uncertainty factor is required use 33, for any two uncertainty factors use 3 x 33 approximately 100, for any three uncertainty factors use a combined factor of 3 x 100 = 300, and if all four uncertainty factors are needed use a total factor of 3 x 300 = 900. If near the 99th percentile is desired use another factor of 3. An additional factor may be needed for inadequate data or a modifying factor for other uncertainties (e.g., different routes of exposure) not covered above.
Hung, Linda; Bruneval, Fabien; Baishya, Kopinjol; ...
2017-04-07
Energies from the GW approximation and the Bethe–Salpeter equation (BSE) are benchmarked against the excitation energies of transition-metal (Cu, Zn, Ag, and Cd) single atoms and monoxide anions. We demonstrate that best estimates of GW quasiparticle energies at the complete basis set limit should be obtained via extrapolation or closure relations, while numerically converged GW-BSE eigenvalues can be obtained on a finite basis set. Calculations using real-space wave functions and pseudopotentials are shown to give best-estimate GW energies that agree (up to the extrapolation error) with calculations using all-electron Gaussian basis sets. We benchmark the effects of a vertex approximationmore » (ΓLDA) and the mean-field starting point in GW and the BSE, performing computations using a real-space, transition-space basis and scalar-relativistic pseudopotentials. Here, while no variant of GW improves on perturbative G0W0 at predicting ionization energies, G0W0Γ LDA-BSE computations give excellent agreement with experimental absorption spectra as long as off-diagonal self-energy terms are included. We also present G0W0 quasiparticle energies for the CuO –, ZnO –, AgO –, and CdO – anions, in comparison to available anion photoelectron spectra.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, David L
2011-01-01
This study evaluates the potential impacts of a national feebate system, a market-based policy that consists of graduated fees on low-fuel-economy (or high-emitting) vehicles and rebates for high-fuel-economy (or lowemitting) vehicles. In their simplest form, feebate systems operate under three conditions: a benchmark divides all vehicles into two categories-those charged fees and those eligible for rebates; the sizes of the fees and rebates are a function of a vehicle's deviation from its benchmark; and placement of the benchmark ensures revenue neutrality or a desired level of subsidy or revenue. A model developed by the University of California for the Californiamore » Air Resources Board was revised and used to estimate the effects of six feebate structures on fuel economy and sales of new light-duty vehicles, given existing and anticipated future fuel economy and emission standards. These estimates for new vehicles were then entered into a vehicle stock model that simulated the evolution of the entire vehicle stock. The results indicate that feebates could produce large, additional reductions in emissions and fuel consumption, in large part by encouraging market acceptance of technologies with advanced fuel economy, such as hybrid electric vehicles.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hung, Linda; Bruneval, Fabien; Baishya, Kopinjol
Energies from the GW approximation and the Bethe–Salpeter equation (BSE) are benchmarked against the excitation energies of transition-metal (Cu, Zn, Ag, and Cd) single atoms and monoxide anions. We demonstrate that best estimates of GW quasiparticle energies at the complete basis set limit should be obtained via extrapolation or closure relations, while numerically converged GW-BSE eigenvalues can be obtained on a finite basis set. Calculations using real-space wave functions and pseudopotentials are shown to give best-estimate GW energies that agree (up to the extrapolation error) with calculations using all-electron Gaussian basis sets. We benchmark the effects of a vertex approximationmore » (ΓLDA) and the mean-field starting point in GW and the BSE, performing computations using a real-space, transition-space basis and scalar-relativistic pseudopotentials. Here, while no variant of GW improves on perturbative G0W0 at predicting ionization energies, G0W0Γ LDA-BSE computations give excellent agreement with experimental absorption spectra as long as off-diagonal self-energy terms are included. We also present G0W0 quasiparticle energies for the CuO –, ZnO –, AgO –, and CdO – anions, in comparison to available anion photoelectron spectra.« less
Simon, Steven L; Hoffman, F Owen; Hofer, Eduard
2015-01-01
Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace and there is now an increased understanding about the needs and what is required for models used to estimate cohort doses (in the absence of direct measurement) to evaluate dose response. It now appears that for dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude and interrelationships of the uncertainties of model assumptions, parameters and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors, e.g., uncertainty that is specific to each subject (i.e., unshared error), and uncertainty of doses from a lack of understanding and knowledge about parameter values that are shared to varying degrees by numbers of subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation methods has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, these types of simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise not available. The dose estimation strategy presented here is a simulation method that corrects the previous deficiencies of analytical or simple Monte Carlo error propagation methods and is termed, due to its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than a single set that emerges when each individual's dose is estimated independently from other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships such that the estimated doses for members of a cohort subgroup that share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, as well as its strength, are defined by the proper separation between uncertainties shared by members of the entire cohort or members of defined cohort subsets, and uncertainties that are individual-specific and therefore unshared.
EIA Household Energy Use Data Now Includes Detail on 16 States
2011-01-01
The Energy Information Administration (EIA) is releasing new benchmark estimates for home energy use for the year 2009 that include detailed data for 16 states, 12 more than in past EIA residential energy surveys.
Research on radiation exposure from CT part of hybrid camera and diagnostic CT
NASA Astrophysics Data System (ADS)
Solný, Pavel; Zimák, Jaroslav
2014-11-01
Research on radiation exposure from CT part of hybrid camera in seven different Departments of Nuclear Medicine (DNM) was conducted. Processed data and effective dose (E) estimations led to the idea of phantom verification and comparison of absorbed doses and software estimation. Anonymous data from about 100 examinations from each DNM was gathered. Acquired data was processed and utilized by dose estimation programs (ExPACT, ImPACT, ImpactDose) with respect to the type of examination and examination procedures. Individual effective doses were calculated using enlisted programs. Preserving the same procedure in dose estimation process allows us to compare the resulting E. Some differences and disproportions during dose estimation led to the idea of estimated E verification. Consequently, two different sets of about 100 of TLD 100H detectors were calibrated for measurement inside the Aldersnon RANDO Anthropomorphic Phantom. Standard examination protocols were examined using a 2 Slice CT- part of hybrid SPECT/CT. Moreover, phantom exposure from body examining protocol for 32 Slice and 64 Slice diagnostic CT scanner was also verified. Absorbed dose (DT,R) measured using TLD detectors was compared with software estimation of equivalent dose HT values, computed by E estimation software. Though, only limited number of cavities for detectors enabled measurement within the regions of lung, liver, thyroid and spleen-pancreas region, some basic comparison is possible.
Development and verification of NRC`s single-rod fuel performance codes FRAPCON-3 AND FRAPTRAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyer, C.E.; Cunningham, M.E.; Lanning, D.D.
1998-03-01
The FRAPCON and FRAP-T code series, developed in the 1970s and early 1980s, are used by the US Nuclear Regulatory Commission (NRC) to predict fuel performance during steady-state and transient power conditions, respectively. Both code series are now being updated by Pacific Northwest National Laboratory to improve their predictive capabilities at high burnup levels. The newest versions of the codes are called FRAPCON-3 and FRAPTRAN. The updates to fuel property and behavior models are focusing on providing best estimate predictions under steady-state and fast transient power conditions up to extended fuel burnups (> 55 GWd/MTU). Both codes will be assessedmore » against a data base independent of the data base used for code benchmarking and an estimate of code predictive uncertainties will be made based on comparisons to the benchmark and independent data bases.« less
Lim, Keah-Ying; Jiang, Sunny C
2013-12-15
Health risk concerns associated with household use of rooftop-harvested rainwater (HRW) constitute one of the main impediments to exploit the benefits of rainwater harvesting in the United States. However, the benchmark based on the U.S. EPA acceptable annual infection risk level of ≤1 case per 10,000 persons per year (≤10(-4) pppy) developed to aid drinking water regulations may be unnecessarily stringent for sustainable water practice. In this study, we challenge the current risk benchmark by quantifying the potential microbial risk associated with consumption of HRW-irrigated home produce and comparing it against the current risk benchmark. Microbial pathogen data for HRW and exposure rates reported in literature are applied to assess the potential microbial risk posed to household consumers of their homegrown produce. A Quantitative Microbial Risk Assessment (QMRA) model based on worst-case scenario (e.g. overhead irrigation, no pathogen inactivation) is applied to three crops that are most popular among home gardeners (lettuce, cucumbers, and tomatoes) and commonly consumed raw. The infection risks of household consumers attributed to consumption of these home produce vary with the type of produce. The lettuce presents the highest risk, which is followed by tomato and cucumber, respectively. Results show that the 95th percentile values of infection risk per intake event of home produce are one to three orders of magnitude (10(-7) to 10(-5)) lower than U.S. EPA risk benchmark (≤10(-4) pppy). However, annual infection risks under the same scenario (multiple intake events in a year) are very likely to exceed the risk benchmark by one order of magnitude in some cases. Estimated 95th percentile values of the annual risk are in the 10(-4) to 10(-3) pppy range, which are still lower than the 10(-3) to 10(-1) pppy risk range of reclaimed water irrigated produce estimated in comparable studies. We further discuss the desirability of HRW for irrigating home produce based on the relative risk of HRW to reclaimed wastewater for irrigation of food crops. The appropriateness of the ≤10(-4) pppy risk benchmark for assessing safety level of HRW-irrigated fresh produce is questioned by considering the assumptions made for the QMRA model. Consequently, the need of an updated approach to assess appropriateness of sustainable water practice for making guidelines and policies is proposed. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Strigari, L.; Torriani, F.; Manganaro, L.; Inaniwa, T.; Dalmasso, F.; Cirio, R.; Attili, A.
2018-03-01
Few attempts have been made to include the oxygen enhancement ratio (OER) in treatment planning for ion beam therapy, and systematic studies to evaluate the impact of hypoxia in treatment with the beam of different ion species are sorely needed. The radiobiological models used to quantify the OER in such studies are mainly based on the dose-averaged LET estimates, and do not explicitly distinguish between the ion species and fractionation schemes. In this study, a new type of OER modelling, based on the microdosimetric kinetic model, taking into account the specificity of the different ions, LET spectra, tissues and fractionation schemes, has been developed. The model has been benchmarked with published in vitro data, HSG, V79 and CHO cells in aerobic and hypoxic conditions, for different ion irradiation. The model has been included in the simulation of treatments for a clinical case (brain tumour) using proton, lithium, helium, carbon and oxygen ion beams. A study of the tumour control probability (TCP) as a function of oxygen partial pressure, dose per fraction and primary ion type has been performed. The modelled OER depends on both the LET and ion type, also showing a decrease for an increased dose per fraction with a slope that depends on the LET and ion type, in good agreement with the experimental data. In the investigated clinical case, a significant increase in TCP has been found upon increasing the ion charge. Higher OER variations as a function of dose per fraction have also been found for low-LET ions (up to 15% varying from 2 to 8 Gy(RBE) for protons). This model could be exploited in the identification of treatment condition optimality in the presence of hypoxia, including fractionation and primary particle selection.
Uncertainties in estimating heart doses from 2D-tangential breast cancer radiotherapy.
Lorenzen, Ebbe L; Brink, Carsten; Taylor, Carolyn W; Darby, Sarah C; Ewertz, Marianne
2016-04-01
We evaluated the accuracy of three methods of estimating radiation dose to the heart from two-dimensional tangential radiotherapy for breast cancer, as used in Denmark during 1982-2002. Three tangential radiotherapy regimens were reconstructed using CT-based planning scans for 40 patients with left-sided and 10 with right-sided breast cancer. Setup errors and organ motion were simulated using estimated uncertainties. For left-sided patients, mean heart dose was related to maximum heart distance in the medial field. For left-sided breast cancer, mean heart dose estimated from individual CT-scans varied from <1Gy to >8Gy, and maximum dose from 5 to 50Gy for all three regimens, so that estimates based only on regimen had substantial uncertainty. When maximum heart distance was taken into account, the uncertainty was reduced and was comparable to the uncertainty of estimates based on individual CT-scans. For right-sided breast cancer patients, mean heart dose based on individual CT-scans was always <1Gy and maximum dose always <5Gy for all three regimens. The use of stored individual simulator films provides a method for estimating heart doses in left-tangential radiotherapy for breast cancer that is almost as accurate as estimates based on individual CT-scans. Copyright © 2016. Published by Elsevier Ireland Ltd.
Analytical dose evaluation of neutron and secondary gamma-ray skyshine from nuclear facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, K.; Nakamura, T.
1985-11-01
The skyshine dose distributions of neutron and secondary gamma rays were calculated systematically using the Monte Carlo method for distances up to 2 km from the source. The energy of source neutrons ranged from thermal to 400 MeV; their emission angle from 0 to 90 deg from the ver tical was treated with a distribution of the direction cosine containing five equal intervals. Calculated dose distributions D(r) were fitted to the formula; D(r) = Q exp (-r/lambda)/r. The value of Q and lambda are slowly varied functions of energy. This formula was applied to the benchmark problems of neutron skyshinemore » from fission, fusion, and accelerator facilities, and good agreement was achieved. This formula will be quite useful for shielding designs of various nuclear facilities.« less
A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics
NASA Astrophysics Data System (ADS)
Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger
2017-09-01
Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. ITER organization hence recommends the use of MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the `C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', but for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and considering the important neutron flux attenuation ranging from 1014 down to 108 n•cm-2•s-1. Such code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.
Closed-Form 3-D Localization for Single Source in Uniform Circular Array with a Center Sensor
NASA Astrophysics Data System (ADS)
Bae, Eun-Hyon; Lee, Kyun-Kyung
A novel closed-form algorithm is presented for estimating the 3-D location (azimuth angle, elevation angle, and range) of a single source in a uniform circular array (UCA) with a center sensor. Based on the centrosymmetry of the UCA and noncircularity of the source, the proposed algorithm decouples and estimates the 2-D direction of arrival (DOA), i.e. azimuth and elevation angles, and then estimates the range of the source. Notwithstanding a low computational complexity, the proposed algorithm provides an estimation performance close to that of the benchmark estimator 3-D MUSIC.
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2016-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused 1 by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2018-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a “large-sample” approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances. PMID:29697706
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions.
Nearing, Grey S; Mocko, David M; Peters-Lidard, Christa D; Kumar, Sujay V; Xia, Youlong
2016-03-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a "large-sample" approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
Nakamura, Nori; Hirai, Yuko; Kodama, Yoshiaki; Hamasaki, Kanya; Cullings, Harry M; Cordova, Kismet A; Awa, Akio
2017-10-01
Retrospective estimation of the doses received by atomic bomb (A-bomb) survivors by cytogenetic methods has been hindered by two factors: One is that the photon energies released from the bomb were widely distributed, and since the aberration yield varies depending on the energy, the use of monoenergetic 60 Co gamma radiation to construct a calibration curve may bias the estimate. The second problem is the increasing proportion of newly formed lymphocytes entering into the lymphocyte pool with increasing time intervals since the exposures. These new cells are derived from irradiated precursor/stem cells whose radiosensitivity may differ from that of blood lymphocytes. To overcome these problems, radiation doses to tooth enamel were estimated using the electron spin resonance (ESR; or EPR, electron paramagnetic resonance) method and compared with the cytogenetically estimated doses from the same survivors. The ESR method is only weakly dependent on the photon energy and independent of the years elapsed since an exposure. Both ESR and cytogenetic doses were estimated from 107 survivors. The latter estimates were made by assuming that although a part of the cells examined could be lymphoid stem or precursor cells at the time of exposure, all the cells had the same radiosensitivity as blood lymphocytes, and that the A-bomb gamma-ray spectrum was the same as that of the 60 Co gamma rays. Subsequently, ESR and cytogenetic endpoints were used to estimate the kerma doses using individual DS02R1 information on shielding conditions. The results showed that the two sets of kerma doses were in close agreement, indicating that perhaps no correction is needed in estimating atomic bomb gamma-ray doses from the cytogenetically estimated 60 Co gamma-ray equivalent doses. The present results will make it possible to directly compare cytogenetic doses with the physically estimated doses of the survivors, which would pave the way for testing whether or not there are any systematic trends or factors affecting physically estimated doses.
Jansen, Esther J S; Dijkman, Koen P; van Lingen, Richard A; de Vries, Willem B; Vijlbrief, Daniel C; de Boode, Willem P; Andriessen, Peter
2017-10-01
The aim of this study was to identify inter-centre differences in persistent ductus arteriosus treatment and their related outcomes. Materials and methods We carried out a retrospective, multicentre study including infants between 24+0 and 27+6 weeks of gestation in the period between 2010 and 2011. In all centres, echocardiography was used as the standard procedure to diagnose a patent ductus arteriosus and to document ductal closure. In total, 367 preterm infants were included. All four participating neonatal ICU had a comparable number of preterm infants; however, differences were observed in the incidence of treatment (33-63%), choice and dosing of medication (ibuprofen or indomethacin), number of pharmacological courses (1-4), and the need for surgical ligation after failure of pharmacological treatment (8-52%). Despite the differences in treatment, we found no difference in short-term morbidity between the centres. Adjusted mortality showed independent risk contribution of gestational age, birth weight, ductal ligation, and perinatal centre. Using benchmarking as a tool identified inter-centre differences. In these four perinatal centres, the factors that explained the differences in patent ductus arteriosus treatment are quite complex. Timing, choice of medication, and dosing are probably important determinants for successful patent ductus arteriosus closure.
Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard
2013-01-01
Purpose: With emergence of clinical outcomes databases as tools utilized routinely within institutions, comes need for software tools to support automated statistical analysis of these large data sets and intrainstitutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. Methods: A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code was evaluated using benchmark data sets. Results: The approach provides data needed to evaluate combinations of statistical measurements for ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operator characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. Conclusions: The work demonstrates the viability of the design approach and the software tool for analysis of large data sets. PMID:24320426
Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard
2013-11-01
With emergence of clinical outcomes databases as tools utilized routinely within institutions, comes need for software tools to support automated statistical analysis of these large data sets and intrainstitutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code was evaluated using benchmark data sets. The approach provides data needed to evaluate combinations of statistical measurements for ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operator characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. The work demonstrates the viability of the design approach and the software tool for analysis of large data sets.
Key Performance Indicators in the Evaluation of the Quality of Radiation Safety Programs.
Schultz, Cheryl Culver; Shaffer, Sheila; Fink-Bennett, Darlene; Winokur, Kay
2016-08-01
Beaumont is a multiple hospital health care system with a centralized radiation safety department. The health system operates under a broad scope Nuclear Regulatory Commission license but also maintains several other limited use NRC licenses in off-site facilities and clinics. The hospital-based program is expansive including diagnostic radiology and nuclear medicine (molecular imaging), interventional radiology, a comprehensive cardiovascular program, multiple forms of radiation therapy (low dose rate brachytherapy, high dose rate brachytherapy, external beam radiotherapy, and gamma knife), and the Research Institute (including basic bench top, human and animal). Each year, in the annual report, data is analyzed and then tracked and trended. While any summary report will, by nature, include items such as the number of pieces of equipment, inspections performed, staff monitored and educated and other similar parameters, not all include an objective review of the quality and effectiveness of the program. Through objective numerical data Beaumont adopted seven key performance indicators. The assertion made is that key performance indicators can be used to establish benchmarks for evaluation and comparison of the effectiveness and quality of radiation safety programs. Based on over a decade of data collection, and adoption of key performance indicators, this paper demonstrates one way to establish objective benchmarking for radiation safety programs in the health care environment.
Fournier, K; Tebby, C; Zeman, F; Glorennec, P; Zmirou-Navier, D; Bonvallot, N
2016-02-01
Semi-Volatile Organic Compounds (SVOCs) are commonly present in dwellings and several are suspected of having effects on male reproductive function mediated by an endocrine disruption mode of action. To improve knowledge of the health impact of these compounds, cumulative toxicity indicators are needed. This work derives Benchmark Doses (BMD) and Relative Potency Factors (RPF) for SVOCs acting on the male reproductive system through the same mode of action. We included SVOCs fulfilling the following conditions: detection frequency (>10%) in French dwellings, availability of data on the mechanism/mode of action for male reproductive toxicity, and availability of comparable dose-response relationships. Of 58 SVOCs selected, 18 induce a decrease in serum testosterone levels. Six have sufficient and comparable data to derive BMDs based on 10 or 50% of the response. The SVOCs inducing the largest decrease in serum testosterone concentration are: for 10%, bisphenol A (BMD10 = 7.72E-07 mg/kg bw/d; RPF10 = 7,033,679); for 50%, benzo[a]pyrene (BMD50 = 0.030 mg/kg bw/d; RPF50 = 1630), and the one inducing the smallest one is benzyl butyl phthalate (RPF10 and RPF50 = 0.095). This approach encompasses contaminants from diverse chemical families acting through similar modes of action, and makes possible a cumulative risk assessment in indoor environments. The main limitation remains the lack of comparable toxicological data. Copyright © 2015 Elsevier Inc. All rights reserved.
Alcohol calibration of tests measuring skills related to car driving.
Jongen, Stefan; Vuurman, Eric; Ramaekers, Jan; Vermeeren, Annemiek
2014-06-01
Medication and illicit drugs can have detrimental side effects which impair driving performance. A drug's impairing potential should be determined by well-validated, reliable, and sensitive tests and ideally be calibrated by benchmark drugs and doses. To date, no consensus has been reached on the issue of which psychometric tests are best suited for initial screening of a drug's driving impairment potential. The aim of this alcohol calibration study is to determine which performance tests are useful to measure drug-induced impairment. The effects of alcohol are used to compare the psychometric quality between tests and as benchmark to quantify performance changes in each test associated with potentially impairing drug effects. Twenty-four healthy volunteers participated in a double-blind, four-way crossover study. Treatments were placebo and three different doses of alcohol leading to blood alcohol concentrations (BACs) of 0.2, 0.5, and 0.8 g/L. Main effects of alcohol were found in most tests. Compared with placebo, performance in the Divided Attention Test (DAT) was significantly impaired after all alcohol doses and performance in the Psychomotor Vigilance Test (PVT) and the Balance Test was impaired with a BAC of 0.5 and 0.8 g/L. The largest effect sizes were found on postural balance with eyes open and mean reaction time in the divided attention and the psychomotor vigilance test. The preferable tests for initial screening are the DAT and the PVT, as these tests were most sensitive to the impairing effects of alcohol and being considerably valid in assessing potential driving impairment.
Benchmarking the MCNP Monte Carlo code with a photon skyshine experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsher, R.H.; Hsu, Hsiao Hua; Harvey, W.F.
1993-07-01
The MCNP Monte Carlo transport code is used by the Los Alamos National Laboratory Health and Safety Division for a broad spectrum of radiation shielding calculations. One such application involves the determination of skyshine dose for a variety of photon sources. To verify the accuracy of the code, it was benchmarked with the Kansas State Univ. (KSU) photon skyshine experiment of 1977. The KSU experiment for the unshielded source geometry was simulated in great detail to include the contribution of groundshine, in-silo photon scatter, and the effect of spectral degradation in the source capsule. The standard deviation of the KSUmore » experimental data was stated to be 7%, while the statistical uncertainty of the simulation was kept at or under 1%. The results of the simulation agreed closely with the experimental data, generally to within 6%. At distances of under 100 m from the silo, the modeling of the in-silo scatter was crucial to achieving close agreement with the experiment. Specifically, scatter off the top layer of the source cask accounted for [approximately]12% of the dose at 50 m. At distance >300m, using the [sup 60]Co line spectrum led to a dose overresponse as great as 19% at 700 m. It was necessary to use the actual source spectrum, which includes a Compton tail from photon collisions in the source capsule, to achieve close agreement with experimental data. These results highlight the importance of using Monte Carlo transport techniques to account for the nonideal features of even simple experiments''.« less
A deterministic partial differential equation model for dose calculation in electron radiotherapy.
Duclous, R; Dubroca, B; Frank, M
2010-07-07
High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g.Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of delta electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.
A deterministic partial differential equation model for dose calculation in electron radiotherapy
NASA Astrophysics Data System (ADS)
Duclous, R.; Dubroca, B.; Frank, M.
2010-07-01
High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of δ electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.
NASA Astrophysics Data System (ADS)
Gómez, D. D.; Piñón, D. A.; Smalley, R.; Bevis, M.; Cimbaro, S. R.; Lenzano, L. E.; Barón, J.
2016-03-01
The 2010, (Mw 8.8) Maule, Chile, earthquake produced large co-seismic displacements and non-secular, post-seismic deformation, within latitudes 28°S-40°S extending from the Pacific to the Atlantic oceans. Although these effects are easily resolvable by fitting geodetic extended trajectory models (ETM) to continuous GPS (CGPS) time series, the co- and post-seismic deformation cannot be determined at locations without CGPS (e.g., on passive geodetic benchmarks). To estimate the trajectories of passive geodetic benchmarks, we used CGPS time series to fit an ETM that includes the secular South American plate motion and plate boundary deformation, the co-seismic discontinuity, and the non-secular, logarithmic post-seismic transient produced by the earthquake in the Posiciones Geodésicas Argentinas 2007 (POSGAR07) reference frame (RF). We then used least squares collocation (LSC) to model both the background secular inter-seismic and the non-secular post-seismic components of the ETM at the locations without CGPS. We tested the LSC modeled trajectories using campaign and CGPS data that was not used to generate the model and found standard deviations (95 % confidence level) for position estimates for the north and east components of 3.8 and 5.5 mm, respectively, indicating that the model predicts the post-seismic deformation field very well. Finally, we added the co-seismic displacement field, estimated using an elastic finite element model. The final, trajectory model allows accessing the POSGAR07 RF using post-Maule earthquake coordinates within 5 cm for ˜ 91 % of the passive test benchmarks.
NASA Astrophysics Data System (ADS)
Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.
2011-01-01
This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK which experienced significant surface water flooding in the June and July floods of 2007 in the UK. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution are analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution which reiterates previous findings that errors in coarse scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem to be inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
NASA Astrophysics Data System (ADS)
Ishikawa, Tetsuo; Yasumura, Seiji; Ozasa, Kotaro; Kobashi, Gen; Yasuda, Hiroshi; Miyazaki, Makoto; Akahane, Keiichi; Yonai, Shunsuke; Ohtsuru, Akira; Sakai, Akira; Sakata, Ritsu; Kamiya, Kenji; Abe, Masafumi
2015-08-01
The Fukushima Health Management Survey (including the Basic Survey for external dose estimation and four detailed surveys) was launched after the Fukushima Dai-ichi Nuclear Power Plant accident. The Basic Survey consists of a questionnaire that asks Fukushima Prefecture residents about their behavior in the first four months after the accident; and responses to the questionnaire have been returned from many residents. The individual external doses are estimated by using digitized behavior data and a computer program that included daily gamma ray dose rate maps drawn after the accident. The individual external doses of 421,394 residents for the first four months (excluding radiation workers) had a distribution as follows: 62.0%, <1 mSv 94.0%, <2 mSv 99.4%, <3 mSv. The arithmetic mean and maximum for the individual external doses were 0.8 and 25 mSv, respectively. While most dose estimation studies were based on typical scenarios of evacuation and time spent inside/outside, the Basic Survey estimated doses considering individually different personal behaviors. Thus, doses for some individuals who did not follow typical scenarios could be revealed. Even considering such extreme cases, the estimated external doses were generally low and no discernible increased incidence of radiation-related health effects is expected.
Ishikawa, Tetsuo; Yasumura, Seiji; Ozasa, Kotaro; Kobashi, Gen; Yasuda, Hiroshi; Miyazaki, Makoto; Akahane, Keiichi; Yonai, Shunsuke; Ohtsuru, Akira; Sakai, Akira; Sakata, Ritsu; Kamiya, Kenji; Abe, Masafumi
2015-01-01
The Fukushima Health Management Survey (including the Basic Survey for external dose estimation and four detailed surveys) was launched after the Fukushima Dai-ichi Nuclear Power Plant accident. The Basic Survey consists of a questionnaire that asks Fukushima Prefecture residents about their behavior in the first four months after the accident; and responses to the questionnaire have been returned from many residents. The individual external doses are estimated by using digitized behavior data and a computer program that included daily gamma ray dose rate maps drawn after the accident. The individual external doses of 421,394 residents for the first four months (excluding radiation workers) had a distribution as follows: 62.0%, <1 mSv; 94.0%, <2 mSv; 99.4%, <3 mSv. The arithmetic mean and maximum for the individual external doses were 0.8 and 25 mSv, respectively. While most dose estimation studies were based on typical scenarios of evacuation and time spent inside/outside, the Basic Survey estimated doses considering individually different personal behaviors. Thus, doses for some individuals who did not follow typical scenarios could be revealed. Even considering such extreme cases, the estimated external doses were generally low and no discernible increased incidence of radiation-related health effects is expected. PMID:26239643
Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-10-01
The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was [Formula: see text], with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-01-01
Abstract. The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070
Alonzo, Frédéric; Hertel-Aas, Turid; Real, Almudena; Lance, Emilie; Garcia-Sanchez, Laurent; Bradshaw, Clare; Vives I Batlle, Jordi; Oughton, Deborah H; Garnier-Laplace, Jacqueline
2016-02-01
In this study, we modelled population responses to chronic external gamma radiation in 12 laboratory species (including aquatic and soil invertebrates, fish and terrestrial mammals). Our aim was to compare radiosensitivity between individual and population endpoints and to examine how internationally proposed benchmarks for environmental radioprotection protected species against various risks at the population level. To do so, we used population matrix models, combining life history and chronic radiotoxicity data (derived from laboratory experiments and described in the literature and the FREDERICA database) to simulate changes in population endpoints (net reproductive rate R0, asymptotic population growth rate λ, equilibrium population size Neq) for a range of dose rates. Elasticity analyses of models showed that population responses differed depending on the affected individual endpoint (juvenile or adult survival, delay in maturity or reduction in fecundity), the considered population endpoint (R0, λ or Neq) and the life history of the studied species. Among population endpoints, net reproductive rate R0 showed the lowest EDR10 (effective dose rate inducing 10% effect) in all species, with values ranging from 26 μGy h(-1) in the mouse Mus musculus to 38,000 μGy h(-1) in the fish Oryzias latipes. For several species, EDR10 for population endpoints were lower than the lowest EDR10 for individual endpoints. Various population level risks, differing in severity for the population, were investigated. Population extinction (predicted when radiation effects caused population growth rate λ to decrease below 1, indicating that no population growth in the long term) was predicted for dose rates ranging from 2700 μGy h(-1) in fish to 12,000 μGy h(-1) in soil invertebrates. A milder risk, that population growth rate λ will be reduced by 10% of the reduction causing extinction, was predicted for dose rates ranging from 24 μGy h(-1) in mammals to 1800 μGy h(-1) in soil invertebrates. These predictions suggested that proposed reference benchmarks from the literature for different taxonomic groups protected all simulated species against population extinction. A generic reference benchmark of 10 μGy h(-1) protected all simulated species against 10% of the effect causing population extinction. Finally, a risk of pseudo-extinction was predicted from 2.0 μGy h(-1) in mammals to 970 μGy h(-1) in soil invertebrates, representing a slight but statistically significant population decline, the importance of which remains to be evaluated in natural settings. Copyright © 2015 Elsevier Ltd. All rights reserved.
Defense Acquisitions: Addressing Incentives is Key to Further Reform Efforts
2014-04-30
championed sound management practices, such as realistic cost estimating, prototyping, and systems engineering . While some progress has been made...other reforms have championed sound management practices, such as realistic cost estimating, prototyping, and systems engineering . DOD’s declining...principles from disciplines such as systems engineering , as well as lessons learned and past reforms. The body of work we have done on benchmarking
Protein Models Docking Benchmark 2
Anishchenko, Ivan; Kundrotas, Petras J.; Tuzikov, Alexander V.; Vakser, Ilya A.
2015-01-01
Structural characterization of protein-protein interactions is essential for our ability to understand life processes. However, only a fraction of known proteins have experimentally determined structures. Such structures provide templates for modeling of a large part of the proteome, where individual proteins can be docked by template-free or template-based techniques. Still, the sensitivity of the docking methods to the inherent inaccuracies of protein models, as opposed to the experimentally determined high-resolution structures, remains largely untested, primarily due to the absence of appropriate benchmark set(s). Structures in such a set should have pre-defined inaccuracy levels and, at the same time, resemble actual protein models in terms of structural motifs/packing. The set should also be large enough to ensure statistical reliability of the benchmarking results. We present a major update of the previously developed benchmark set of protein models. For each interactor, six models were generated with the model-to-native Cα RMSD in the 1 to 6 Å range. The models in the set were generated by a new approach, which corresponds to the actual modeling of new protein structures in the “real case scenario,” as opposed to the previous set, where a significant number of structures were model-like only. In addition, the larger number of complexes (165 vs. 63 in the previous set) increases the statistical reliability of the benchmarking. We estimated the highest accuracy of the predicted complexes (according to CAPRI criteria), which can be attained using the benchmark structures. The set is available at http://dockground.bioinformatics.ku.edu. PMID:25712716
Su, Lin; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X. George
2014-01-01
Purpose: Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHERRT is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head & neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung case, and head & neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to specific architecture of GPU, modified Woodcock tracking algorithm performed inferior to the original one. ARCHERRT achieves a fast speed for PSF-based dose calculations. With a single M2090 card, the simulations cost about 60, 50, 80 s for three cases, respectively, with the 1% statistical error in the PTV. Using the latest K40 card, the simulations are 1.7–1.8 times faster. More impressively, six M2090 cards could finish the simulations in 8.9–13.4 s. For comparison, the same simulations on Intel E5-2620 (12 hyperthreading) cost about 500–800 s. Conclusions: ARCHERRT was developed successfully to perform fast and accurate MC dose calculation for radiotherapy using PSFs and patient CT phantoms. PMID:24989378
Su, Lin; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X George
2014-07-01
Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHERRT is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head & neck. To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung case, and head & neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to specific architecture of GPU, modified Woodcock tracking algorithm performed inferior to the original one. ARCHERRT achieves a fast speed for PSF-based dose calculations. With a single M2090 card, the simulations cost about 60, 50, 80 s for three cases, respectively, with the 1% statistical error in the PTV. Using the latest K40 card, the simulations are 1.7-1.8 times faster. More impressively, six M2090 cards could finish the simulations in 8.9-13.4 s. For comparison, the same simulations on Intel E5-2620 (12 hyperthreading) cost about 500-800 s. ARCHERRT was developed successfully to perform fast and accurate MC dose calculation for radiotherapy using PSFs and patient CT phantoms.
Cumming, Oliver; Elliott, Mark; Overbo, Alycia; Bartram, Jamie
2014-01-01
Safe drinking water and sanitation are important determinants of human health and wellbeing and have recently been declared human rights by the international community. Increased access to both were included in the Millennium Development Goals under a single dedicated target for 2015. This target was reached in 2010 for water but sanitation will fall short; however, there is an important difference in the benchmarks used for assessing global access. For drinking water the benchmark is community-level access whilst for sanitation it is household-level access, so a pit latrine shared between households does not count toward the Millennium Development Goal (MDG) target. We estimated global progress for water and sanitation under two scenarios: with equivalent household- and community-level benchmarks. Our results demonstrate that the “sanitation deficit” is apparent only when household-level sanitation access is contrasted with community-level water access. When equivalent benchmarks are used for water and sanitation, the global deficit is as great for water as it is for sanitation, and sanitation progress in the MDG-period (1990–2015) outstrips that in water. As both drinking water and sanitation access yield greater benefits at the household-level than at the community-level, we conclude that any post–2015 goals should consider a household-level benchmark for both. PMID:25502659
Cumming, Oliver; Elliott, Mark; Overbo, Alycia; Bartram, Jamie
2014-01-01
Safe drinking water and sanitation are important determinants of human health and wellbeing and have recently been declared human rights by the international community. Increased access to both were included in the Millennium Development Goals under a single dedicated target for 2015. This target was reached in 2010 for water but sanitation will fall short; however, there is an important difference in the benchmarks used for assessing global access. For drinking water the benchmark is community-level access whilst for sanitation it is household-level access, so a pit latrine shared between households does not count toward the Millennium Development Goal (MDG) target. We estimated global progress for water and sanitation under two scenarios: with equivalent household- and community-level benchmarks. Our results demonstrate that the "sanitation deficit" is apparent only when household-level sanitation access is contrasted with community-level water access. When equivalent benchmarks are used for water and sanitation, the global deficit is as great for water as it is for sanitation, and sanitation progress in the MDG-period (1990-2015) outstrips that in water. As both drinking water and sanitation access yield greater benefits at the household-level than at the community-level, we conclude that any post-2015 goals should consider a household-level benchmark for both.
NASA Astrophysics Data System (ADS)
Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-03-01
The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
Bedwell, P; Mortimer, K; Wellings, J; Sherwood, J; Leadbetter, S J; Haywood, S M; Charnock, T; Jones, A R; Hort, M C
2015-12-01
The earthquake and tsunami on 11 March 2011, centred off the east coast of Japan, caused considerable destruction and substantial loss of life along large swathes of the Japanese coastline. The tsunami damaged the Fukushima Daiichi nuclear power plant (NPP), resulting in prolonged releases of radioactive material into the environment. This paper assesses the doses received by members of the public in Japan. The assessment is based on an estimated source term and atmospheric dispersion modelling rather than monitoring data. It is evident from this assessment that across the majority of Japan the estimates of dose are very low, for example they are estimated to be less than the annual average dose from natural background radiation in Japan. Even in the regions local to Fukushima Daiichi NPP (and not affected by any form of evacuation) the maximum lifetime effective dose is estimated to be well below the cumulative natural background dose over the same period. The impact of the urgent countermeasures on the estimates of dose was considered. And the relative contribution to dose from the range of exposure pathways and radionuclides were evaluated. Analysis of estimated doses focused on the geographic irregularity and the impact of the meteorological conditions. For example the dose to an infant's thyroid received over the first year was estimated to be greater in Hirono than in the non-evacuated region of Naraha, despite Hirono being further from the release location. A number of factors were identified and thought to contribute towards this outcome, including the local wind pattern which resulted in the recirculation of part of the release. The non-uniform nature of dose estimates strengthens the case for evaluations based on dispersion modelling.
Electric Power Consumption Coefficients for U.S. Industries: Regional Estimation and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boero, Riccardo
Economic activity relies on electric power provided by electrical generation, transmission, and distribution systems. This paper presents a method developed at Los Alamos National Laboratory to estimate electric power consumption by different industries in the United States. Results are validated through comparisons with existing literature and benchmarking data sources. We also discuss the limitations and applications of the presented method, such as estimating indirect electric power consumption and assessing the economic impact of power outages based on input-output economic models.
Kashcheev, Valery V; Pryakhin, Evgeny A; Menyaylo, Alexander N; Chekin, Sergey Yu; Ivanov, Viktor K
2014-06-01
The current study has two aims: the first is to quantify the difference between radiation risks estimated with the use of organ or effective doses, particularly when planning pediatric and adult computed tomography (CT) examinations. The second aim is to determine the method of calculating organ doses and cancer risk using dose-length product (DLP) for typical routine CT examinations. In both cases, the radiation-induced cancer risks from medical CT examinations were evaluated as a function of gender and age. Lifetime attributable risk values from CT scanning were estimated with the use of ICRP (Publication 103) risk models and Russian national medical statistics data. For populations under the age of 50 y, the risk estimates based on organ doses usually are 30% higher than estimates based on effective doses. In older populations, the difference can be up to a factor of 2.5. The typical distributions of organ doses were defined for Chest Routine, Abdominal Routine, and Head Routine examinations. The distributions of organ doses were dependent on the anatomical region of scanning. The most exposed organs/tissues were thyroid, breast, esophagus, and lungs in cases of Chest Routine examination; liver, stomach, colon, ovaries, and bladder in cases of Abdominal Routine examination; and brain for Head Routine examinations. The conversion factors for calculation of typical organ doses or tissues at risk using DLP were determined. Lifetime attributable risk of cancer estimated with organ doses calculated from DLP was compared with the risk estimated on the basis of organ doses measured with the use of silicon photodiode dosimeters. The estimated difference in LAR is less than 29%.
Benchmarking protein classification algorithms via supervised cross-validation.
Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor
2008-04-24
Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. A combination of supervised and random sampling was used to construct model datasets, suitable for algorithm comparison.
Dexter, Franklin; O'Neill, Liam; Xin, Lei; Ledolter, Johannes
2008-12-01
We use resampling of data to explore the basic statistical properties of super-efficient data envelopment analysis (DEA) when used as a benchmarking tool by the manager of a single decision-making unit. Our focus is the gaps in the outputs (i.e., slacks adjusted for upward bias), as they reveal which outputs can be increased. The numerical experiments show that the estimates of the gaps fail to exhibit asymptotic consistency, a property expected for standard statistical inference. Specifically, increased sample sizes were not always associated with more accurate forecasts of the output gaps. The baseline DEA's gaps equaled the mode of the jackknife and the mode of resampling with/without replacement from any subset of the population; usually, the baseline DEA's gaps also equaled the median. The quartile deviations of gaps were close to zero when few decision-making units were excluded from the sample and the study unit happened to have few other units contributing to its benchmark. The results for the quartile deviations can be explained in terms of the effective combinations of decision-making units that contribute to the DEA solution. The jackknife can provide all the combinations contributing to the quartile deviation and only needs to be performed for those units that are part of the benchmark set. These results show that there is a strong rationale for examining DEA results with a sensitivity analysis that excludes one benchmark hospital at a time. This analysis enhances the quality of decision support using DEA estimates for the potential ofa decision-making unit to grow one or more of its outputs.
Payer leverage and hospital compliance with a benchmark: a population-based observational study
Hollingsworth, John M; Krein, Sarah L; Miller, David C; DeMonner, Sonya; Hollenbeck, Brent K
2007-01-01
Background Since 1976, Medicare has linked reimbursement for hospitals performing organ transplants to the attainment of certain benchmarks, including transplant volume. While Medicare is a stakeholder in all transplant services, its role in renal transplantation is likely greater, given its coverage of end-stage renal disease. Thus, Medicare's transplant experience allows us to examine the role of payer leverage in motivating hospital benchmark compliance. Methods Nationally representative discharge data for kidney (n = 29,272), liver (n = 7,988), heart (n = 3,530), and lung (n = 1,880) transplants from the Nationwide Inpatient Sample (1993 – 2003) were employed. Logistic regression techniques with robust variance estimators were used to examine the relationship between hospital volume compliance and Medicare market share; generalized estimating equations were used to explore the association between patient-level operative mortality and hospital volume compliance. Results Medicare's transplant market share varied by organ [57%, 28%, 27%, and 18% for kidney, lung, heart, and liver transplants, respectively (P < 0.001)]. Volume-based benchmark compliance varied by transplant type [85%, 75%, 44%, and 39% for kidney, liver, heart, and lung transplants, respectively (P < 0.001)], despite a lower odds of operative mortality at compliant hospitals. Adjusting for organ supply, high market leverage was independently associated with compliance at hospitals transplanting kidneys (OR, 143.00; 95% CI, 18.53 – 1103.49), hearts (OR, 2.84; 95% CI, 1.51 – 5.34), and lungs (OR, 3.24; 95% CI, 1.57 – 6.67). Conclusion These data highlight the influence of payer leverage–an important contextual factor in value-based purchasing initiatives. For uncommon diagnoses, these data suggest that at least 30% of a provider's patients might need to be "at risk" for an incentive to motivate compliance. PMID:17640364
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebe, A.; Leveling, A.; Lu, T.
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances frommore » those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed a good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.« less
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
NASA Astrophysics Data System (ADS)
Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.
2018-01-01
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
Marín, Silvia; Pardo, Olga; Báguena, Rosario; Font, Guillermina; Yusà, Vicent
2017-02-01
Dietary exposure of the Valencian region population to lead, cadmium, inorganic arsenic (iAs), chromium, copper, tin and methylmercury (meHg) was assessed in a total diet study carried out in the region of Valencia in 2010-11. A total of 8100 food samples were collected and analysed. Occurrence data were combined with consumption data to estimate dietary exposure in adults (> 15 years of age) and young children (6-15 years of age). The estimated intake was calculated by a probabilistic approach. Average intake levels (optimistic scenario) for lead, iAs, chromium and tin were 0.21, 0.08, 1.79 and 1.87 µg kg - 1 bw day -1 respectively; for Cd and meHg average intake levels were 0.77 and 0.54 µg kg - 1 bw week -1 , respectively, and for Cu, 1.60 mg day -1 . In terms of risk characterisation, the results showed that 2.84% of the adult population may exceed the BMDL 10 (benchmark dose lower confidence limit) established for Pb, which is linked to renal effects; whereas 28.01% of the young children population may exceed the BMDL 01 related to neurodevelopment effects. In addition, 8.47% of the adult population and 12.32% of young children exceeded the meHg tolerable weekly intake (TWI).
A novel approach for estimating ingested dose associated with paracetamol overdose
Zurlinden, Todd J.; Heard, Kennon
2015-01-01
Aim In cases of paracetamol (acetaminophen, APAP) overdose, an accurate estimate of tissue‐specific paracetamol pharmacokinetics (PK) and ingested dose can offer health care providers important information for the individualized treatment and follow‐up of affected patients. Here a novel methodology is presented to make such estimates using a standard serum paracetamol measurement and a computational framework. Methods The core component of the computational framework was a physiologically‐based pharmacokinetic (PBPK) model developed and evaluated using an extensive set of human PK data. Bayesian inference was used for parameter and dose estimation, allowing the incorporation of inter‐study variability, and facilitating the calculation of uncertainty in model outputs. Results Simulations of paracetamol time course concentrations in the blood were in close agreement with experimental data under a wide range of dosing conditions. Also, predictions of administered dose showed good agreement with a large collection of clinical and emergency setting PK data over a broad dose range. In addition to dose estimation, the platform was applied for the determination of optimal blood sampling times for dose reconstruction and quantitation of the potential role of paracetamol conjugate measurement on dose estimation. Conclusions Current therapies for paracetamol overdose rely on a generic methodology involving the use of a clinical nomogram. By using the computational framework developed in this study, serum sample data, and the individual patient's anthropometric and physiological information, personalized serum and liver pharmacokinetic profiles and dose estimate could be generated to help inform an individualized overdose treatment and follow‐up plan. PMID:26441245
A novel approach for estimating ingested dose associated with paracetamol overdose.
Zurlinden, Todd J; Heard, Kennon; Reisfeld, Brad
2016-04-01
In cases of paracetamol (acetaminophen, APAP) overdose, an accurate estimate of tissue-specific paracetamol pharmacokinetics (PK) and ingested dose can offer health care providers important information for the individualized treatment and follow-up of affected patients. Here a novel methodology is presented to make such estimates using a standard serum paracetamol measurement and a computational framework. The core component of the computational framework was a physiologically-based pharmacokinetic (PBPK) model developed and evaluated using an extensive set of human PK data. Bayesian inference was used for parameter and dose estimation, allowing the incorporation of inter-study variability, and facilitating the calculation of uncertainty in model outputs. Simulations of paracetamol time course concentrations in the blood were in close agreement with experimental data under a wide range of dosing conditions. Also, predictions of administered dose showed good agreement with a large collection of clinical and emergency setting PK data over a broad dose range. In addition to dose estimation, the platform was applied for the determination of optimal blood sampling times for dose reconstruction and quantitation of the potential role of paracetamol conjugate measurement on dose estimation. Current therapies for paracetamol overdose rely on a generic methodology involving the use of a clinical nomogram. By using the computational framework developed in this study, serum sample data, and the individual patient's anthropometric and physiological information, personalized serum and liver pharmacokinetic profiles and dose estimate could be generated to help inform an individualized overdose treatment and follow-up plan. © 2015 The British Pharmacological Society.
Normalized dose data for upper gastrointestinal tract contrast studies performed to infants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damilakis, John; Stratakis, John; Raissaki, Maria
The aim of the current study was to (a) provide normalized dose data for the estimation of the radiation dose from upper gastrointestinal tract contrast (UGIC) studies carried out to infants and (b) estimate the average patient dose and risks associated with radiation from UGIC examinations performed in our institution. Organ and effective doses, normalized to entrance skin dose (ESD) and dose area product (DAP) were estimated for UGIC procedures utilizing the Monte Carlo N-particle (MCNP) transport code and two mathematical phantoms, one corresponding to the size of a newborn and one to the size of a 1-year-old child. Themore » validity of the MCNP results was verified by comparison with dose data obtained in physical anthropomorphic phantoms simulating a newborn and a 1-year-old infant using thermoluminescence dosimetry (TLD). Data were also collected from 25 consecutive UGIC examinations performed to infants. Study participants were (a) 12 infants aged from 0.5 to 5.9 months (group 1) and (b) 13 infants aged from 6 to 15 months (group 2). For each examination, ESD and dose to comforters were measured using TLD. Patient effective doses were estimated using normalized dose data obtained in the simulation study. The risk for fatal cancer induction was estimated using appropriate coefficients. The results consist of tabulated dose data normalized to ESD or DAP for the estimation of patient dose. Conversion coefficients were estimated for various tube potentials and beam filtration values. The mean total fluoroscopy time was 1.26 and 1.62 min for groups 1 and 2, respectively. The average effective dose was 1.6 mSv for group 1 and 1.9 mSv for group 2. The risk of cancer attributable to the radiation exposure associated with a typical UGIC study was found to be up to 3 per 10 000 infants undergoing an UGIC examination. The mean radiation dose absorbed by the hands of comforters was 47 {mu}Gy. In conclusion, estimation of radiation doses associated with UGIC studies performed to infants can be made using the normalized dose data provided in the current study. Radiation dose values associated with UGIC examinations carried out to infants are not low and should be minimized as much as possible.« less
Patient doses in the healing arts
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Determinations of radiation doses to patients from x-ray procedures and radiopharmaceuticals are detailed in this chapter. Instructions are given for estimating doses from x-ray procedures. For selected pediatric procedures, the methodology developed by the Food and Drug Administration is presented. The effect of testicular and ovarian shielding is illustrated in tabular form. Estimates of the Genetically Significant Dose (GSD) and mean annual bone marrow dose from diagnostic x-ray examinations are presented for the US populations (1990). This chapter also provides tables of patient doses from selected nuclear medicine procedures and estimates of fetal doses from {sup 131}I.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simon, S.L.; Kerber, R.L.; Stevens, W.
This paper discusses the dosimetry methodology used to estimate bone marrow dose and the results of dosimetry calculations for 6,507 subjects in an epidemiologic case. control study of leukemia among Utah residents. The estimated doses were used to determine if a higher incidence of leukemia among residents of Utah could have been attributed to exposure to radioactive fallout from above-ground nuclear weapons tests conducted at the Nevada Test Site. The objective of the dosimetry methodology was to estimate absorbed dose to active marrow specific to each case and each control subject. Data on the residence of each subject were availablemore » from records of the Church of Jesus Christ of Latter-day Saints. Deposition of fallout was determined from databases developed using historical measurements and exposure for each subject from each test was estimated using those data. Exposure was converted to dose by applying an age-dependent dose conversion factor and a factor for shielding. The median dose for all case and control subjects was 3.2 mGy. The maximum estimated mean dose for any case or control was 29 {plus_minus} 5.6 mGy (a resident of Washington County, UT). Uncertainties were estimated for each estimated dose. The results of the dosimetry calculations were applied in an epidemiological analysis.« less
Assessing the quality of GEOID12B model through field surveys
NASA Astrophysics Data System (ADS)
Elaksher, Ahmed; Kamtchang, Franck; Wegmann, Christian; Guerrero, Adalberto
2018-01-01
Elevation differences have been determined through conventional ground surveying techniques for over a century. Since the mid-80s GPS, GLONASS and other satellite systems have modernized the means by which elevation differences are observed. In this article, we assessed the quality of GEIOD12B through long-occupation GNSS static surveys. A set of NGS benchmarks was occupied for at least one hour using dual-frequency GNSS receivers. Collected measurements were processed using a single CORS station at most 24 kilometers from the benchmarks. Geoid undulation values were driven by subtracting measured ellipsoidal heights from the orthometric heights posted on the NGS website. To assess the quality of GEOID12B, we compared our computed vertical shifts at the benchmarks with those estimated from GEOID12B published by NGS. In addition, Kriging model was used to interpolate local maps for the geoid undulations from the benchmark heights. The maps were compared with corresponding parts of GEOID12B. No biases were detected in the results and only shifts due to random errors were found. Discrepancies in the range of ten centimetres were noticed between our geoid undulation and the values available from NGS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christie, David, E-mail: david.christie@premion.com.au; Dear, Keith; Le, Thai
2011-07-15
Purpose: To establish benchmark outcomes for combined modality treatment to be used in future prospective studies of osteolymphoma (primary bone lymphoma). Methods and Materials: In 1999, the Trans-Tasman Radiation Oncology Group (TROG) invited the Australasian Leukemia and Lymphoma Group (ALLG) to collaborate on a prospective study of limited chemotherapy and radiotherapy for osteolymphoma. The treatment was designed to maintain efficacy but limit the risk of subsequent pathological fractures. Patient assessment included both functional imaging and isotope bone scanning. Treatment included three cycles of CHOP chemotherapy and radiation to a dose of 45 Gy in 25 fractions using a shrinking fieldmore » technique. Results: The trial closed because of slow accrual after 33 patients had been entered. Accrual was noted to slow down after Rituximab became readily available in Australia. After a median follow-up of 4.3 years, the five-year overall survival and local control rates are estimated at 90% and 72% respectively. Three patients had fractures at presentation that persisted after treatment, one with recurrent lymphoma. Conclusions: Relatively high rates of survival were achieved but the number of local failures suggests that the dose of radiotherapy should remain higher than it is for other types of lymphoma. Disability after treatment due to pathological fracture was not seen.« less
Elschot, Mattijs; Nijsen, Johannes F W; Lam, Marnix G E H; Smits, Maarten L J; Prince, Jip F; Viergever, Max A; van den Bosch, Maurice A A J; Zonnenberg, Bernard A; de Jong, Hugo W A M
2014-10-01
Radiation pneumonitis is a rare but serious complication of radioembolic therapy of liver tumours. Estimation of the mean absorbed dose to the lungs based on pretreatment diagnostic (99m)Tc-macroaggregated albumin ((99m)Tc-MAA) imaging should prevent this, with administered activities adjusted accordingly. The accuracy of (99m)Tc-MAA-based lung absorbed dose estimates was evaluated and compared to absorbed dose estimates based on pretreatment diagnostic (166)Ho-microsphere imaging and to the actual lung absorbed doses after (166)Ho radioembolization. This prospective clinical study included 14 patients with chemorefractory, unresectable liver metastases treated with (166)Ho radioembolization. (99m)Tc-MAA-based and (166)Ho-microsphere-based estimation of lung absorbed doses was performed on pretreatment diagnostic planar scintigraphic and SPECT/CT images. The clinical analysis was preceded by an anthropomorphic torso phantom study with simulated lung shunt fractions of 0 to 30 % to determine the accuracy of the image-based lung absorbed dose estimates after (166)Ho radioembolization. In the phantom study, (166)Ho SPECT/CT-based lung absorbed dose estimates were more accurate (absolute error range 0.1 to -4.4 Gy) than (166)Ho planar scintigraphy-based lung absorbed dose estimates (absolute error range 9.5 to 12.1 Gy). Clinically, the actual median lung absorbed dose was 0.02 Gy (range 0.0 to 0.7 Gy) based on posttreatment (166)Ho-microsphere SPECT/CT imaging. Lung absorbed doses estimated on the basis of pretreatment diagnostic (166)Ho-microsphere SPECT/CT imaging (median 0.02 Gy, range 0.0 to 0.4 Gy) were significantly better predictors of the actual lung absorbed doses than doses estimated on the basis of (166)Ho-microsphere planar scintigraphy (median 10.4 Gy, range 4.0 to 17.3 Gy; p < 0.001), (99m)Tc-MAA SPECT/CT imaging (median 2.5 Gy, range 1.2 to 12.3 Gy; p < 0.001), and (99m)Tc-MAA planar scintigraphy (median 5.5 Gy, range 2.3 to 18.2 Gy; p < 0.001). In clinical practice, lung absorbed doses are significantly overestimated by pretreatment diagnostic (99m)Tc-MAA imaging. Pretreatment diagnostic (166)Ho-microsphere SPECT/CT imaging accurately predicts lung absorbed doses after (166)Ho radioembolization.
Tang, Leilei; Guérard, Melanie; Zeller, Andreas
2014-01-01
Mutagenic and clastogenic effects of some DNA damaging agents such as methyl methanesulfonate (MMS) and ethyl methanesulfonate (EMS) have been demonstrated to exhibit a nonlinear or even "thresholded" dose-response in vitro and in vivo. DNA repair seems to be mainly responsible for these thresholds. To this end, we assessed several mutagenic alkylators in the Ames test with four different strains of Salmonella typhimurium: the alkyl transferases proficient strain TA1535 (Ogt+/Ada+), as well as the alkyl transferases deficient strains YG7100 (Ogt+/Ada-), YG7104 (Ogt-/Ada+) and YG7108 (Ogt-/Ada-). The known genotoxins EMS, MMS, temozolomide (TMZ), ethylnitrosourea (ENU) and methylnitrosourea (MNU) were tested in as many as 22 concentration levels. Dose-response curves were statistically fitted by the PROAST benchmark dose model and the Lutz-Lutz "hockeystick" model. These dose-response curves suggest efficient DNA-repair for lesions inflicted by all agents in strain TA1535. In the absence of Ogt, Ada is predominantly repairing methylations but not ethylations. It is concluded that the capacity of alkyl-transferases to successfully repair DNA lesions up to certain dose levels contributes to genotoxicity thresholds. Copyright © 2013 Wiley Periodicals, Inc.
Low-dose CT image reconstruction using gain intervention-based dictionary learning
NASA Astrophysics Data System (ADS)
Pathak, Yadunath; Arya, K. V.; Tiwari, Shailendra
2018-05-01
Computed tomography (CT) approach is extensively utilized in clinical diagnoses. However, X-ray residue in human body may introduce somatic damage such as cancer. Owing to radiation risk, research has focused on the radiation exposure distributed to patients through CT investigations. Therefore, low-dose CT has become a significant research area. Many researchers have proposed different low-dose CT reconstruction techniques. But, these techniques suffer from various issues such as over smoothing, artifacts, noise, etc. Therefore, in this paper, we have proposed a novel integrated low-dose CT reconstruction technique. The proposed technique utilizes global dictionary-based statistical iterative reconstruction (GDSIR) and adaptive dictionary-based statistical iterative reconstruction (ADSIR)-based reconstruction techniques. In case the dictionary (D) is predetermined, then GDSIR can be used and if D is adaptively defined then ADSIR is appropriate choice. The gain intervention-based filter is also used as a post-processing technique for removing the artifacts from low-dose CT reconstructed images. Experiments have been done by considering the proposed and other low-dose CT reconstruction techniques on well-known benchmark CT images. Extensive experiments have shown that the proposed technique outperforms the available approaches.
Scientific evidence supporting recreational water quality benchmarks primarily stems from epidemiological studies conducted at beaches impacted by human fecal sources. Epidemiological studies conducted at locations impacted by non-human faecal sources have provided ambiguous and ...
Hanford Environmental Dose Reconstruction Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cannon, S.D.; Finch, S.M.
1992-10-01
The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed from release to impact on humans (dose estimates):Source Terms, Environmental Transport, Environmental Monitoring Data, Demography, Food Consumption, and Agriculture, and Environmental Pathways and Dose Estimates.
Proof of concept and dose estimation with binary responses under model uncertainty.
Klingenberg, B
2009-01-30
This article suggests a unified framework for testing Proof of Concept (PoC) and estimating a target dose for the benefit of a more comprehensive, robust and powerful analysis in phase II or similar clinical trials. From a pre-specified set of candidate models, we choose the ones that best describe the observed dose-response. To decide which models, if any, significantly pick up a dose effect, we construct the permutation distribution of the minimum P-value over the candidate set. This allows us to find critical values and multiplicity adjusted P-values that control the familywise error rate of declaring any spurious effect in the candidate set as significant. Model averaging is then used to estimate a target dose. Popular single or multiple contrast tests for PoC, such as the Cochran-Armitage, Dunnett or Williams tests, are only optimal for specific dose-response shapes and do not provide target dose estimates with confidence limits. A thorough evaluation and comparison of our approach to these tests reveal that its power is as good or better in detecting a dose-response under various shapes with many more additional benefits: It incorporates model uncertainty in PoC decisions and target dose estimation, yields confidence intervals for target dose estimates and extends to more complicated data structures. We illustrate our method with the analysis of a Phase II clinical trial. Copyright (c) 2008 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Akahane, Keiichi; Yonai, Shunsuke; Fukuda, Shigekazu; Miyahara, Nobuyuki; Yasuda, Hiroshi; Iwaoka, Kazuki; Matsumoto, Masaki; Fukumura, Akifumi; Akashi, Makoto
2013-04-01
The great east Japan earthquake and subsequent tsunamis caused Fukushima Dai-ichi Nuclear Power Plant (NPP) accident. National Institute of Radiological Sciences (NIRS) developed the external dose estimation system for Fukushima residents. The system is being used in the Fukushima health management survey. The doses can be obtained by superimposing the behavior data of the residents on the dose rate maps. For grasping the doses, 18 evacuation patterns of the residents were assumed by considering the actual evacuation information before using the survey data. The doses of the residents from the deliberate evacuation area were relatively higher than those from the area within 20 km radius. The estimated doses varied from around 1 to 6 mSv for the residents evacuated from the representative places in the deliberate evacuation area. The maximum dose in 18 evacuation patterns was estimated to be 19 mSv.
Akahane, Keiichi; Yonai, Shunsuke; Fukuda, Shigekazu; Miyahara, Nobuyuki; Yasuda, Hiroshi; Iwaoka, Kazuki; Matsumoto, Masaki; Fukumura, Akifumi; Akashi, Makoto
2013-01-01
The great east Japan earthquake and subsequent tsunamis caused Fukushima Dai-ichi Nuclear Power Plant (NPP) accident. National Institute of Radiological Sciences (NIRS) developed the external dose estimation system for Fukushima residents. The system is being used in the Fukushima health management survey. The doses can be obtained by superimposing the behavior data of the residents on the dose rate maps. For grasping the doses, 18 evacuation patterns of the residents were assumed by considering the actual evacuation information before using the survey data. The doses of the residents from the deliberate evacuation area were relatively higher than those from the area within 20 km radius. The estimated doses varied from around 1 to 6 mSv for the residents evacuated from the representative places in the deliberate evacuation area. The maximum dose in 18 evacuation patterns was estimated to be 19 mSv.
Knights, Jonathan; Rohatagi, Shashank
2015-12-01
Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
Pulikottil-Jacob, Ruth; Connock, Martin; Kandala, Ngianga-Bakwin; Mistry, Hema; Grove, Amy; Freeman, Karoline; Costa, Matthew; Sutcliffe, Paul; Clarke, Aileen
2016-01-01
Total hip replacement for end stage arthritis of the hip is currently the most common elective surgical procedure. In 2007 about 7.5% of UK implants were metal-on-metal joint resurfacing (MoM RS) procedures. Due to poor revision performance and concerns about metal debris, the use of RS had declined by 2012 to about a 1% share of UK hip procedures. This study estimated the lifetime cost-effectiveness of metal-on-metal resurfacing (RS) procedures versus commonly employed total hip replacement (THR) methods. We performed a cost-utility analysis using a well-established multi-state semi-Markov model from an NHS and personal and social services perspective. We used individual patient data (IPD) from the National Joint Registry (NJR) for England and Wales on RS and THR surgery for osteoarthritis recorded from April 2003 to December 2012. We used flexible parametric modelling of NJR RS data to guide identification of patient subgroups and RS devices which delivered revision rates within the NICE 5% revision rate benchmark at 10 years. RS procedures overall have an estimated revision rate of 13% at 10 years, compared to <4% for most THR devices. New NICE guidance now recommends a revision rate benchmark of <5% at 10 years. 60% of RS implants in men and 2% in women were predicted to be within the revision benchmark. RS devices satisfying the 5% benchmark were unlikely to be cost-effective compared to THR at a standard UK willingness to pay of £20,000 per quality-adjusted life-year. However, the probability of cost effectiveness was sensitive to small changes in the costs of devices or in quality of life or revision rate estimates. Our results imply that in most cases RS has not been a cost-effective resource and should probably not be adopted by decision makers concerned with the cost effectiveness of hip replacement, or by patients concerned about the likelihood of revision, regardless of patient age or gender.
Accuracy and Precision of USNO GPS Carrier-Phase Time Transfer
2010-01-01
values. Comparison measures used include estimates obtained from two-way satellite time/frequency transfer ( TWSTFT ), and GPS-based estimates obtained...the IGS are used as a benchmark in the computation. Frequency values have a few times 10 -15 fractional frequency uncertainty. TWSTFT values confirm...obtained from two-way satellite time/frequency transfer ( TWSTFT ), BIPM Circular T, and the International GNSS Service (IGS). At present, it is known that
Dosimetric evaluation of a Monte Carlo IMRT treatment planning system incorporating the MIMiC
NASA Astrophysics Data System (ADS)
Rassiah-Szegedi, P.; Fuss, M.; Sheikh-Bagheri, D.; Szegedi, M.; Stathakis, S.; Lancaster, J.; Papanikolaou, N.; Salter, B.
2007-12-01
The high dose per fraction delivered to lung lesions in stereotactic body radiation therapy (SBRT) demands high dose calculation and delivery accuracy. The inhomogeneous density in the thoracic region along with the small fields used typically in intensity-modulated radiation therapy (IMRT) treatments poses a challenge in the accuracy of dose calculation. In this study we dosimetrically evaluated a pre-release version of a Monte Carlo planning system (PEREGRINE 1.6b, NOMOS Corp., Cranberry Township, PA), which incorporates the modeling of serial tomotherapy IMRT treatments with the binary multileaf intensity modulating collimator (MIMiC). The aim of this study is to show the validation process of PEREGRINE 1.6b since it was used as a benchmark to investigate the accuracy of doses calculated by a finite size pencil beam (FSPB) algorithm for lung lesions treated on the SBRT dose regime via serial tomotherapy in our previous study. Doses calculated by PEREGRINE were compared against measurements in homogeneous and inhomogeneous materials carried out on a Varian 600C with a 6 MV photon beam. Phantom studies simulating various sized lesions were also carried out to explain some of the large dose discrepancies seen in the dose calculations with small lesions. Doses calculated by PEREGRINE agreed to within 2% in water and up to 3% for measurements in an inhomogeneous phantom containing lung, bone and unit density tissue.
Connecting the Dots: Linking Environmental Justice Indicators to Daily Dose Model Estimates
Many different quantitative techniques have been developed to either assess Environmental Justice (EJ) issues or estimate exposure and dose for risk assessment. However, very few approaches have been applied to link EJ factors to exposure dose estimate and identify potential impa...
Park, Robert M; Bowler, Rosemarie M; Roels, Harry A
2009-10-01
The exposure-response relationship for manganese (Mn)-induced adverse nervous system effects is not well described. Symptoms and neuropsychological deficits associated with early manganism were previously reported for welders constructing bridge piers during 2003 to 2004. A reanalysis using improved exposure, work history information, and diverse exposure metrics is presented here. Ten neuropsychological performance measures were examined, including working memory index (WMI), verbal intelligence quotient, design fluency, Stroop color word test, Rey-Osterrieth Complex Figure, and Auditory Consonant Trigram tests. Mn blood levels and air sampling data in the form of both personal and area samples were available. The exposure metrics used were cumulative exposure to Mn, body burden assuming simple first-order kinetics for Mn elimination, and cumulative burden (effective dose). Benchmark doses were calculated. Burden with a half-life of about 150 days was the best predictor of blood Mn. WMI performance declined by 3.6 (normal = 100, SD = 15) for each 1.0 mg/m3 x mo exposure (P = 0.02, one tailed). At the group mean exposure metric (burden; half-life = 275 days), WMI performance was at the lowest 17th percentile of normal, and at the maximum observed metric, performance was at the lowest 2.5 percentiles. Four other outcomes also exhibited statistically significant associations (verbal intelligence quotient, verbal comprehension index, design fluency, Stroop color word test); no dose-rate effect was observed for three of the five outcomes. A risk assessment performed for the five stronger effects, choosing various percentiles of normal performance to represent impairment, identified benchmark doses for a 2-year exposure leading to 5% excess impairment prevalence in the range of 0.03 to 0.15 mg/m3, or 30 to 150 microg/m3, total Mn in air, levels that are far below those permitted by current occupational standards. More than one-third of workers would be impaired after working 2 years at 0.2 mg/m3 Mn (the current threshold limit value).
SU-F-T-231: Improving the Efficiency of a Radiotherapy Peer-Review System for Quality Assurance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, S; Basavatia, A; Garg, M
Purpose: To improve the efficiency of a radiotherapy peer-review system using a commercially available software application for plan quality evaluation and documentation. Methods: A commercial application, FullAccess (Radialogica LLC, Version 1.4.4), was implemented in a Citrix platform for peer-review process and patient documentation. This application can display images, isodose lines, and dose-volume histograms and create plan reports for peer-review process. Dose metrics in the report can also be benchmarked for plan quality evaluation. Site-specific templates were generated based on departmental treatment planning policies and procedures for each disease site, which generally follow RTOG protocols as well as published prospective clinicalmore » trial data, including both conventional fractionation and hypo-fractionation schema. Once a plan is ready for review, the planner exports the plan to FullAccess, applies the site-specific template, and presents the report for plan review. The plan is still reviewed in the treatment planning system, as that is the legal record. Upon physician’s approval of a plan, the plan is packaged for peer review with the plan report and dose metrics are saved to the database. Results: The reports show dose metrics of PTVs and critical organs for the plans and also indicate whether or not the metrics are within tolerance. Graphical results with green, yellow, and red lights are displayed of whether planning objectives have been met. In addition, benchmarking statistics are collected to see where the current plan falls compared to all historical plans on each metric. All physicians in peer review can easily verify constraints by these reports. Conclusion: We have demonstrated the improvement in a radiotherapy peer-review system, which allows physicians to easily verify planning constraints for different disease sites and fractionation schema, allows for standardization in the clinic to ensure that departmental policies are maintained, and builds a comprehensive database for potential clinical outcome evaluation.« less
SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, X; Gao, H; Paganetti, H
2015-06-15
Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on the structured grid that is maximally parallelizable, with the discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes that are taken into account are Compton scattering, photoelectric effect, pairmore » production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles.While the spatial discretization is based on the diamond scheme, the angular discretization synergizes finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FFM is used to locally discretize the angular sphere. As a Result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaking scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in energy variable of delta scattering kernel for elastic scattering with reduced truncation error from the numerical integration based on the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)« less
Andersson, K G; Roed, J
2006-01-01
In nuclear preparedness, an essential requirement is the ability to adequately predict the likely consequences of a major accident situation. In this context it is very important to evaluate which contributions to dose are important, and which are not likely to have significance. As an example of this type of evaluation, a case study has been conducted to estimate the doses received over the first 17 years after the Chernobyl accident in a dry-contaminated residential area in the Bryansk region in Russia. Methodologies for estimation of doses received through nine different pathways, including contamination of streets, roofs, exterior walls, and landscape, are established, and best estimates are given for each of the dose contributions. Generally, contaminated soil areas were estimated to have given the highest dose contribution, but a number of other contributions to dose, e.g., from contaminated roofs and inhalation of contaminants during the passage of the contaminated plume, were of the same order of magnitude.
Quantification of residual dose estimation error on log file-based patient dose calculation.
Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi
2016-05-01
The log file-based patient dose estimation includes a residual dose estimation error caused by leaf miscalibration, which cannot be reflected on the estimated dose. The purpose of this study is to determine this residual dose estimation error. Modified log files for seven head-and-neck and prostate volumetric modulated arc therapy (VMAT) plans simulating leaf miscalibration were generated by shifting both leaf banks (systematic leaf gap errors: ±2.0, ±1.0, and ±0.5mm in opposite directions and systematic leaf shifts: ±1.0mm in the same direction) using MATLAB-based (MathWorks, Natick, MA) in-house software. The generated modified and non-modified log files were imported back into the treatment planning system and recalculated. Subsequently, the generalized equivalent uniform dose (gEUD) was quantified for the definition of the planning target volume (PTV) and organs at risks. For MLC leaves calibrated within ±0.5mm, the quantified residual dose estimation errors that obtained from the slope of the linear regression of gEUD changes between non- and modified log file doses per leaf gap are in head-and-neck plans 1.32±0.27% and 0.82±0.17Gy for PTV and spinal cord, respectively, and in prostate plans 1.22±0.36%, 0.95±0.14Gy, and 0.45±0.08Gy for PTV, rectum, and bladder, respectively. In this work, we determine the residual dose estimation errors for VMAT delivery using the log file-based patient dose calculation according to the MLC calibration accuracy. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Kobayashi, Masanao; Asada, Yasuki; Matsubara, Kosuke; Suzuki, Shouichi; Matsunaga, Yuta; Haba, Tomonobu; Kawaguchi, Ai; Daioku, Tomihiko; Toyama, Hiroshi; Kato, Ryoichi
2017-05-01
Adequate dose management during computed tomography is important. In the present study, the dosimetric application software ImPACT was added to a functional calculator of the size-specific dose estimate and was part of the scan settings for the auto exposure control (AEC) technique. This study aimed to assess the practicality and accuracy of the modified ImPACT software for dose estimation. We compared the conversion factors identified by the software with the values reported by the American Association of Physicists in Medicine Task Group 204, and we noted similar results. Moreover, doses were calculated with the AEC technique and a fixed-tube current of 200 mA for the chest-pelvis region. The modified ImPACT software could estimate each organ dose, which was based on the modulated tube current. The ability to perform beneficial modifications indicates the flexibility of the ImPACT software. The ImPACT software can be further modified for estimation of other doses. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Hanford Environmental Dose Reconstruction Project. Monthly report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cannon, S.D.; Finch, S.M.
1992-10-01
The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed from release to impact on humans (dose estimates):Source Terms, Environmental Transport, Environmental Monitoring Data, Demography, Food Consumption, and Agriculture, and Environmental Pathways and Dose Estimates.
Wang, Lei; Baus, Wolfgang; Grimm, Jimm; Lacornerie, Thomas; Nilsson, Joakim; Luchkovskyi, Sergii; Cano, Isabel Palazon; Shou, Zhenyu; Ayadi, Myriam; Treuer, Harald; Viard, Romain; Siebert, Frank‐Andre; Chan, Mark K.H.; Hildebrandt, Guido; Dunst, Jürgen; Imhoff, Detlef; Wurster, Stefan; Wolff, Robert; Romanelli, Pantaleo; Lartigau, Eric; Semrau, Robert; Soltys, Scott G.; Schweikard, Achim
2016-01-01
Stereotactic radiosurgery (SRS) is the accurate, conformal delivery of high‐dose radiation to well‐defined targets while minimizing normal structure doses via steep dose gradients. While inverse treatment planning (ITP) with computerized optimization algorithms are routine, many aspects of the planning process remain user‐dependent. We performed an international, multi‐institutional benchmark trial to study planning variability and to analyze preferable ITP practice for spinal robotic radiosurgery. 10 SRS treatment plans were generated for a complex‐shaped spinal metastasis with 21 Gy in 3 fractions and tight constraints for spinal cord (V14Gy<2 cc, V18Gy<0.1 cc) and target (coverage >95%). The resulting plans were rated on a scale from 1 to 4 (excellent‐poor) in five categories (constraint compliance, optimization goals, low‐dose regions, ITP complexity, and clinical acceptability) by a blinded review panel. Additionally, the plans were mathematically rated based on plan indices (critical structure and target doses, conformity, monitor units, normal tissue complication probability, and treatment time) and compared to the human rankings. The treatment plans and the reviewers' rankings varied substantially among the participating centers. The average mean overall rank was 2.4 (1.2‐4.0) and 8/10 plans were rated excellent in at least one category by at least one reviewer. The mathematical rankings agreed with the mean overall human rankings in 9/10 cases pointing toward the possibility for sole mathematical plan quality comparison. The final rankings revealed that a plan with a well‐balanced trade‐off among all planning objectives was preferred for treatment by most participants, reviewers, and the mathematical ranking system. Furthermore, this plan was generated with simple planning techniques. Our multi‐institutional planning study found wide variability in ITP approaches for spinal robotic radiosurgery. The participants', reviewers', and mathematical match on preferable treatment plans and ITP techniques indicate that agreement on treatment planning and plan quality can be reached for spinal robotic radiosurgery. PACS number(s): 87.55.de PMID:27167291
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, A; Bostani, M; McMillan, K
Purpose: The purpose of this work is to estimate effective and lung doses from a low-dose lung cancer screening CT protocol using Tube Current Modulation (TCM) across patient models of different sizes. Methods: Monte Carlo simulation methods were used to estimate effective and lung doses from a low-dose lung cancer screening protocol for a 64-slice CT (Sensation 64, Siemens Healthcare) that used TCM. Scanning parameters were from the AAPM protocols. Ten GSF voxelized patient models were used and had all radiosensitive organs identified to facilitate estimating both organ and effective doses. Predicted TCM schemes for each patient model were generatedmore » using a validated method wherein tissue attenuation characteristics and scanner limitations were used to determine the TCM output as a function of table position and source angle. The water equivalent diameter (WED) was determined by estimating the attenuation at the center of the scan volume for each patient model. Monte Carlo simulations were performed using the unique TCM scheme for each patient model. Lung doses were tallied and effective doses were estimated using ICRP 103 tissue weighting factors. Effective and lung dose values were normalized by scanspecific 32 cm CTDIvol values based upon the average tube current across the entire simulated scan. Absolute and normalized doses were reported as a function of WED for each patient. Results: For all ten patients modeled, the effective dose using TCM protocols was below 1.5 mSv. Smaller sized patient models experienced lower absolute doses compared to larger sized patients. Normalized effective and lung doses showed some dependence on patient size (R2 = 0.77 and 0.78, respectively). Conclusion: Effective doses for a low-dose lung screening protocol using TCM were below 1.5 mSv for all patient models used in this study. Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.« less
SITE-SPECIFIC MEASUREMENTS OF RESIDENTIAL RADON PROTECTION CATEGORY
The report describes a series of benchmark measurements of soil radon potential at seven Florida sites and compares the measurements with regional estimates of radon potential from the Florida radon protection map. The measurements and map were developed under the Florida Radon R...
National tourism indicators : historical estimates, 1986-2000
DOT National Transportation Integrated Search
2010-01-01
In the 1997 edition, new and revised benchmarks were introduced for 1992 and 1988. The indicators are used to monitor supply, demand and employment for tourism in Canada on a timely basis. The annual tables are derived using the National Income and E...
Estimating Toxicity Pathway Activating Doses for High Throughput Chemical Risk Assessments
Estimating a Toxicity Pathway Activating Dose (TPAD) from in vitro assays as an analog to a reference dose (RfD) derived from in vivo toxicity tests would facilitate high throughput risk assessments of thousands of data-poor environmental chemicals. Estimating a TPAD requires def...
A re-evaluation of the relativistic redshift on frequency standards at NIST, Boulder, Colorado, USA
NASA Astrophysics Data System (ADS)
Pavlis, Nikolaos K.; Weiss, Marc A.
2017-08-01
We re-evaluated the relativistic redshift correction applicable to the frequency standards at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, USA, based on a precise GPS survey of three benchmarks on the roof of the building where these standards had been previously housed, and on global and regional geoid models supported by data from the GRACE and GOCE missions, including EGM2008, USGG2009, and USGG2012. We also evaluated the redshift offset based on the published NAVD88 geopotential number of the leveling benchmark Q407 located on the side of Building 1 at NIST, Boulder, Colorado, USA, after estimating the bias of the NAVD88 datum at our specific location. Based on these results, our current best estimate of the relativistic redshift correction, if frequency standards were located at the height of the leveling benchmark Q407 outside the second floor of Building 1, with respect to the EGM2008 geoid whose potential has been estimated to be {{W}0}=62 636 855.69 {{m}2} {{s}-2} , is equal to (-1798.50 ± 0.06) × 10-16. The corresponding value, with respect to an equipotential surface defined by the International Astronomical Union’s (IAU) adopted value of {{W}0}=62 636 856.0 {{m}2} {{s}-2} , is (-1798.53 ± 0.06) × 10-16. These values are comparable to the value of (-1798.70 ± 0.30) × 10-16, estimated by Pavlis and Weiss in 2003, with respect to an equipotential surface defined by {{W}0}=62 636 856.88 {{m}2} {{s}-2} . The minus sign implies that clocks run faster in the laboratory in Boulder than a corresponding clock located on the geoid. Contribution of US government, not subject to Copyright.
Hurley, J C
2018-04-10
Regimens containing topical polymyxin appear to be more effective in preventing ventilator-associated pneumonia (VAP) than other methods. To benchmark the incidence rates of Acinetobacter-associated VAP (AAVAP) within component (control and intervention) groups from concurrent controlled studies of polymyxin compared with studies of various VAP prevention methods other than polymyxin (non-polymyxin studies). An AAVAP benchmark was derived using data from 77 observational groups without any VAP prevention method under study. Data from 41 non-polymyxin studies provided additional points of reference. The benchmarking was undertaken by meta-regression using generalized estimating equation methods. Within 20 studies of topical polymyxin, the mean AAVAP was 4.6% [95% confidence interval (CI) 3.0-6.9] and 3.7% (95% CI 2.0-5.3) for control and intervention groups, respectively. In contrast, the AAVAP benchmark was 1.5% (95% CI 1.2-2.0). In the AAVAP meta-regression model, group origin from a trauma intensive care unit (+0.55; +0.16 to +0.94, P = 0.006) or membership of a polymyxin control group (+0.64; +0.21 to +1.31, P = 0.023), but not membership of a polymyxin intervention group (+0.24; -0.37 to +0.84, P = 0.45), were significant positive correlates. The mean incidence of AAVAP within the control groups of studies of topical polymyxin is more than double the benchmark, whereas the incidence rates within the groups of non-polymyxin studies and, paradoxically, polymyxin intervention groups are more similar to the benchmark. These incidence rates, which are paradoxical in the context of an apparent effect against VAP within controlled trials of topical polymyxin-based interventions, force a re-appraisal. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Goodkind, Daniel; Lollock, Lisa; Choi, Yoonjoung; McDevitt, Thomas; West, Loraine
2018-01-01
Meeting demand for family planning can facilitate progress towards all major themes of the United Nations Sustainable Development Goals (SDGs): people, planet, prosperity, peace, and partnership. Many policymakers have embraced a benchmark goal that at least 75% of the demand for family planning in all countries be satisfied with modern contraceptive methods by the year 2030. This study examines the demographic impact (and development implications) of achieving the 75% benchmark in 13 developing countries that are expected to be the furthest from achieving that benchmark. Estimation of the demographic impact of achieving the 75% benchmark requires three steps in each country: 1) translate contraceptive prevalence assumptions (with and without intervention) into future fertility levels based on biometric models, 2) incorporate each pair of fertility assumptions into separate population projections, and 3) compare the demographic differences between the two population projections. Data are drawn from the United Nations, the US Census Bureau, and Demographic and Health Surveys. The demographic impact of meeting the 75% benchmark is examined via projected differences in fertility rates (average expected births per woman's reproductive lifetime), total population, growth rates, age structure, and youth dependency. On average, meeting the benchmark would imply a 16 percentage point increase in modern contraceptive prevalence by 2030 and a 20% decline in youth dependency, which portends a potential demographic dividend to spur economic growth. Improvements in meeting the demand for family planning with modern contraceptive methods can bring substantial benefits to developing countries. To our knowledge, this is the first study to show formally how such improvements can alter population size and age structure. Declines in youth dependency portend a demographic dividend, an added bonus to the already well-known benefits of meeting existing demands for family planning.
SOME PROBLEMS OF "SAFE DOSE" ESTIMATION
In environmental carcinogenic risk assessment, the usually defined "safe doses" appear subjective in some sense. n this paper a method of standardizing "safe doses" based on some objective parameters is introduced and a procedure of estimating safe doses under the competing risks...
Estimation Of Organ Doses From Solar Particle Events For Future Space Exploration Missions
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee; Cucinotta, Francis A.
2006-01-01
Radiation protection practices define the effective dose as a weighted sum of equivalent dose over major organ sites for radiation cancer risks. Since a crew personnel dosimeter does not make direct measurement of the effective dose, it has been estimated with skin-dose measurements and radiation transport codes for ISS and STS missions. If sufficient protection is not provided near solar maximum, the radiation risk can be significant due to exposure to sporadic solar particle events (SPEs) as well as to the continuous galactic cosmic radiation (GCR) on future exploratory-class and long-duration missions. For accurate estimates of overall fatal cancer risks from SPEs, the specific doses at various blood forming organs (BFOs) were considered, because proton fluences and doses vary considerably across marrow regions. Previous estimates of BFO doses from SPEs have used an average body-shielding distribution for the bone marrow based on the computerized anatomical man model (CAM). With the development of an 82-point body-shielding distribution at BFOs, the mean and variance of SPE doses in the major active marrow regions (head and neck, chest, abdomen, pelvis and thighs) will be presented. Consideration of the detailed distribution of bone marrow sites is one of many requirements to improve the estimation of effective doses for radiation cancer risks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna
2014-09-15
Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with{sup 99m}Tc-hydrazinonicotinamide-Tyr{sup 3}-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate {sup 99m}Tc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for {sup 131}I, {sup 177}Lu, and {sup 90}Y assuming themore » same biological half-lives as the {sup 99m}Tc labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation and voxel level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for {sup 99m}Tc, {sup 131}I, {sup 177}Lu, and {sup 90}Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90) agreeing within ±3%, on average. Conclusions: Several aspects of OLINDA/EXM dose calculation were compared with patient-specific dose estimates obtained using Monte Carlo. Differences in patient anatomy led to large differences in cross-organ doses. However, total organ doses were still in good agreement since most of the deposited dose is due to self-irradiation. Comparison of voxelized doses calculated by Monte Carlo and the voxel S value technique showed that the 3D dose distributions produced by the respective methods are nearly identical.« less
Padole, Atul; Deedar Ali Khawaja, Ranish; Otrakji, Alexi; Zhang, Da; Liu, Bob; Xu, X George; Kalra, Mannudeep K
2016-05-01
The aim of this study was to compare the directly measured and the estimated computed tomography (CT) organ doses obtained from commercial radiation dose-tracking (RDT) software for CT performed with modulated tube current or automatic exposure control (AEC) technique and fixed tube current (mAs). With the institutional review board (IRB) approval, the ionization chambers were surgically implanted in a human cadaver (88 years old, male, 68 kg) in six locations such as liver, stomach, colon, left kidney, small intestine, and urinary bladder. The cadaver was scanned with routine abdomen pelvis protocol on a 128-slice, dual-source multidetector computed tomography (MDCT) scanner using both AEC and fixed mAs. The effective and quality reference mAs of 100, 200, and 300 were used for AEC and fixed mAs, respectively. Scanning was repeated three times for each setting, and measured and estimated organ doses (from RDT software) were recorded (N = 3*3*2 = 18). Mean CTDIvol for AEC and fixed mAs were 4, 8, 13 mGy and 7, 14, 21 mGy, respectively. The most estimated organ doses were significantly greater (P < 0.01) than the measured organ doses for both AEC and fixed mAs. At AEC, the mean estimated organ doses (for six organs) were 14.7 mGy compared to mean measured organ doses of 12.3 mGy. Similarly, at fixed mAs, the mean estimated organ doses (for six organs) were 24 mGy compared to measured organ doses of 22.3 mGy. The differences among the measured and estimated organ doses were higher for AEC technique compared to the fixed mAs for most organs (P < 0.01). The most CT organ doses estimated from RDT software are greater compared to directly measured organ doses, particularly when AEC technique is used for CT scanning. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Limits on estimating the width of thin tubular structures in 3D images.
Wörz, Stefan; Rohr, Karl
2006-01-01
This work studies limits on estimating the width of thin tubular structures in 3D images. Based on nonlinear estimation theory we analyze the minimal stochastic error of estimating the width. Given a 3D analytic model of the image intensities of tubular structures, we derive a closed-form expression for the Cramér-Rao bound of the width estimate under image noise. We use the derived lower bound as a benchmark and compare it with three previously proposed accuracy limits for vessel width estimation. Moreover, by experimental investigations we demonstrate that the derived lower bound can be achieved by fitting a 3D parametric intensity model directly to the image data.
NEUROTOXIC EFFECTS OF ENVIRONMENTAL AGENTS: DATA GAPS THAT CHALLENGE DOSE-RESPONSE ESTIMATION
Neurotoxic effects of environmental agents: Data gaps that challenge dose-response estimation
S Gutter*, P Mendola+, SG Selevan**, D Rice** (*UNC Chapel Hill; +US EPA, NHEERL; **US EPA, NCEA)
Dose-response estimation is a critical feature of risk assessment. It can be...
Shuryak, Igor
2018-06-05
Water bodies polluted by the Mayak nuclear plant in Russia provide valuable information on multi-generation effects of radioactive contamination on freshwater organisms. For example, lake Karachay was probably the most radioactive lake in the world: its water contained ∼2 × 10 7 Bq/L of radionuclides and estimated dose rates to plankton exceeded 5 Gy/h. We performed quantitative modeling of radiation effects on phytoplankton and zooplankton species richness and abundance in Mayak-contaminated water bodies. Due to collinearity between radioactive contamination, water body size and salinity, we combined these variables into one (called HabitatFactors). We employed a customized machine learning approach, where synthetic noise variables acted as benchmarks of predictor performance. HabitatFactors was the only predictor that outperformed noise variables and, therefore, we used it for parametric modeling of plankton responses. Best-fit model predictions suggested 50% species richness reduction at HabitatFactors values corresponding to dose rates of 10 4 -10 5 μGy/h for phytoplankton, and 10 3 -10 4 μGy/h for zooplankton. Under conditions similar to those in lake Karachay, best-fit models predicted 81-98% species richness reductions for various taxa (Cyanobacteria, Bacillariophyta, Chlorophyta, Rotifera, Cladocera and Copepoda), ∼20-300-fold abundance reduction for total zooplankton, but no abundance reduction for phytoplankton. Rotifera was the only taxon whose fractional abundance increased with contamination level, reaching 100% in lake Karachay, but Rotifera species richness declined with contamination level, as in other taxa. Under severe radioactive and chemical contamination, one species of Cyanobacteria (Geitlerinema amphibium) dominated phytoplankton, and rotifers from the genus Brachionus dominated zooplankton. The modeling approaches proposed here are applicable to other radioecological data sets. The results provide quantitative information and easily interpretable model parameter estimates for the shapes and magnitudes of freshwater plankton responses to a wide range of radioactive contamination levels. Copyright © 2018 Elsevier Ltd. All rights reserved.
Thompson, Chad M; Bichteler, Anne; Rager, Julia E; Suh, Mina; Proctor, Deborah M; Haws, Laurie C; Harris, Mark A
2016-04-01
Recent analyses-highlighted by the International Workshops on Genotoxicity Testing Working Group on Quantitative Approaches to Genetic Toxicology Risk Assessment-have identified a correlation between (log) estimates of a carcinogen's in vivo genotoxic potency and in vivo carcinogenic potency in typical laboratory animal models, even when the underlying data have not been matched for tissue, species, or strain. Such a correlation could have important implications for risk assessment, including informing the mode of action (MOA) of specific carcinogens. When in vivo genotoxic potency is weak relative to carcinogenic potency, MOAs other than genotoxicity (e.g., endocrine disruption or regenerative hyperplasia) may be operational. Herein, we review recent in vivo genotoxicity and carcinogenicity data for hexavalent chromium (Cr(VI)), following oral ingestion, in relevant tissues and species in the context of the aforementioned correlation. Potency estimates were generated using benchmark doses, or no-observable-adverse-effect-levels when data were not amenable to dose-response modeling. While the ratio between log values for carcinogenic and genotoxic potency was ≥1 for many compounds, the ratios for several Cr(VI) datasets (including in target tissue) were less than unity. In fact, the ratios for Cr(VI) clustered closely with ratios for chloroform and diethanolamine, two chemicals posited to have non-genotoxic MOAs. These findings suggest that genotoxicity may not play a major role in the cancers observed in rodents following exposure to high concentrations of Cr(VI) in drinking water-a finding consistent with recent MOA and adverse outcome pathway (AOP) analyses concerning Cr(VI). This semi-quantitative analysis, therefore, may be useful to augment traditional MOA and AOP analyses. More case examples will be needed to further explore the general applicability and validity of this approach for human health risk assessment. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L
2016-02-10
Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.
Neil, Amanda; Pfeffer, Sally; Burnett, Leslie
2013-01-01
This paper details the development of a new type of pathology laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of a measure of how much complex pathology a laboratory performs, and the identification of peer organisations for the purposes of comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was dynamically calculated based on actual participant site data. Given this weighting, a complexity value for each test, at each site, was calculated. The median complexity value (number of BCUs) for that test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU allowed implementation of an unbiased comparison unit and test listing that was found to be a robust indicator of the relative complexity for each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations. The BCU corrects for differences in test mix and workload complexity of different organisations and also allows for objective stratification into peer groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, S; Ho, M; Chen, C
Purpose: The use of log files to perform patient specific quality assurance for both protons and IMRT has been established. Here, we extend that approach to a proprietary log file format and compare our results to measurements in phantom. Our goal was to generate a system that would permit gross errors to be found within 3 fractions until direct measurements. This approach could eventually replace direct measurements. Methods: Spot scanning protons pass through multi-wire ionization chambers which provide information about the charge, location, and size of each delivered spot. We have generated a program that calculates the dose in phantommore » from these log files and compares the measurements with the plan. The program has 3 different spot shape models: single Gaussian, double Gaussian and the ASTROID model. The program was benchmarked across different treatment sites for 23 patients and 74 fields. Results: The dose calculated from the log files were compared to those generate by the treatment planning system (Raystation). While the dual Gaussian model often gave better agreement, overall, the ASTROID model gave the most consistent results. Using a 5%–3 mm gamma with a 90% passing criteria and excluding doses below 20% of prescription all patient samples passed. However, the degree of agreement of the log file approach was slightly worse than that of the chamber array measurement approach. Operationally, this implies that if the beam passes the log file model, it should pass direct measurement. Conclusion: We have established and benchmarked a model for log file QA in an IBA proteus plus system. The choice of optimal spot model for a given class of patients may be affected by factors such as site, field size, and range shifter and will be investigated further.« less
Dunnick, June K; Shockley, Keith R; Morgan, Daniel L; Brix, Amy; Travlos, Gregory S; Gerrish, Kevin; Michael Sanders, J; Ton, T V; Pandiri, Arun R
2017-04-01
N,N-dimethyl-p-toluidine (DMPT), an accelerant for methyl methacrylate monomers in medical devices, was a liver carcinogen in male and female F344/N rats and B6C3F1 mice in a 2-year oral exposure study. p-Toluidine, a structurally related chemical, was a liver carcinogen in mice but not in rats in an 18-month feed exposure study. In this current study, liver transcriptomic data were used to characterize mechanisms in DMPT and p-toluidine liver toxicity and for conducting benchmark dose (BMD) analysis. Male F344/N rats were exposed orally to DMPT or p-toluidine (0, 1, 6, 20, 60 or 120 mg/kg/day) for 5 days. The liver was examined for lesions and transcriptomic alterations. Both chemicals caused mild hepatic toxicity at 60 and 120 mg/kg and dose-related transcriptomic alterations in the liver. There were 511 liver transcripts differentially expressed for DMPT and 354 for p-toluidine at 120 mg/kg/day (false discovery rate threshold of 5 %). The liver transcriptomic alterations were characteristic of an anti-oxidative damage response (activation of the Nrf2 pathway) and hepatic toxicity. The top cellular processes in gene ontology (GO) categories altered in livers exposed to DMPT or p-toluidine were used for BMD calculations. The lower confidence bound benchmark doses for these chemicals were 2 mg/kg/day for DMPT and 7 mg/kg/day for p-toluidine. These studies show the promise of using 5-day target organ transcriptomic data to identify chemical-induced molecular changes that can serve as markers for preliminary toxicity risk assessment.
Schentag, J J; Paladino, J A; Birmingham, M C; Zimmer, G; Carr, J R; Hanson, S C
1995-01-01
To apply basic benchmarking techniques to hospital antibiotic expenditures and clinical pharmacy personnel and their duties, to identify cost savings strategies for clinical pharmacy services. Prospective survey of 18 hospitals ranging in size from 201 to 942 beds. Each was asked to provide antibiotic expenditures, an overview of their clinical pharmacy services, and to describe the duties of clinical pharmacists involved in antibiotic management activities. Specific information was sought on the use of pharmacokinetic dosing services, antibiotic streamlining, and oral switch in each of the hospitals. Most smaller hospitals (< 300 beds) did not employ clinical pharmacists with the specific duties of antibiotic management or streamlining. At these institutions, antibiotic management services consisted of formulary enforcement and aminoglycoside and/or vancomycin dosing services. The larger hospitals we surveyed employed clinical pharmacists designated as antibiotic management specialists, but their usual activities were aminoglycoside and/or vancomycin dosing services and formulary enforcement. In virtually all hospitals, the yearly expenses for antibiotics exceeded those of Millard Fillmore Hospitals by $2,000-3,000 per occupied bed. In a 500-bed hospital, this difference in expenditures would exceed $1.5 million yearly. Millard Fillmore Health System has similar types of patients, but employs clinical pharmacists to perform streamlining and/or switch functions at days 2-4, when cultures come back from the laboratory. The antibiotic streamlining and oral switch duties of clinical pharmacy specialists are associated with the majority of cost savings in hospital antibiotic management programs. The savings are considerable to the extent that most hospitals with 200-300 beds could readily cost-justify a full-time clinical pharmacist to perform these activities on a daily basis. Expenses of the program would be offset entirely by the reduction in the actual pharmacy expenditures on antibiotics.
Pediatric chest and abdominopelvic CT: organ dose estimation based on 42 patient models.
Tian, Xiaoyu; Li, Xiang; Segars, W Paul; Paulson, Erik K; Frush, Donald P; Samei, Ehsan
2014-02-01
To estimate organ dose from pediatric chest and abdominopelvic computed tomography (CT) examinations and evaluate the dependency of organ dose coefficients on patient size and CT scanner models. The institutional review board approved this HIPAA-compliant study and did not require informed patient consent. A validated Monte Carlo program was used to perform simulations in 42 pediatric patient models (age range, 0-16 years; weight range, 2-80 kg; 24 boys, 18 girls). Multidetector CT scanners were modeled on those from two commercial manufacturers (LightSpeed VCT, GE Healthcare, Waukesha, Wis; SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). Organ doses were estimated for each patient model for routine chest and abdominopelvic examinations and were normalized by volume CT dose index (CTDI(vol)). The relationships between CTDI(vol)-normalized organ dose coefficients and average patient diameters were evaluated across scanner models. For organs within the image coverage, CTDI(vol)-normalized organ dose coefficients largely showed a strong exponential relationship with the average patient diameter (R(2) > 0.9). The average percentage differences between the two scanner models were generally within 10%. For distributed organs and organs on the periphery of or outside the image coverage, the differences were generally larger (average, 3%-32%) mainly because of the effect of overranging. It is feasible to estimate patient-specific organ dose for a given examination with the knowledge of patient size and the CTDI(vol). These CTDI(vol)-normalized organ dose coefficients enable one to readily estimate patient-specific organ dose for pediatric patients in clinical settings. This dose information, and, as appropriate, attendant risk estimations, can provide more substantive information for the individual patient for both clinical and research applications and can yield more expansive information on dose profiles across patient populations within a practice. © RSNA, 2013.
NASA Astrophysics Data System (ADS)
Beyerle, Andrea; Schulz, Holger; Kissel, Thomas; Stoeger, Tobias
2009-02-01
Nanotechnology is a broad, revolutionary field with promising advantages for new medicine. In this context the rapid development and improvement of so called nanocarriers is of high pharmaceutical interest and some devices are already on the market. In our project we aim to develop well characterized nanoscaled drug delivery systems for an inhalative application. To this end, we focus on the most adverse side-effects within the lung, the cytotoxic and the proinflammatory responses to these nanoparticles (NPs). Before performing any animal experiments, we start with an in vitro screening for analyzing the cytotoxic and proinflammatory effects of the investigated particles on two murine lung target cell lines, the alveolar epithelial like typ II cell line (LA4) and the alveolar macrophage cell line (MH-S). Three different endpoints were estimated, (i) cellular metabolic activity, determined by the WST-1 assay, (ii) membrane integrity, by detection of LDH release and hemolytic activity, and (iii) secretion of inflammatory mediators. To analyze the relative particle toxicity we choose two reference particles as benchmarks, (i) fine a-quartz, and (ii) ultrafine ZnO particles. The investigation of dose-response and kinetics of proinflammatory and toxic effects caused to the named cell lines provide an insight to a close evaluation of our cell based screening strategy. oc-quartz is well known for its inflammatory and toxic potential caused by inhalation, and nanosized ZnO particles - used in a broad field of nanotechnology like electronics, but also cosmetics and pharmaceuticals - is to a high degree cytotoxic and proinflammatory in vitro. Preliminary experiments indicated not only particle and cell specific inflammatory responses, but also different susceptibilities of the cell types being exposed to our benchmark particles regarding their size and surface activities. Exposure to the μm-sized a-quartz particles affected the viability of epithelia cells less than that of macrophages, pointing to the impact of particle uptake by phagocytosis. In contrast, the nanosized ZnO particles caused much stronger decrease in cell viability and higher levels of LDH in the macrophage cell line compared to epithelial cells, even though the hemolytic activity was much higher for the a-quartz particles than for the nanosized ZnO. For the proinflammatory effects, we observed a clear dose-dependent release of acute phase cytokines (TNF-α, IL-6, G-CSF> CXCL10>CCL2) for both alveolar cell lines after Min-U-Sil exposure. After ZnO treatment the cytokine responses were negligible compare to control cells. In conclusion, our data attach value to the use of different cell types to detect different pathways of toxicity generated by different particle properties. Therefore, we will establish both lung target cell lines for an in vitro screening to analyze proinflammatory and cytotoxicity effects of nanocarriers. The implementation of the two reference particles facilitate the validated classification of the cytotoxic responses caused by the NPs investigated.
Natto, S A; Lewis, D G; Ryde, S J
1998-01-01
The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.
NASA Technical Reports Server (NTRS)
Berger, Thomas; Matthiae, Daniel; Koerner, Christine; George, Kerry; Rhone, Jordan; Cucinotta, Francis; Reitz, Guenther
2010-01-01
The adequate knowledge of the radiation environment and the doses incurred during a space mission is essential for estimating an astronaut's health risk. The space radiation environment is complex and variable, and exposures inside the spacecraft and the astronaut's body are compounded by the interactions of the primary particles with the atoms of the structural materials and with the body itself Astronauts' radiation exposures are measured by means of personal dosimetry, but there remains substantial uncertainty associated with the computational extrapolation of skin dose to organ dose, which can lead to over- or underestimation of the health risk. Comparisons of models to data showed that the astronaut's Effective dose (E) can be predicted to within about a +10% accuracy using space radiation transport models for galactic cosmic rays (GCR) and trapped radiation behind shielding. However for solar particle event (SPE) with steep energy spectra and for extra-vehicular activities on the surface of the moon where only tissue shielding is present, transport models predict that there are large differences in model assumptions in projecting organ doses. Therefore experimental verification of SPE induced organ doses may be crucial for the design of lunar missions. In the research experiment "Depth dose distribution study within a phantom torso" at the NASA Space Radiation Laboratory (NSRL) at BNL, Brookhaven, USA the large 1972 SPE spectrum was simulated using seven different proton energies from 50 up to 450 MeV. A phantom torso constructed of natural bones and realistic distributions of human tissue equivalent materials, which is comparable to the torso of the MATROSHKA phantom currently on the ISS, was equipped with a comprehensive set of thermoluminescence detectors and human cells. The detectors are applied to assess the depth dose distribution and radiation transport codes (e.g. GEANT4) are used to assess the radiation field and interactions of the radiation field with the phantom torso. Lymphocyte cells are strategically embedded at selected locations at the skin and internal organs and are processed after irradiation to assess the effects of shielding on the yield of chromosome damage. The initial focus of the present experiment is to correlate biological results with physical dosimetry measurements in the phantom torso. Further on, the results of the passive dosimetry within the anthropomorphic phantoms represent the best tool to generate reliable data to benchmark computational radiation transport models in a radiation field of interest. The presentation will give first results of the physical dose distribution, the comparison with GEANT4 computer simulations based on a Voxel model of the phantom, and a comparison with the data from the chromosome aberration study.
Simon, Steven L; Baverstock, Keith F; Lindholm, Carita
2003-06-01
The presently available evidence about the magnitude of doses received by members of the public living in villages in the vicinity of Semipalatinsk nuclear test in Kazakhstan, particularly with respect to external radiation, while preliminary, is conflicting. The village of Dolon, in particular, has been identified for many years as the most highly exposed location in the vicinity of the test site. Previous publications cited external doses of more than 2 Gy to residents of Dolon while an expert group assembled by the WHO in 1997 estimated that external doses were likely to have been less than 0.5 Gy. In 2001, a larger expert group workshop was held in Helsinki jointly by the WHO, the National Cancer Institute of the United States, and the Radiation and Nuclear Safety Authority of Finland, with the expressed purpose to acquire data to evaluate the state of knowledge concerning doses received in Kazakhstan. This paper summarizes evidence presented at that workshop. External dose estimates from calculations based on sparse physical measurements and bio-dosimetric estimates based on chromosome abnormalities and electron paramagnetic resonance from a relatively small sample of teeth do not agree well. The physical dose estimates are generally higher than the biodosimetric estimates (1 Gy or more compared to 0.5 Gy or less). When viewed in its entirety, the present body of evidence does not appear to support external doses greater than 0.5 Gy; however, research is continuing to try and resolve the difference in dose estimates from the different methods. Thyroid doses from internal irradiation, which can only be estimated via calculation, are expected to have been several times greater than the doses from external irradiation, especially where received by small children.
Re-analysis of Alaskan benchmark glacier mass-balance data using the index method
Van Beusekom, Ashely E.; O'Nell, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.
2010-01-01
At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass balance time series from 1966 to 2009 to accomplish our goal of making more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly-site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
Improving Upon String Methods for Transition State Discovery.
Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker
2012-02-14
Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.
De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric
2010-01-11
Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously.We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.
Setting Achievement Targets for School Children.
ERIC Educational Resources Information Center
Thanassoulis, Emmanuel
1999-01-01
Develops an approach for setting performance targets for schoolchildren, using data-envelopment analysis to identify benchmark pupils who achieve the best observed performance (allowing for contextual factors). These pupils' achievement forms the basis of targets estimated. The procedure also identifies appropriate role models for weaker students'…
Globally, billions of metric tons of contaminated sediments are present in aquatic systems representing a potentially significant ecological risk. Estimated costs to manage (i.e., remediate and monitor) these sediments are in the billions of U.S. dollars. Biologically-based app...
Community-based benchmarking improves spike rate inference from two-photon calcium imaging data.
Berens, Philipp; Freeman, Jeremy; Deneux, Thomas; Chenkov, Nikolay; McColgan, Thomas; Speiser, Artur; Macke, Jakob H; Turaga, Srinivas C; Mineault, Patrick; Rupprecht, Peter; Gerhard, Stephan; Friedrich, Rainer W; Friedrich, Johannes; Paninski, Liam; Pachitariu, Marius; Harris, Kenneth D; Bolte, Ben; Machado, Timothy A; Ringach, Dario; Stone, Jasmine; Rogerson, Luke E; Sofroniew, Nicolas J; Reimer, Jacob; Froudarakis, Emmanouil; Euler, Thomas; Román Rosón, Miroslav; Theis, Lucas; Tolias, Andreas S; Bethge, Matthias
2018-05-01
In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.
Men, Wu; Deng, Fangfang; He, Jianhua; Yu, Wen; Wang, Fenfen; Li, Yiliang; Lin, Feng; Lin, Jing; Lin, Longshan; Zhang, Yusheng; Yu, Xingguang
2017-10-01
This study investigated the radioactive impacts on 10 nekton species in the Northwest Pacific more than one year after the Fukushima Nuclear Accident (FNA) from the two perspectives of contamination and harm. Squids were especially used for the spatial and temporal comparisons to demonstrate the impacts from the FNA. The radiation doses to nekton species and humans were assessed to link this radioactivity contamination to possible harm. The total dose rates to nektons were lower than the ERICA ecosystem screening benchmark of 10μGy/h. Further dose-contribution analysis showed that the internal doses from the naturally occurring nuclide 210 Po were the main dose contributor. The dose rates from 134 Cs, 137 Cs, 90 Sr and 110m Ag were approximately three or four orders of magnitude lower than those from naturally occurring radionuclides. The 210 Po-derived dose was also the main contributor of the total human dose from immersion in the seawater and the ingestion of nekton species. The human doses from anthropogenic radionuclides were ~ 100 to ~ 10,000 times lower than the doses from naturally occurring radionuclides. A morbidity assessment was performed based on the Linear No Threshold assumptions of exposure and showed 7 additional cancer cases per 100,000,000 similarly exposed people. Taken together, there is no need for concern regarding the radioactive harm in the open ocean area of the Northwest Pacific. Copyright © 2017 Elsevier Inc. All rights reserved.
Assessment of radiation doses from residential smoke detectors that contain americium-241
NASA Astrophysics Data System (ADS)
Odonnell, F. R.; Etnier, E. L.; Holton, G. A.; Travis, C. C.
1981-10-01
External dose equivalents and internal dose commitments were estimated for individuals and populations from annual distribution, use, and disposal of 10 million ionization chamber smoke detectors that contain 110 kBq americium-241 each. Under exposure scenarios developed for normal distribution, use, and disposal using the best available information, annual external dose equivalents to average individuals were estimated to range from 4 fSv to 20 nSv for total body and from 7 fSv to 40 nSv for bone. Internal dose commitments to individuals under post disposal scenarios were estimated to range from 0.006 to 80 micro-Sv (0.0006 to 8 mrem) to total body and from 0.06 to 800 micro-Sv to bone. The total collective dose (the sum of external dose equivalents and 50-year internal dose commitments) for all individuals involved with distribution, use, or disposal of 10 million smoke detectors was estimated to be about 0.38 person-Sv (38 person-rem) to total body and 00 ft squared.
Estimated collective effective dose to the population from nuclear medicine examinations in Slovenia
Skrk, Damijan; Zontar, Dejan
2013-01-01
Background A national survey of patient exposure from nuclear medicine diagnostic procedures was performed by Slovenian Radiation Protection Administration in order to estimate their contribution to the collective effective dose to the population of Slovenia. Methods A set of 36 examinations with the highest contributions to the collective effective dose was identified. Data about frequencies and average administered activities of radioisotopes used for those examinations were collected from all nuclear medicine departments in Slovenia. A collective effective dose to the population and an effective dose per capita were estimated from the collected data using dose conversion factors. Results The total collective effective dose to the population from nuclear medicine diagnostic procedures in 2011 was estimated to 102 manSv, giving an effective dose per capita of 0.05 mSv. Conclusions The comparison of results of this study with studies performed in other countries indicates that the nuclear medicine providers in Slovenia are well aware of the importance of patient protection measures and of optimisation of procedures. PMID:24133396
Land, Charles E; Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian; Drozdovitch, Vladimir; Bouville, André; Beck, Harold; Luckyanov, Nicholas; Weinstock, Robert M; Simon, Steven L
2015-02-01
Dosimetic uncertainties, particularly those that are shared among subgroups of a study population, can bias, distort or reduce the slope or significance of a dose response. Exposure estimates in studies of health risks from environmental radiation exposures are generally highly uncertain and thus, susceptible to these methodological limitations. An analysis was published in 2008 concerning radiation-related thyroid nodule prevalence in a study population of 2,994 villagers under the age of 21 years old between August 1949 and September 1962 and who lived downwind from the Semipalatinsk Nuclear Test Site in Kazakhstan. This dose-response analysis identified a statistically significant association between thyroid nodule prevalence and reconstructed doses of fallout-related internal and external radiation to the thyroid gland; however, the effects of dosimetric uncertainty were not evaluated since the doses were simple point "best estimates". In this work, we revised the 2008 study by a comprehensive treatment of dosimetric uncertainties. Our present analysis improves upon the previous study, specifically by accounting for shared and unshared uncertainties in dose estimation and risk analysis, and differs from the 2008 analysis in the following ways: 1. The study population size was reduced from 2,994 to 2,376 subjects, removing 618 persons with uncertain residence histories; 2. Simulation of multiple population dose sets (vectors) was performed using a two-dimensional Monte Carlo dose estimation method; and 3. A Bayesian model averaging approach was employed for evaluating the dose response, explicitly accounting for large and complex uncertainty in dose estimation. The results were compared against conventional regression techniques. The Bayesian approach utilizes 5,000 independent realizations of population dose vectors, each of which corresponds to a set of conditional individual median internal and external doses for the 2,376 subjects. These 5,000 population dose vectors reflect uncertainties in dosimetric parameters, partly shared and partly independent, among individual members of the study population. Risk estimates for thyroid nodules from internal irradiation were higher than those published in 2008, which results, to the best of our knowledge, from explicitly accounting for dose uncertainty. In contrast to earlier findings, the use of Bayesian methods led to the conclusion that the biological effectiveness for internal and external dose was similar. Estimates of excess relative risk per unit dose (ERR/Gy) for males (177 thyroid nodule cases) were almost 30 times those for females (571 cases) and were similar to those reported for thyroid cancers related to childhood exposures to external and internal sources in other studies. For confirmed cases of papillary thyroid cancers (3 in males, 18 in females), the ERR/Gy was also comparable to risk estimates from other studies, but not significantly different from zero. These findings represent the first reported dose response for a radiation epidemiologic study considering all known sources of shared and unshared errors in dose estimation and using a Bayesian model averaging (BMA) method for analysis of the dose response.
Ralston, Shawn; Garber, Matthew; Narang, Steve; Shen, Mark; Pate, Brian; Pope, John; Lossius, Michele; Croland, Trina; Bennett, Jeff; Jewell, Jennifer; Krugman, Scott; Robbins, Elizabeth; Nazif, Joanne; Liewehr, Sheila; Miller, Ansley; Marks, Michelle; Pappas, Rita; Pardue, Jeanann; Quinonez, Ricardo; Fine, Bryan R; Ryan, Michael
2013-01-01
Acute viral bronchiolitis is the most common diagnosis resulting in hospital admission in pediatrics. Utilization of non-evidence-based therapies and testing remains common despite a large volume of evidence to guide quality improvement efforts. Our objective was to reduce utilization of unnecessary therapies in the inpatient care of bronchiolitis across a diverse network of clinical sites. We formed a voluntary quality improvement collaborative of pediatric hospitalists for the purpose of benchmarking the use of bronchodilators, steroids, chest radiography, chest physiotherapy, and viral testing in bronchiolitis using hospital administrative data. We shared resources within the network, including protocols, scores, order sets, and key bibliographies, and established group norms for decreasing utilization. Aggregate data on 11,568 hospitalizations for bronchiolitis from 17 centers was analyzed for this report. The network was organized in 2008. By 2010, we saw a 46% reduction in overall volume of bronchodilators used, a 3.4 dose per patient absolute decrease in utilization (95% confidence interval [CI] 1.4-5.8). Overall exposure to any dose of bronchodilator decreased by 12 percentage points as well (95% CI 5%-25%). There was also a statistically significant decline in chest physiotherapy usage, but not for steroids, chest radiography, or viral testing. Benchmarking within a voluntary pediatric hospitalist collaborative facilitated decreased utilization of bronchodilators and chest physiotherapy in bronchiolitis. Copyright © 2012 Society of Hospital Medicine.
SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supanich, MP
2015-06-15
Purpose: The planar average dose in a C-arm Cone Beam CT (CBCT) acquisition had been estimated in the past by averaging the four peripheral dose measurements in a CTDI phantom and then using the standard 2/3rds peripheral and 1/3 central CTDIw method (hereafter referred to as Dw). The accuracy of this assumption has not been investigated and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16cm CTDI phantoms using a 0.6cc ionization chamber at each of the 4 peripheral dose bores and in themore » central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at different known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut out. Dw was calculated using the ionization chamber measurements and film dose values at the location of each of the dose bores. Results: The planar average dose using both the red and green pixel color calibration curves were within 10% agreement of the planar average dose estimated using the Dw method of film dose values at the bore locations. Additionally, an average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: The method of calculating the planar average dose at the central plane of a C-arm CBCT non-360 rotation by calculating Dw from peripheral and central dose bore measurements is a reasonable approach to estimating the planar average dose. Research Grant, Siemens AG.« less
Levin, S G; Young, R W; Stohler, R L
1992-11-01
This paper presents an estimate of the median lethal dose for humans exposed to total-body irradiation and not subsequently treated for radiation sickness. The median lethal dose was estimated from calculated doses to young adults who were inside two reinforced concrete buildings that remained standing in Nagasaki after the atomic detonation. The individuals in this study, none of whom have previously had calculated doses, were identified from a detailed survey done previously. Radiation dose to the bone marrow, which was taken as the critical radiation site, was calculated for each individual by the Engineering Physics and Mathematics Division of the Oak Ridge National Laboratory using a new three-dimensional discrete-ordinates radiation transport code that was developed and validated for this study using the latest site geometry, radiation yield, and spectra data. The study cohort consisted of 75 individuals who either survived > 60 d or died between the second and 60th d postirradiation due to radiation injury, without burns or other serious injury. Median lethal dose estimates were calculated using both logarithmic (2.9 Gy) and linear (3.4 Gy) dose scales. Both calculations, which met statistical validity tests, support previous estimates of the median lethal dose based solely on human data, which cluster around 3 Gy.
Ishikawa, Tetsuo; Yasumura, Seiji; Ohtsuru, Akira; Sakai, Akira; Akahane, Keiichi; Yonai, Shunsuke; Sakata, Ritsu; Ozasa, Kotaro; Hayashi, Masayuki; Ohira, Tetsuya; Kamiya, Kenji; Abe, Masafumi
2016-06-01
Many studies have been conducted on radiation doses to residents after the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident. Time spent outdoors is an influential factor for external dose estimation. Since little information was available on actual time spent outdoors for residents, different values of average time spent outdoors per day have been used in dose estimation studies on the FDNPP accident. The most conservative value of 24 h was sometimes used, while 2.4 h was adopted for indoor workers in the UNSCEAR 2013 report. Fukushima Medical University has been estimating individual external doses received by residents as a part of the Fukushima Health Management Survey by collecting information on the records of moves and activities (the Basic Survey) after the accident from each resident. In the present study, these records were analyzed to estimate an average time spent outdoors per day. As an example, in Iitate Village, its arithmetic mean was 2.08 h (95% CI: 1.64-2.51) for a total of 170 persons selected from respondents to the Basic Survey. This is a much smaller value than commonly assumed. When 2.08 h is used for the external dose estimation, the dose is about 25% (23-26% when using the above 95% CI) less compared with the dose estimated for the commonly used value of 8 h.
SU-F-P-44: A Direct Estimate of Peak Skin Dose for Interventional Fluoroscopy Procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weir, V; Zhang, J
Purpose: There is an increasing demand for medical physicist to calculate peak skin dose (PSD) for interventional fluoroscopy procedures. The dose information (Dose-Area-Product and Air Kerma) displayed in the console cannot directly be used for this purpose. Our clinical experience shows that the use of the existing methods may overestimate or underestimate PSD. This study attempts to develop a direct estimate of PSD from the displayed dose metrics. Methods: An anthropomorphic torso phantom was used for dose measurements for a common fluoroscopic procedure. Entrance skin doses were measured with a Piranha solid state point detector placed on the table surfacemore » below the torso phantom. An initial “reference dose rate” (RE) measurement was conducted by comparing the displayed dose rate (mGy/min) to the dose rate measured. The distance from table top to focal spot was taken as the reference distance (RD at the RE. Table height was then adjusted. The displayed air kerma and DAP were recorded and sent to three physicists to estimate PSD. An inverse square correction was applied to correct displayed air kerma at various table heights. The PSD estimated by physicists and the PSD by the proposed method were then compared with the measurements. The estimated DAPs were compared to displayed DAP readings (mGycm2). Results: The difference between estimated PSD by the proposed method and direct measurements was less than 5%. For the same set of data, the estimated PSD by each of three physicists is different from measurements by ±52%. The DAP calculated by the proposed method and displayed DAP readings in the console is less than 20% at various table heights. Conclusion: PSD may be simply estimated from displayed air kerma or DAP if the distance between table top and tube focal spot or if x-ray beam area on table top is available.« less
Assessment of simulated high-dose partial-body irradiation by PCC-R assay.
Romero, Ivonne; García, Omar; Lamadrid, Ana I; Gregoire, Eric; González, Jorge E; Morales, Wilfredo; Martin, Cécile; Barquinero, Joan-Francesc; Voisin, Philippe
2013-09-01
The estimation of the dose and the irradiated fraction of the body is important information in the primary medical response in case of a radiological accident. The PCC-R assay has been developed for high-dose estimations, but little attention has been given to its applicability for partial-body irradiations. In the present work we estimated the doses and the percentage of the irradiated fraction in simulated partial-body radiation exposures at high doses using the PCC-R assay. Peripheral whole blood of three healthy donors was exposed to doses from 0-20 Gy, with ⁶⁰Co gamma radiation. To simulate partial body irradiations, irradiated and non-irradiated blood was mixed to obtain proportions of irradiated blood from 10-90%. Lymphocyte cultures were treated with Colcemid and Calyculin-A before harvest. Conventional and triage scores were performed for each dose, proportion of irradiated blood and donor. The Papworth's u test was used to evaluate the PCC-R distribution per cell. A dose-response relationship was fitted according to the maximum likelihood method using the frequencies of PCC-R obtained from 100% irradiated blood. The dose to the partially irradiated blood was estimated using the Contaminated Poisson method. A new D₀ value of 10.9 Gy was calculated and used to estimate the initial fraction of irradiated cells. The results presented here indicate that by PCC-R it is possible to distinguish between simulated partial- and whole-body irradiations by the u-test, and to accurately estimate the dose from 10-20 Gy, and the initial fraction of irradiated cells in the interval from 10-90%.
Biological dosimetry in a group of radiologists by the analysis of dicentrics and translocations.
Montoro, A; Rodríguez, P; Almonacid, M; Villaescusa, J I; Verdú, G; Caballín, M R; Barrios, L; Barquinero, J F
2005-11-01
The results of a cytogenetic study carried out in a group of nine radiologists are presented. Chromosome aberrations were detected by fluorescence plus Giemsa staining and fluorescence in situ hybridization. Dose estimates were obtained by extrapolating the yield of dicentrics and translocations to their respective dose-effect curves. In seven individuals, the 95% confidence limits of the doses estimated by dicentrics did not include 0 Gy. The 99 dicentrics observed in 17,626 cells gave a collective estimated dose of 115 mGy (95% confidence limits 73-171). For translocations, five individuals had estimated doses that were clearly higher than the total accumulated recorded dose. The 82 total apparently simple translocations observed in 9722 cells gave a collective estimated dose of 275 mGy (132-496). The mean genomic frequencies (x100 +/- SE) of complete and total apparently simple translocations observed in the group of radiologists (1.91 +/- 0.30 and 2.67 +/- 0.34, respectively) were significantly higher than those observed in a matched control group (0.53 +/- 0.10 and 0.87 +/- 0.13, P < 0.01 in both cases) and in another occupationally exposed matched group (0.79 +/- 0.12 and 1.14 +/-0.14, P < 0.03 and P < 0.01, respectively). The discrepancies observed between the physically recorded doses and the biologically estimated doses indicate that the radiologists did not always wear their dosimeters or that the dosimeters were not always in the radiation field.
Temporal analysis of the October 1989 proton flare using computerized anatomical models
NASA Technical Reports Server (NTRS)
Simonsen, L. C.; Cucinotta, F. A.; Atwell, W.; Nealy, J. E.
1993-01-01
The GOES-7 time history data of hourly averaged integral proton fluxes at various particle kinetic energies are analyzed for the solar proton event that occurred between October 19 and 29, 1989. By analyzing the time history data, the dose rates which may vary over many orders of magnitude in the early phases of the flare can be estimated as well as the cumulative dose as a function of time. Basic transport calculations are coupled with detailed body organ thickness distributions from computerized anatomical models to estimate dose rates and cumulative doses to 20 critical body organs. For a 5-cm-thick water shield, cumulative skin, eye, and blood-forming-organ dose equivalents of 1.27, 1.23, and 0.41 Sv, respectively, are estimated. These results are approximately 40-50 percent less than the widely used 0- and 5-cm slab dose estimates. The risk of cancer incidence and mortality are also estimated for astronauts protected by various water shield thicknesses.
Radiation dose optimization in the decommissioning plan for Loviisa NPP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmberg, R.; Eurajoki, T.
1995-03-01
Finnish rules for nuclear power require a detailed decommissioning plan to be made and kept up to date already during plant operation. The main reasons for this {open_quotes}premature{close_quotes} plan, is, firstly, the need to demonstrate the feasibility of decommissioning, and, secondly, to make realistic cost estimates in order to fund money for this future operation. The decomissioning for Lovissa Nuclear Power Plant (NPP) (2{times}445 MW, PWR) was issued in 1987. It must be updated about every five years. One important aspect of the plant is an estimate of radiation doses to the decomissioning workers. The doses were recently re-estimated becausemore » of a need to decrease the total collective dose estimate in the original plan, 23 manSv. In the update, the dose was reduced by one-third. Part of the reduction was due to changes in the protection and procedures, in which ALARA considerations were taken into account, and partly because of re-estimation of the doses.« less
Benchmarking a Soil Moisture Data Assimilation System for Agricultural Drought Monitoring
NASA Technical Reports Server (NTRS)
Hun, Eunjin; Crow, Wade T.; Holmes, Thomas; Bolten, John
2014-01-01
Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this need, this paper evaluates an LDAS for agricultural drought monitoring by benchmarking individual components of the system (i.e., a satellite soil moisture retrieval algorithm, a soil water balance model and a sequential data assimilation filter) against a series of linear models which perform the same function (i.e., have the same basic inputoutput structure) as the full system component. Benchmarking is based on the calculation of the lagged rank cross-correlation between the normalized difference vegetation index (NDVI) and soil moisture estimates acquired for various components of the system. Lagged soil moistureNDVI correlations obtained using individual LDAS components versus their linear analogs reveal the degree to which non-linearities andor complexities contained within each component actually contribute to the performance of the LDAS system as a whole. Here, a particular system based on surface soil moisture retrievals from the Land Parameter Retrieval Model (LPRM), a two-layer Palmer soil water balance model and an Ensemble Kalman filter (EnKF) is benchmarked. Results suggest significant room for improvement in each component of the system.
RETRANO3 benchmarks for Beaver Valley plant transients and FSAR analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaumont, E.T.; Feltus, M.A.
1993-01-01
Any best-estimate code (e.g., RETRANO3) results must be validated against plant data and final safety analysis report (FSAR) predictions. The need for two independent means of benchmarking is necessary to ensure that the results were not biased toward a particular data set and to have a certain degree of accuracy. The code results need to be compared with previous results and show improvements over previous code results. Ideally, the two best means of benchmarking a thermal hydraulics code are comparing results from previous versions of the same code along with actual plant data. This paper describes RETRAN03 benchmarks against RETRAN02more » results, actual plant data, and FSAR predictions. RETRAN03, the Electric Power Research Institute's latest version of the RETRAN thermal-hydraulic analysis codes, offers several upgrades over its predecessor, RETRAN02 Mod5. RETRAN03 can use either implicit or semi-implicit numerics, whereas RETRAN02 Mod5 uses only semi-implicit numerics. Another major upgrade deals with slip model options. RETRAN03 added several new models, including a five-equation model for more accurate modeling of two-phase flow. RETPAN02 Mod5 should give similar but slightly more conservative results than RETRAN03 when executed with RETRAN02 Mod5 options.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greiner, Miles
Radial hydride formation in high-burnup used fuel cladding has the potential to radically reduce its ductility and suitability for long-term storage and eventual transport. To avoid this formation, the maximum post-reactor temperature must remain sufficiently low to limit the cladding hoop stress, and so that hydrogen from the existing circumferential hydrides will not dissolve and become available to re-precipitate into radial hydrides under the slow cooling conditions during drying, transfer and early dry-cask storage. The objective of this research is to develop and experimentallybenchmark computational fluid dynamics simulations of heat transfer in post-pool-storage drying operations, when high-burnup fuel cladding ismore » likely to experience its highest temperature. These benchmarked tools can play a key role in evaluating dry cask storage systems for extended storage of high-burnup fuels and post-storage transportation, including fuel retrievability. The benchmarked tools will be used to aid the design of efficient drying processes, as well as estimate variations of surface temperatures as a means of inferring helium integrity inside the canister or cask. This work will be conducted effectively because the principal investigator has experience developing these types of simulations, and has constructed a test facility that can be used to benchmark them.« less
Shared Dosimetry Error in Epidemiological Dose-Response Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail
2015-03-23
Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinates in the cohort. Another example is the Mayak Worker Dosimetry System 2013 which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takesmore » up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it was true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased the information matrix (and hence the standard errors of the estimate of β) is biased for β≠0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. Use of these methods for several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.« less
Mandal, Abhijit; Ram, Chhape; Mourya, Ankur; Singh, Navin
2017-01-01
To establish trends of estimation error of dose calculation by anisotropic analytical algorithm (AAA) with respect to dose measured by thermoluminescent dosimeters (TLDs) in air-water heterogeneity for small field size photon. TLDs were irradiated along the central axis of the photon beam in four different solid water phantom geometries using three small field size single beams. The depth dose profiles were estimated using AAA calculation model for each field sizes. The estimated and measured depth dose profiles were compared. The over estimation (OE) within air cavity were dependent on field size (f) and distance (x) from solid water-air interface and formulated as OE = - (0.63 f + 9.40) x2+ (-2.73 f + 58.11) x + (0.06 f2 - 1.42 f + 15.67). In postcavity adjacent point and distal points from the interface have dependence on field size (f) and equations are OE = 0.42 f2 - 8.17 f + 71.63, OE = 0.84 f2 - 1.56 f + 17.57, respectively. The trend of estimation error of AAA dose calculation algorithm with respect to measured value have been formulated throughout the radiation path length along the central axis of 6 MV photon beam in air-water heterogeneity combination for small field size photon beam generated from a 6 MV linear accelerator.
Use of computer code for dose distribution studies in A 60CO industrial irradiator
NASA Astrophysics Data System (ADS)
Piña-Villalpando, G.; Sloan, D. P.
1995-09-01
This paper presents a benchmark comparison between calculated and experimental absorbed dose values tor a typical product, in a 60Co industrial irradiator, located at ININ, México. The irradiator is a two levels, two layers system with overlapping product configuration with activity around 300kCi. Experimental values were obtanied from routine dosimetry, using red acrylic pellets. Typical product was Petri dishes packages, apparent density 0.13 g/cm3; that product was chosen because uniform size, large quantity and low density. Minimum dose was fixed in 15 kGy. Calculated values were obtained from QAD-CGGP code. This code uses a point kernel technique, build-up factors fitting was done by geometrical progression and combinatorial geometry is used for system description. Main modifications for the code were related with source sumilation, using punctual sources instead of pencils and an energy and anisotropic emission spectrums were included. Results were, for maximum dose, calculated value (18.2 kGy) was 8% higher than experimental average value (16.8 kGy); for minimum dose, calculated value (13.8 kGy) was 3% higher than experimental average value (14.3 kGy).
Variability within Systemic In Vivo Toxicity Studies (ASCCT)
In vivo studies have long been considered the gold standard for toxicology screening. Often time models developed in silico and/or using in vitro data to estimate points of departures (POD) are compared to the in vivo data to benchmark and evaluate quality and goodness of fit. ...
Benchmarking the performance of a land data assimilation system for agricultural drought monitoring
USDA-ARS?s Scientific Manuscript database
The application of land data assimilation systems to operational agricultural drought monitoring requires the development of (at least) three separate system sub-components: 1) a retrieval model to invert satellite-derived observations into soil moisture estimates, 2) a prognostic soil water balance...
SU-E-T-129: Are Knowledge-Based Planning Dose Estimates Valid for Distensible Organs?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalonde, R; Heron, D; Huq, M
2015-06-15
Purpose: Knowledge-based planning programs have become available to assist treatment planning in radiation therapy. Such programs can be used to generate estimated DVHs and planning constraints for organs at risk (OARs), based upon a model generated from previous plans. These estimates are based upon the planning CT scan. However, for distensible OARs like the bladder and rectum, daily variations in volume may make the dose estimates invalid. The purpose of this study is to determine whether knowledge-based DVH dose estimates may be valid for distensible OARs. Methods: The Varian RapidPlan™ knowledge-based planning module was used to generate OAR dose estimatesmore » and planning objectives for 10 prostate cases previously planned with VMAT, and final plans were calculated for each. Five weekly setup CBCT scans of each patient were then downloaded and contoured (assuming no change in size and shape of the target volume), and rectum and bladder DVHs were recalculated for each scan. Dose volumes were then compared at 75, 60,and 40 Gy for the bladder and rectum between the planning scan and the CBCTs. Results: Plan doses and estimates matched well at all dose points., Volumes of the rectum and bladder varied widely between planning CT and the CBCTs, ranging from 0.46 to 2.42 for the bladder and 0.71 to 2.18 for the rectum, causing relative dose volumes to vary between planning CT and CBCT, but absolute dose volumes were more consistent. The overall ratio of CBCT/plan dose volumes was 1.02 ±0.27 for rectum and 0.98 ±0.20 for bladder in these patients. Conclusion: Knowledge-based planning dose volume estimates for distensible OARs are still valid, in absolute volume terms, between treatment planning scans and CBCT’s taken during daily treatment. Further analysis of the data is being undertaken to determine how differences depend upon rectum and bladder filling state. This work has been supported by Varian Medical Systems.« less
Emigh, Brent; Gordon, Christopher L; Connolly, Bairbre L; Falkiner, Michelle; Thomas, Karen E
2013-09-01
There is a need for updated radiation dose estimates in pediatric fluoroscopy given the routine use of new dose-saving technologies and increased radiation safety awareness in pediatric imaging. To estimate effective doses for standardized pediatric upper gastrointestinal (UGI) examinations at our institute using direct dose measurement, as well as provide dose-area product (DAP) to effective dose conversion factors to be used for the estimation of UGI effective doses for boys and girls up to 10 years of age at other centers. Metal oxide semiconductor field-effect transistor (MOSFET) dosimeters were placed within four anthropomorphic phantoms representing children ≤10 years of age and exposed to mock UGI examinations using exposures much greater than used clinically to minimize measurement error. Measured effective dose was calculated using ICRP 103 weights and scaled to our institution's standardized clinical UGI (3.6-min fluoroscopy, four spot exposures and four examination beam projections) as determined from patient logs. Results were compared to Monte Carlo simulations and related to fluoroscope-displayed DAP. Measured effective doses for standardized pediatric UGI examinations in our institute ranged from 0.35 to 0.79 mSv in girls and were 3-8% lower for boys. Simulation-derived and measured effective doses were in agreement (percentage differences <19%, T > 0.18). DAP-to-effective dose conversion factors ranged from 6.5 ×10(-4) mSv per Gy-cm(2) to 4.3 × 10(-3) mSv per Gy-cm(2) for girls and were similarly lower for boys. Using modern fluoroscopy equipment, the effective dose associated with the UGI examination in children ≤10 years at our institute is < 1 mSv. Estimations of effective dose associated with pediatric UGI examinations can be made for children up to the age of 10 using the DAP-normalized conversion factors provided in this study. These estimates can be further refined to reflect individual hospital examination protocols through the use of direct organ dose measurement using MOSFETs, which were shown to agree with Monte Carlo simulated doses.
Distributed and decentralized state estimation in gas networks as distributed parameter systems.
Ahmadian Behrooz, Hesam; Boozarjomehry, R Bozorgmehry
2015-09-01
In this paper, a framework for distributed and decentralized state estimation in high-pressure and long-distance gas transmission networks (GTNs) is proposed. The non-isothermal model of the plant including mass, momentum and energy balance equations are used to simulate the dynamic behavior. Due to several disadvantages of implementing a centralized Kalman filter for large-scale systems, the continuous/discrete form of extended Kalman filter for distributed and decentralized estimation (DDE) has been extended for these systems. Accordingly, the global model is decomposed into several subsystems, called local models. Some heuristic rules are suggested for system decomposition in gas pipeline networks. In the construction of local models, due to the existence of common states and interconnections among the subsystems, the assimilation and prediction steps of the Kalman filter are modified to take the overlapping and external states into account. However, dynamic Riccati equation for each subsystem is constructed based on the local model, which introduces a maximum error of 5% in the estimated standard deviation of the states in the benchmarks studied in this paper. The performance of the proposed methodology has been shown based on the comparison of its accuracy and computational demands against their counterparts in centralized Kalman filter for two viable benchmarks. In a real life network, it is shown that while the accuracy is not significantly decreased, the real-time factor of the state estimation is increased by a factor of 10. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Yang, Yu-Jiao; Wang, Shuai; Zhang, Biao; Shen, Hong-Bin
2018-06-25
As a relatively new technology to solve the three-dimensional (3D) structure of a protein or protein complex, single-particle reconstruction (SPR) of cryogenic electron microscopy (cryo-EM) images shows much superiority and is in a rapidly developing stage. Resolution measurement in SPR, which evaluates the quality of a reconstructed 3D density map, plays a critical role in promoting methodology development of SPR and structural biology. Because there is no benchmark map in the generation of a new structure, how to realize the resolution estimation of a new map is still an open problem. Existing approaches try to generate a hypothetical benchmark map by reconstructing two 3D models from two halves of the original 2D images for cross-reference, which may result in a premature estimation with a half-data model. In this paper, we report a new self-reference-based resolution estimation protocol, called SRes, that requires only a single reconstructed 3D map. The core idea of SRes is to perform a multiscale spectral analysis (MSSA) on the map through multiple size-variable masks segmenting the map. The MSSA-derived multiscale spectral signal-to-noise ratios (mSSNRs) reveal that their corresponding estimated resolutions will show a cliff jump phenomenon, indicating a significant change in the SSNR properties. The critical point on the cliff borderline is demonstrated to be the right estimator for the resolution of the map.
Burkill, Sarah; Couper, Mick P; Conrad, Frederick; Clifton, Soazig; Tanton, Clare; Phelps, Andrew; Datta, Jessica; Mercer, Catherine H; Sonnenberg, Pam; Prah, Philip; Mitchell, Kirstin R; Wellings, Kaye; Johnson, Anne M; Copas, Andrew J
2014-01-01
Background Nonprobability Web surveys using volunteer panels can provide a relatively cheap and quick alternative to traditional health and epidemiological surveys. However, concerns have been raised about their representativeness. Objective The aim was to compare results from different Web panels with a population-based probability sample survey (n=8969 aged 18-44 years) that used computer-assisted self-interview (CASI) for sensitive behaviors, the third British National Survey of Sexual Attitudes and Lifestyles (Natsal-3). Methods Natsal-3 questions were included on 4 nonprobability Web panel surveys (n=2000 to 2099), 2 using basic quotas based on age and sex, and 2 using modified quotas based on additional variables related to key estimates. Results for sociodemographic characteristics were compared with external benchmarks and for sexual behaviors and opinions with Natsal-3. Odds ratios (ORs) were used to express differences between the benchmark data and each survey for each variable of interest. A summary measure of survey performance was the average absolute OR across variables. Another summary measure was the number of key estimates for which the survey differed significantly (at the 5% level) from the benchmarks. Results For sociodemographic variables, the Web surveys were less representative of the general population than Natsal-3. For example, for men, the average absolute OR for Natsal-3 was 1.14, whereas for the Web surveys the average absolute ORs ranged from 1.86 to 2.30. For all Web surveys, approximately two-thirds of the key estimates of sexual behaviors were different from Natsal-3 and the average absolute ORs ranged from 1.32 to 1.98. Differences were appreciable even for questions asked by CASI in Natsal-3. No single Web survey performed consistently better than any other did. Modified quotas slightly improved results for men, but not for women. Conclusions Consistent with studies from other countries on less sensitive topics, volunteer Web panels provided appreciably biased estimates. The differences seen with Natsal-3 CASI questions, where mode effects may be similar, suggest a selection bias in the Web surveys. The use of more complex quotas may lead to some improvement, but many estimates are still likely to differ. Volunteer Web panels are not recommended if accurate prevalence estimates for the general population are a key objective. PMID:25488851
Uncertainty of fast biological radiation dose assessment for emergency response scenarios.
Ainsbury, Elizabeth A; Higueras, Manuel; Puig, Pedro; Einbeck, Jochen; Samaga, Daniel; Barquinero, Joan Francesc; Barrios, Lleonard; Brzozowska, Beata; Fattibene, Paola; Gregoire, Eric; Jaworska, Alicja; Lloyd, David; Oestreicher, Ursula; Romm, Horst; Rothkamm, Kai; Roy, Laurence; Sommer, Sylwester; Terzoudi, Georgia; Thierens, Hubert; Trompier, Francois; Vral, Anne; Woda, Clemens
2017-01-01
Reliable dose estimation is an important factor in appropriate dosimetric triage categorization of exposed individuals to support radiation emergency response. Following work done under the EU FP7 MULTIBIODOSE and RENEB projects, formal methods for defining uncertainties on biological dose estimates are compared using simulated and real data from recent exercises. The results demonstrate that a Bayesian method of uncertainty assessment is the most appropriate, even in the absence of detailed prior information. The relative accuracy and relevance of techniques for calculating uncertainty and combining assay results to produce single dose and uncertainty estimates is further discussed. Finally, it is demonstrated that whatever uncertainty estimation method is employed, ignoring the uncertainty on fast dose assessments can have an important impact on rapid biodosimetric categorization.
Quantum-enhanced multiparameter estimation in multiarm interferometers
Ciampini, Mario A.; Spagnolo, Nicolò; Vitelli, Chiara; Pezzè, Luca; Smerzi, Augusto; Sciarrino, Fabio
2016-01-01
Quantum metrology is the state-of-the-art measurement technology. It uses quantum resources to enhance the sensitivity of phase estimation over that achievable by classical physics. While single parameter estimation theory has been widely investigated, much less is known about the simultaneous estimation of multiple phases, which finds key applications in imaging and sensing. In this manuscript we provide conditions of useful particle (qudit) entanglement for multiphase estimation and adapt them to multiarm Mach-Zehnder interferometry. We theoretically discuss benchmark multimode Fock states containing useful qudit entanglement and overcoming the sensitivity of separable qudit states in three and four arm Mach-Zehnder-like interferometers - currently within the reach of integrated photonics technology. PMID:27381743
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Lin; Du, Xining; Liu, Tianyu
Purpose: Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHER{sub RT} is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head and neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHER{sub RT}. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improvemore » the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHER{sub RT} and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHER{sub RT} agree well with DOSXYZnrc. For clinical cases, results from ARCHER{sub RT} are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung case, and head and neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to specific architecture of GPU, modified Woodcock tracking algorithm performed inferior to the original one. ARCHER{sub RT} achieves a fast speed for PSF-based dose calculations. With a single M2090 card, the simulations cost about 60, 50, 80 s for three cases, respectively, with the 1% statistical error in the PTV. Using the latest K40 card, the simulations are 1.7–1.8 times faster. More impressively, six M2090 cards could finish the simulations in 8.9–13.4 s. For comparison, the same simulations on Intel E5-2620 (12 hyperthreading) cost about 500–800 s. Conclusions: ARCHER{sub RT} was developed successfully to perform fast and accurate MC dose calculation for radiotherapy using PSFs and patient CT phantoms.« less
Chernobyl accident: reconstruction of thyroid dose for inhabitants of the Republic of Belarus.
Gavrilin, Y I; Khrouch, V T; Shinkarev, S M; Krysenko, N A; Skryabin, A M; Bouville, A; Anspaugh, L R
1999-02-01
The Chernobyl accident in April 1986 resulted in widespread contamination of the environment with radioactive materials, including (131)I and other radioiodines. This environmental contamination led to substantial radiation doses in the thyroids of many inhabitants of the Republic of Belarus. The reconstruction of thyroid doses received by Belarussians is based primarily on exposure rates measured against the neck of more than 200,000 people in the more contaminated territories; these measurements were carried out within a few weeks after the accident and before the decay of (131)I to negligible levels. Preliminary estimates of thyroid dose have been divided into 3 classes: Class 1 ("measured" doses), Class 2 (doses "derived by affinity"), and Class 3 ("empirically-derived" doses). Class 1 doses are estimated directly from the measured thyroidal (131)I content of the person considered, plus information on lifestyle and dietary habits. Such estimates are available for about 130,000 individuals from the contaminated areas of the Gomel and Mogilev Oblasts and from the city of Minsk. Maximum individual doses are estimated to range up to about 60 Gy. For every village with a sufficient number of residents with Class 1 doses, individual thyroid dose distributions are determined for several age groups and levels of milk consumption. These data are used to derive Class 2 thyroid dose estimates for unmeasured inhabitants of these villages. For any village where the number of residents with Class 1 thyroid doses is small or equal to zero, individual thyroid doses of Class 3 are derived from the relationship obtained between the mean adult thyroid dose and the deposition density of (131)I or 137Cs in villages with Class 2 thyroid doses presenting characteristics similar to those of the village considered. In order to improve the reliability of the Class 3 thyroid doses, an extensive program of measurement of (129)I in soils is envisaged.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Jinghao; Kim, Sung; Jabbour, Salma
2010-03-15
Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CTmore » (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidian distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to 6.54 mm for ASM. The volume overlap ratio ranged from 79% to 91% for ACRASM and from 44% to 80% for ASM. These data demonstrated that the segmentation results of ACRASM were in better agreement with the corresponding benchmarks than those of ASM. The developed registration algorithm was quantitatively evaluated by comparing the registered target volumes from the pCT to the benchmarks on the CBCT. The mean distance and the root mean square error ranged from 0.38 to 2.2 mm and from 0.45 to 2.36 mm, respectively, between the CBCT images and the registered pCT. The mean overlap ratio of the prostate volumes ranged from 85.2% to 95% after registration. The average time of the ACRASM-based segmentation was under 1 min. The average time of the global transformation was from 2 to 4 min on two 3D volumes and the average time of the local transformation was from 20 to 34 s on two deformable superquadrics mesh models. Conclusions: A novel and fast segmentation and deformable registration method was developed to capture the transformation between the planning and treatment images for external beam radiotherapy of prostate cancers. This method increases the computational efficiency and may provide foundation to achieve real time adaptive radiotherapy.« less
Oral toxicity of 3-nitro-1,2,4-triazol-5-one in rats.
Crouse, Lee C B; Lent, Emily May; Leach, Glenn J
2015-01-01
3-Nitro-1,2,4-triazol-5-one (NTO), an insensitive explosive, was evaluated to assess potential environmental and human health effects. A 14-day oral toxicity study in Sprague-Dawley rats was conducted with NTO in polyethylene glycol -200 by gavage at doses of 0, 250, 500, 1000, 1500, or 2000 mg/kg-d. Body mass and food consumption decreased in males (2000 mg/kg-d), and testes mass was reduced at doses of 500 mg/kg-d and greater. Based on the findings in the 14-day study, a 90-day study was conducted at doses of 0, 30, 100, 315, or 1000 mg/kg-d NTO. There was no effect on food consumption, body mass, or neurobehavioral parameters. Males in the 315 and 1000 mg/kg-d groups had reduced testes mass with associated tubular degeneration and atrophy. The testicular effects were the most sensitive adverse effect and were used to derive a benchmark dose (BMD) of 70 mg/kg-d with a 10% effect level (BMDL10) of 40 mg/kg-d. © The Author(s) 2015.
Peak skin and eye lens radiation dose from brain perfusion CT based on Monte Carlo simulation.
Zhang, Di; Cagnon, Chris H; Villablanca, J Pablo; McCollough, Cynthia H; Cody, Dianna D; Stevens, Donna M; Zankl, Maria; Demarco, John J; Turner, Adam C; Khatonabadi, Maryam; McNitt-Gray, Michael F
2012-02-01
The purpose of our study was to accurately estimate the radiation dose to skin and the eye lens from clinical CT brain perfusion studies, investigate how well scanner output (expressed as volume CT dose index [CTDI(vol)]) matches these estimated doses, and investigate the efficacy of eye lens dose reduction techniques. Peak skin dose and eye lens dose were estimated using Monte Carlo simulation methods on a voxelized patient model and 64-MDCT scanners from four major manufacturers. A range of clinical protocols was evaluated. CTDI(vol) for each scanner was obtained from the scanner console. Dose reduction to the eye lens was evaluated for various gantry tilt angles as well as scan locations. Peak skin dose and eye lens dose ranged from 81 mGy to 348 mGy, depending on the scanner and protocol used. Peak skin dose and eye lens dose were observed to be 66-79% and 59-63%, respectively, of the CTDI(vol) values reported by the scanners. The eye lens dose was significantly reduced when the eye lenses were not directly irradiated. CTDI(vol) should not be interpreted as patient dose; this study has shown it to overestimate dose to the skin or eye lens. These results may be used to provide more accurate estimates of actual dose to ensure that protocols are operated safely below thresholds. Tilting the gantry or moving the scanning region further away from the eyes are effective for reducing lens dose in clinical practice. These actions should be considered when they are consistent with the clinical task and patient anatomy.
DOSE COEFFICIENTS FOR LIVER CHEMOEMBOLISATION PROCEDURES USING MONTE CARLO CODE.
Karavasilis, E; Dimitriadis, A; Gonis, H; Pappas, P; Georgiou, E; Yakoumakis, E
2016-12-01
The aim of the present study is the estimation of radiation burden during liver chemoembolisation procedures. Organ dose and effective dose conversion factors, normalised to dose-area product (DAP), were estimated for chemoembolisation procedures using a Monte Carlo transport code in conjunction with an adult mathematical phantom. Exposure data from 32 patients were used to determine the exposure projections for the simulations. Equivalent organ (H T ) and effective (E) doses were estimated using individual DAP values. The organs receiving the highest amount of doses during these exams were lumbar spine, liver and kidneys. The mean effective dose conversion factor was 1.4 Sv Gy -1 m -2 Dose conversion factors can be useful for patient-specific radiation burden during chemoembolisation procedures. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rupcich, Franco; Badal, Andreu; Kyprianou, Iacovos
Purpose: The purpose of this study was to develop a database for estimating organ dose in a voxelized patient model for coronary angiography and brain perfusion CT acquisitions with any spectra and angular tube current modulation setting. The database enables organ dose estimation for existing and novel acquisition techniques without requiring Monte Carlo simulations. Methods: The study simulated transport of monoenergetic photons between 5 and 150 keV for 1000 projections over 360 Degree-Sign through anthropomorphic voxelized female chest and head (0 Degree-Sign and 30 Degree-Sign tilt) phantoms and standard head and body CTDI dosimetry cylinders. The simulations resulted in tablesmore » of normalized dose deposition for several radiosensitive organs quantifying the organ dose per emitted photon for each incident photon energy and projection angle for coronary angiography and brain perfusion acquisitions. The values in a table can be multiplied by an incident spectrum and number of photons at each projection angle and then summed across all energies and angles to estimate total organ dose. Scanner-specific organ dose may be approximated by normalizing the database-estimated organ dose by the database-estimated CTDI{sub vol} and multiplying by a physical CTDI{sub vol} measurement. Two examples are provided demonstrating how to use the tables to estimate relative organ dose. In the first, the change in breast and lung dose during coronary angiography CT scans is calculated for reduced kVp, angular tube current modulation, and partial angle scanning protocols relative to a reference protocol. In the second example, the change in dose to the eye lens is calculated for a brain perfusion CT acquisition in which the gantry is tilted 30 Degree-Sign relative to a nontilted scan. Results: Our database provides tables of normalized dose deposition for several radiosensitive organs irradiated during coronary angiography and brain perfusion CT scans. Validation results indicate total organ doses calculated using our database are within 1% of those calculated using Monte Carlo simulations with the same geometry and scan parameters for all organs except red bone marrow (within 6%), and within 23% of published estimates for different voxelized phantoms. Results from the example of using the database to estimate organ dose for coronary angiography CT acquisitions show 2.1%, 1.1%, and -32% change in breast dose and 2.1%, -0.74%, and 4.7% change in lung dose for reduced kVp, tube current modulated, and partial angle protocols, respectively, relative to the reference protocol. Results show -19.2% difference in dose to eye lens for a tilted scan relative to a nontilted scan. The reported relative changes in organ doses are presented without quantification of image quality and are for the sole purpose of demonstrating the use of the proposed database. Conclusions: The proposed database and calculation method enable the estimation of organ dose for coronary angiography and brain perfusion CT scans utilizing any spectral shape and angular tube current modulation scheme by taking advantage of the precalculated Monte Carlo simulation results. The database can be used in conjunction with image quality studies to develop optimized acquisition techniques and may be particularly beneficial for optimizing dual kVp acquisitions for which numerous kV, mA, and filtration combinations may be investigated.« less
Simplification of an MCNP model designed for dose rate estimation
NASA Astrophysics Data System (ADS)
Laptev, Alexander; Perry, Robert
2017-09-01
A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.
NASA Astrophysics Data System (ADS)
Srivastava, Prashant K.; Petropoulos, George P.; Gupta, Manika; Islam, Tanvir
2015-04-01
Soil Moisture Deficit (SMD) is a key variable in the water and energy exchanges that occur at the land-surface/atmosphere interface. Monitoring SMD is an alternate method of irrigation scheduling and represents the use of the suitable quantity of water at the proper time by combining measurements of soil moisture deficit. In past it is found that LST has a strong relation to SMD, which can be estimated by MODIS or numerical weather prediction model such as WRF (Weather Research and Forecasting model). By looking into the importance of SMD, this work focused on the application of Artificial Neural Network (ANN) for evaluating its capabilities towards SMD estimation using the LST data estimated from MODIS and WRF mesoscale model. The benchmark SMD estimated from Probability Distribution Model (PDM) over the Brue catchment, Southwest of England, U.K. is used for all the calibration and validation experiments. The performances between observed and simulated SMD are assessed in terms of the Nash-Sutcliffe Efficiency (NSE), the Root Mean Square Error (RMSE) and the percentage of bias (%Bias). The application of the ANN confirmed a high capability WRF and MODIS LST for prediction of SMD. Performance during the ANN calibration and validation showed a good agreement between benchmark and estimated SMD with MODIS LST information with significantly higher performance than WRF simulated LST. The work presented showed the first comprehensive application of LST from MODIS and WRF mesoscale model for hydrological SMD estimation, particularly for the maritime climate. More studies in this direction are recommended to hydro-meteorological community, so that useful information will be accumulated in the technical literature domain for different geographical locations and climatic conditions. Keyword: WRF, Land Surface Temperature, MODIS satellite, Soil Moisture Deficit, Neural Network
Lui, Kung-Jong; Chang, Kuang-Chao
2015-01-01
When comparing two doses of a new drug with a placebo, we may consider using a crossover design subject to the condition that the high dose cannot be administered before the low dose. Under a random-effects logistic regression model, we focus our attention on dichotomous responses when the high dose cannot be used first under a three-period crossover trial. We derive asymptotic test procedures for testing equality between treatments. We further derive interval estimators to assess the magnitude of the relative treatment effects. We employ Monte Carlo simulation to evaluate the performance of these test procedures and interval estimators in a variety of situations. We use the data taken as a part of trial comparing two different doses of an analgesic with a placebo for the relief of primary dysmenorrhea to illustrate the use of the proposed test procedures and estimators.
DOSESCREEN: a computer program to aid dose placement
Kimberly C. Smith; Jacqueline L. Robertson
1984-01-01
Careful selection of an experimental design for a bioassay substantially improves the precision of effective dose (ED) estimates. Design considerations typically include determination of sample size, dose selection, and allocation of subjects to doses. DOSESCREEN is a computer program written to help investigators select an efficient design for the estimation of an...
Park, Robert M; Gilbert, Stephen J
2018-06-01
The butter flavoring additive, diacetyl (DA), can cause bronchiolitis obliterans (BO) by inhalation. A risk assessment was performed using data from a microwave popcorn manufacturing plant. Current employees' medical history and pulmonary function tests together with air sampling over a 2.7-year period were used to analyze forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity (FVC). The exposure responses for declining pulmonary function and for possible early onset of BO were estimated using multiple regression methods. Several exposure metrics were investigated; benchmark dose and excess lifetime risk of impairment were calculated. Forty-six percent of the population had less than 6 months exposure to DA. Percent-of-predicted FEV1 declined with cumulative exposure (0.40 per ppm-yr, P < 10) as did percent FEV1/FVC (0.13 per ppm-yr, P = 0.0004). Lifetime respiratory impairment prevalence of one per thousand resulted from 0.005 ppm DA and one per thousand lifetime incidence of impairment was predicted for 0.002 ppm DA. DA exposures, often exceeding 1 ppm in the past, place workers at high risk of pulmonary impairment.
Assessment of radiation doses from residential smoke detectors that contain americium-241
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Donnell, F.R.; Etnier, E.L.; Holton, G.A.
1981-10-01
External dose equivalents and internal dose commitments were estimated for individuals and populations from annual distribution, use, and disposal of 10 million ionization chamber smoke detectors that contain 110 kBq (3 ..mu..Ci) americium-241 each. Under exposure scenarios developed for normal distribution, use, and disposal using the best available information, annual external dose equivalents to average individuals were estimated to range from 4 fSv (0.4 prem) to 20 nSv (2 ..mu..rem) for total body and from 7 fSv to 40 nSv for bone. Internal dose commitments to individuals under post disposal scenarios were estimated to range from 0.006 to 80 ..mu..Svmore » (0.0006 to 8 mrem) to total body and from 0.06 to 800 ..mu..Sv to bone. The total collective dose (the sum of external dose equivalents and 50-year internal dose commitments) for all individuals involved with distribution, use, or disposal of 10 million smoke detectors was estimated to be about 0.38 person-Sv (38 person-rem) to total body and 00 ft/sup 2/).« less
beta- and gamma-Comparative dose estimates on Enewetak Atoll.
Crase, K W; Gudiksen, P H; Robison, W L
1982-05-01
Enewetak Atoll is one of the Pacific atolls used for atmospheric testing of U.S. nuclear weapons. Beta dose and gamma-ray exposure measurements were made on two islands of the Enewetak Atoll during July-August 1976 to determine the beta and low energy gamma-contribution to the total external radiation doses to the returning Marshallese. Measurements were made at numerous locations with thermoluminescent dosimeters (TLD), pressurized ionization chambers, portable NaI detectors, and thin-window pancake GM probes. Results of the TLD measurements with and without a beta-attenuator indicate that approx. 29% of the total dose rate at 1 m in air is due to beta- or low energy gamma-contribution. The contribution at any particular site, however, is somewhat dependent on ground cover, since a minimal amount of vegetation will reduce it significantly from that over bare soil, but thick stands of vegetation have little effect on any further reductions. Integral 30-yr external shallow dose estimates for future inhabitants were made and compared with external dose estimates of a previous large scale radiological survey (En73). Integral 30-yr shallow external dose estimates are 25-50% higher than whole body estimates. Due to the low penetrating ability of the beta's or low energy gamma's, however, several remedial actions can be taken to reduce the shallow dose contribution to the total external dose.
ESR dosimetry for atomic bomb survivors and radiologic technologists
NASA Astrophysics Data System (ADS)
Tatsumi-Miyajima, Junko
1987-06-01
An individual absorbed dose for atomic bomb (A-bomb) survivors and radiologic technologists has been estimated using a new personal dosimetry. This dosimetry is based on the electron spin resonance (ESR) spectroscopy of the CO 33- radicals, which are produced in their teeth by radiation. Measurements were carried out to study the characteristics of the dosimetry; the ESR signals of the CO 33- radicals were stable and increased linearly with the radiation dose. In the evaluation of the absorbed dose, the ESR signals were considered to be a function of photon energy. The absorbed doses in ten cases of A-bomb victims and eight cases of radiologic technologists were determined. For A-bomb survivors, the adsorbed doses, which were estimated using the ESR dosimetry, were consistent with the ones obtained using the calculations of the tissue dose in air of A-bomb, and also with the ones obtained using the chromosome measurements. For radiologic technologists, the absorbed doses, which were estimated using the ESR dosimetry, agreed with the ones calculated using the information on the occupational history and conditions. The advantages of this method are that the absorbed dose can be directly estimated by measuring the ESR signals obtained from the teeth of persons, who are exposed to radiation. Therefore, the ESR dosimetry is useful to estimate the accidental exposure and the long term cumulative dose.