Sample records for benchmark dose analysis

  1. EPA and EFSA approaches for Benchmark Dose modeling

    EPA Science Inventory

    Benchmark dose (BMD) modeling has become the preferred approach in the analysis of toxicological dose-response data for the purpose of deriving human health toxicity values. The software packages most often used are Benchmark Dose Software (BMDS, developed by EPA) and PROAST (de...

  2. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The purpose of this document is to provide guidance for the Agency on the application of the benchmark dose approach in determining the point of departure (POD) for health effects data, whether a linear or nonlinear low dose extrapolation is used. The guidance includes discussion on computation of benchmark doses and benchmark concentrations (BMDs and BMCs) and their lower confidence limits, data requirements, dose-response analysis, and reporting requirements. This guidance is based on today's knowledge and understanding, and on experience gained in using this approach.

  3. A Consumer's Guide to Benchmark Dose Models: Results of U.S. EPA Testing of 14 Dichotomous, 8 Continuous, and 6 Developmental Models (Presentation)

    EPA Science Inventory

    Benchmark dose risk assessment software (BMDS) was designed by EPA to generate dose-response curves and facilitate the analysis, interpretation and synthesis of toxicological data. Partial results of QA/QC testing of the EPA benchmark dose software (BMDS) are presented. BMDS pr...

  4. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
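
    A rough sketch of the estimation scheme described above, using invented quantal data: fit a monotone (isotonic) dose-response curve, invert it at a 10% extra risk to obtain a BMD, and use a simple parametric bootstrap for a one-sided lower confidence limit. The data, the grid inversion, and the resampling scheme are illustrative assumptions; the paper's exact bootstrap may differ.

    ```python
    # Isotonic-regression BMD with a bootstrap BMDL (hypothetical quantal data).
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
    n = np.array([50, 50, 50, 50, 50])      # animals per dose group
    y = np.array([2, 4, 9, 20, 35])         # responders per dose group
    bmr = 0.10                              # benchmark response (extra risk)

    def isotonic_bmd(doses, y, n, bmr):
        """Fit a monotone dose-response curve and invert it for the BMD."""
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
        iso.fit(doses, y / n, sample_weight=n)
        grid = np.linspace(doses.min(), doses.max(), 2001)
        p = iso.predict(grid)
        target = p[0] + bmr * (1.0 - p[0])  # extra-risk definition of the BMR
        above = np.nonzero(p >= target)[0]
        return grid[above[0]] if above.size else np.nan

    bmd = isotonic_bmd(doses, y, n, bmr)

    # Percentile bootstrap for a one-sided 95% lower confidence limit (BMDL)
    rng = np.random.default_rng(1)
    boot = [isotonic_bmd(doses, rng.binomial(n, y / n), n, bmr) for _ in range(2000)]
    bmdl = np.nanpercentile(boot, 5)
    print(f"BMD = {bmd:.2f}, BMDL = {bmdl:.2f}")
    ```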

  5. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

    The tool provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are derived.

  6. Correlation of Noncancer Benchmark Doses in Short- and Long-Term Rodent Bioassays.

    PubMed

    Kratchman, Jessica; Wang, Bing; Fox, John; Gray, George

    2018-05-01

    This study investigated whether, in the absence of chronic noncancer toxicity data, short-term noncancer toxicity data can be used to predict chronic toxicity effect levels by focusing on the dose-response relationship instead of a critical effect. Data from National Toxicology Program (NTP) technical reports have been extracted and modeled using the Environmental Protection Agency's Benchmark Dose Software. Best-fit, minimum benchmark dose (BMD), and benchmark dose lower limits (BMDLs) have been modeled for all NTP pathologist identified significant nonneoplastic lesions, final mean body weight, and mean organ weight of 41 chemicals tested by NTP between 2000 and 2012. Models were then developed at the chemical level using orthogonal regression techniques to predict chronic (two years) noncancer health effect levels using the results of the short-term (three months) toxicity data. The findings indicate that short-term animal studies may reasonably provide a quantitative estimate of a chronic BMD or BMDL. This can allow for faster development of human health toxicity values for risk assessment for chemicals that lack chronic toxicity data. © 2017 Society for Risk Analysis.
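
    The chemical-level prediction step above can be illustrated with orthogonal distance regression on log-transformed values. The paired short-term and chronic BMDL values below are invented placeholders (not NTP results), and the log10 scale is an assumption for this sketch.

    ```python
    # Orthogonal (total least squares) regression of chronic on short-term log10 BMDLs.
    import numpy as np
    from scipy import odr

    short_term_bmdl = np.array([3.2, 11.0, 47.0, 150.0, 520.0])   # mg/kg-day, hypothetical
    chronic_bmdl = np.array([1.1, 4.0, 16.0, 60.0, 210.0])        # mg/kg-day, hypothetical

    x, y = np.log10(short_term_bmdl), np.log10(chronic_bmdl)

    def linear(beta, x):
        # beta[0] = slope, beta[1] = intercept on the log10 scale
        return beta[0] * x + beta[1]

    fit = odr.ODR(odr.RealData(x, y), odr.Model(linear), beta0=[1.0, 0.0]).run()
    slope, intercept = fit.beta

    # Predict a chronic BMDL from a new 3-month BMDL (hypothetical value)
    new_short = 25.0
    predicted = 10 ** (slope * np.log10(new_short) + intercept)
    print(f"slope={slope:.2f}, intercept={intercept:.2f}, predicted chronic BMDL={predicted:.1f}")
    ```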

  7. EPA's Benchmark Dose Modeling Software

    EPA Science Inventory

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA’s human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  8. 77 FR 36533 - Notice of Availability of the Benchmark Dose Technical Guidance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-19

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9688-7] Notice of Availability of the Benchmark Dose Technical Guidance AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of Availability. SUMMARY: The U.S. Environmental Protection Agency is announcing the availability of Benchmark Dose Technical...

  9. Benchmark dose analysis via nonparametric regression modeling

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057

  10. Application of Benchmark Dose Methodology to a Variety of Endpoints and Exposures

    EPA Science Inventory

    This latest beta version (1.1b) of the U.S. Environmental Protection Agency (EPA) Benchmark Dose Software (BMDS) is being distributed for public comment. The BMDS system is being developed as a tool to facilitate the application of benchmark dose (BMD) methods to EPA hazardous p...

  11. BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.

    EPA Science Inventory

    Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...

  12. SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowenstein, J; Nguyen, H; Roll, J

    Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site’s ability to develop a treatment that meets a specific protocol’s treatment guidelines prior to placing their first patient on the protocol. A pre-treatment review is an actual patient placed on the protocol in which the dosimetry and contour volumes are evaluated to be per protocol guidelines prior to allowing the beginning of the treatment. A key component of these QA mechanisms is that sites are provided timely feedback to educate them on how to plan per the protocol and prevent protocol deviations on patients accrued to a protocol. For both benchmarks and pre-treatment reviews a dose volume analysis (DVA) was performed using MIM software™. For pre-treatment reviews a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the cases met protocol requirements. The 35 percent of sites failing the benchmark subsequently modified their planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, in 30 percent of the submitted cases the pre-treatment review failed, where the majority (71 percent) failed the DVA. 20 percent of sites submitting patients failed to correct their dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocol's requirements. Support: U24-CA-180803.

  13. A MULTIMODEL APPROACH FOR CALCULATING BENCHMARK DOSE

    EPA Science Inventory


    A Multimodel Approach for Calculating Benchmark Dose
    Ramon I. Garcia and R. Woodrow Setzer

    In the assessment of dose response, a number of plausible dose-response models may give fits that are consistent with the data. If no dose-response formulation had been speci...

  14. Translational benchmark risk analysis

    PubMed Central

    Piegorsch, Walter W.

    2010-01-01

    Translational development – in the sense of translating a mature methodology from one area of application to another, evolving area – is discussed for the use of benchmark doses in quantitative risk assessment. Illustrations are presented with traditional applications of the benchmark paradigm in biology and toxicology, and also with risk endpoints that differ from traditional toxicological archetypes. It is seen that the benchmark approach can apply to a diverse spectrum of risk management settings. This suggests a promising future for this important risk-analytic tool. Extensions of the method to a wider variety of applications represent a significant opportunity for enhancing environmental, biomedical, industrial, and socio-economic risk assessments. PMID:20953283

  15. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.

  17. TH-A-19A-04: Latent Uncertainties and Performance of a GPU-Implemented Pre-Calculated Track Monte Carlo Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, M; Seuntjens, J; Roberge, D

    Purpose: Assessing the performance and uncertainty of a pre-calculated Monte Carlo (PMC) algorithm for proton and electron transport running on graphics processing units (GPU). While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from recycling a limited number of tracks in the pre-generated track bank is missing from the literature. With a proper uncertainty analysis, an optimal pre-generated track bank size can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pre-generated for electrons and protons using EGSnrc and GEANT4, respectively. The PMC algorithm for track transport was implemented on the CUDA programming framework. GPU-PMC dose distributions were compared to benchmark dose distributions simulated using general-purpose MC codes in the same conditions. A latent uncertainty analysis was performed by comparing GPU-PMC dose values to a “ground truth” benchmark while varying the track bank size and primary particle histories. Results: GPU-PMC dose distributions and benchmark doses were within 1% of each other in voxels with dose greater than 50% of Dmax. In proton calculations, a submillimeter distance-to-agreement error was observed at the Bragg Peak. Latent uncertainty followed a Poisson distribution with the number of tracks per energy (TPE) and a track bank of 20,000 TPE produced a latent uncertainty of approximately 1%. Efficiency analysis showed a 937× and 508× gain over a single processor core running DOSXYZnrc for 16 MeV electrons in water and bone, respectively. Conclusion: The GPU-PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty below 1%. The track bank size necessary to achieve an optimal efficiency can be tuned based on the desired uncertainty. Coupled with a model to calculate dose contributions from uncharged particles, GPU-PMC is a candidate for inverse planning of modulated electron radiotherapy and scanned proton beams. This work was supported in part by FRSQ-MSSS (Grant No. 22090), NSERC RG (Grant No. 432290) and CIHR MOP (Grant No. MOP-211360).

  18. Benchmark Dose Analysis from Multiple Datasets: The Cumulative Risk Assessment for the N-Methyl Carbamate Pesticides

    EPA Science Inventory

    The US EPA’s N-Methyl Carbamate (NMC) Cumulative Risk assessment was based on the effect on acetylcholine esterase (AChE) activity of exposure to 10 NMC pesticides through dietary, drinking water, and residential exposures, assuming the effects of joint exposure to NMCs is dose-...

  19. ANALYSES OF NEUROBEHAVIORAL SCREENING DATA: BENCHMARK DOSE ESTIMATION.

    EPA Science Inventory

    Analysis of neurotoxicological screening data such as those of the functional observational battery (FOB) traditionally relies on analysis of variance (ANOVA) with repeated measurements, followed by determination of a no-adverse-effect level (NOAEL). The US EPA has proposed the ...

  20. Toxicogenomics and cancer risk assessment: a framework for key event analysis and dose-response assessment for nongenotoxic carcinogens.

    PubMed

    Bercu, Joel P; Jolly, Robert A; Flagella, Kelly M; Baker, Thomas K; Romero, Pedro; Stevens, James L

    2010-12-01

    In order to determine a threshold for nongenotoxic carcinogens, the traditional risk assessment approach has been to identify a mode of action (MOA) with a nonlinear dose-response. The dose-response for one or more key event(s) linked to the MOA for carcinogenicity allows a point of departure (POD) to be selected from the most sensitive effect dose or no-effect dose. However, this can be challenging because multiple MOAs and key events may exist for carcinogenicity and oftentimes extensive research is required to elucidate the MOA. In the present study, a microarray analysis was conducted to determine if a POD could be identified following short-term oral rat exposure with two nongenotoxic rodent carcinogens, fenofibrate and methapyrilene, using a benchmark dose analysis of genes aggregated in Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways and Gene Ontology (GO) biological processes, which likely encompass key event(s) for carcinogenicity. The gene expression response for fenofibrate given to rats for 2 days was consistent with its MOA and known key events linked to PPARα activation. The temporal response from daily dosing with methapyrilene demonstrated biological complexity with waves of pathways/biological processes occurring over 1, 3, and 7 days; nonetheless, the benchmark dose values were consistent over time. When comparing the dose-response of toxicogenomic data to tumorigenesis or precursor events, the toxicogenomics POD was slightly below any effect level. Our results suggest that toxicogenomic analysis using short-term studies can be used to identify a threshold for nongenotoxic carcinogens based on evaluation of potential key event(s) which then can be used within a risk assessment framework. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
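
    A small worked example of the point above, under the assumptions of normally distributed responses and a linear mean shift with dose: defining risk as the proportion of animals beyond the control 99th percentile, the BMD computed from the overall SD (which folds in measurement error) comes out larger than the BMD computed from the among-animal SD alone. All numbers are hypothetical.

    ```python
    # Bias in the BMD when measurement-error variance is (incorrectly) included.
    import numpy as np
    from scipy.stats import norm

    s_a = 1.0      # SD among animals (hypothetical units)
    s_m = 0.6      # measurement-error SD within animals (hypothetical)
    slope = 0.5    # assumed mean response increase per unit dose
    bmr = 0.10     # benchmark risk: 10% of animals beyond the control cutoff

    def bmd_from_sd(sd):
        # Cutoff = 99th percentile of controls; risk at dose d is
        # P(X > cutoff) = 1 - Phi((cutoff - slope*d) / sd). Solve risk = bmr for d.
        return (norm.ppf(0.99) - norm.ppf(1 - bmr)) * sd / slope

    bmd_correct = bmd_from_sd(s_a)                   # based on s_a only
    bmd_biased = bmd_from_sd(np.hypot(s_a, s_m))     # overall SD incl. measurement error
    print(f"BMD using s_a: {bmd_correct:.2f}; BMD using overall SD: {bmd_biased:.2f}")
    ```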

  2. Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data

    EPA Science Inventory

    The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...

  3. RESULTS OF QA/QC TESTING OF EPA BENCHMARK DOSE SOFTWARE VERSION 1.2

    EPA Science Inventory

    EPA is developing benchmark dose software (BMDS) to support cancer and non-cancer dose-response assessments. Following the recent public review of BMDS version 1.1b, EPA developed a Hill model for evaluating continuous data, and improved the user interface and Multistage, Polyno...

  4. Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)

    EPA Science Inventory

    EPA benchmark dose software (BMDS) is used to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...

  5. Latent uncertainties of the precalculated track Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, Marc-André; Seuntjens, Jan; Roberge, David

    Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed a 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
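
    The Poisson-like behaviour reported above implies, roughly, that the latent uncertainty scales as one over the square root of the number of unique tracks per energy (TPE). The sketch below assumes that scaling, anchored to the ~1% at 60,000 TPE figure quoted in the abstract, to size a track bank for a target uncertainty; it is an illustration, not the authors' code.

    ```python
    # Assumed 1/sqrt(N) scaling of latent uncertainty with tracks per energy (TPE).
    import math

    REF_TPE = 60_000          # tracks per energy quoted for ~1% latent uncertainty
    REF_UNCERTAINTY = 0.01

    def latent_uncertainty(tpe):
        """Relative latent uncertainty under the assumed Poisson-like scaling."""
        return REF_UNCERTAINTY * math.sqrt(REF_TPE / tpe)

    def tpe_for_target(target):
        """Invert the scaling to size the track bank for a target uncertainty."""
        return math.ceil(REF_TPE * (REF_UNCERTAINTY / target) ** 2)

    for tpe in (15_000, 30_000, 60_000, 120_000):
        print(f"{tpe:>7} TPE -> ~{100 * latent_uncertainty(tpe):.2f}% latent uncertainty")
    print("TPE needed for 0.5%:", tpe_for_target(0.005))
    ```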

  6. Latent uncertainties of the precalculated track Monte Carlo method.

    PubMed

    Renaud, Marc-André; Roberge, David; Seuntjens, Jan

    2015-01-01

    While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤ 1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed a 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.

  7. CatReg Software for Categorical Regression Analysis (May 2016)

    EPA Science Inventory

    CatReg 3.0 is a Microsoft Windows enhanced version of the Agency’s categorical regression analysis (CatReg) program. CatReg complements EPA’s existing Benchmark Dose Software (BMDS) by greatly enhancing a risk assessor’s ability to determine whether data from separate toxicologic...

  8. Potential uncertainty reduction in model-averaged benchmark dose estimates informed by an additional dose study.

    PubMed

    Shao, Kan; Small, Mitchell J

    2011-10-01

    A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
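
    A minimal sketch of the model-averaging step described above, assuming posterior BMD samples (e.g., from MCMC fits of a logistic and a quantal-linear model) and posterior model probabilities are already available. The samples, weights, and interval summaries below are placeholders, not results from the paper.

    ```python
    # Bayesian model averaging of BMD posteriors from two dose-response models.
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical posterior BMD samples from each model's MCMC fit
    bmd_logistic = rng.lognormal(np.log(12.0), 0.25, 20_000)
    bmd_qlinear = rng.lognormal(np.log(8.0), 0.40, 20_000)
    weights = (0.6, 0.4)   # assumed posterior model probabilities (logistic, quantal-linear)

    # Draw from the BMA mixture: pick a model per draw, then a BMD sample from it
    pick = rng.random(20_000) < weights[0]
    mix = np.where(pick, rng.choice(bmd_logistic, 20_000), rng.choice(bmd_qlinear, 20_000))

    bmd = np.median(mix)
    bmdl = np.percentile(mix, 5)                       # one-sided 95% lower bound
    width = np.percentile(mix, 95) - np.percentile(mix, 5)
    print(f"BMA BMD = {bmd:.1f}, BMDL = {bmdl:.1f}, 90% interval width = {width:.1f}")
    ```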

  9. A Web-Based System for Bayesian Benchmark Dose Estimation.

    PubMed

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.

  10. Patient radiation doses in interventional cardiology in the U.S.: Advisory data sets and possible initial values for U.S. reference levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Donald L.; Hilohi, C. Michael; Spelic, David C.

    2012-10-15

    Purpose: To determine patient radiation doses from interventional cardiology procedures in the U.S. and to suggest possible initial values for U.S. benchmarks for patient radiation dose from selected interventional cardiology procedures [fluoroscopically guided diagnostic cardiac catheterization and percutaneous coronary intervention (PCI)]. Methods: Patient radiation dose metrics were derived from analysis of data from the 2008 to 2009 Nationwide Evaluation of X-ray Trends (NEXT) survey of cardiac catheterization. This analysis used deidentified data and did not require review by an IRB. Data from 171 facilities in 30 states were analyzed. The distributions (percentiles) of radiation dose metrics were determined for diagnostic cardiac catheterizations, PCI, and combined diagnostic and PCI procedures. Confidence intervals for these dose distributions were determined using bootstrap resampling. Results: Percentile distributions (advisory data sets) and possible preliminary U.S. reference levels (based on the 75th percentile of the dose distributions) are provided for cumulative air kerma at the reference point (Ka,r), cumulative air kerma-area product (PKA), fluoroscopy time, and number of cine runs. Dose distributions are sufficiently detailed to permit dose audits as described in National Council on Radiation Protection and Measurements Report No. 168. Fluoroscopy times are consistent with those observed in European studies, but PKA is higher in the U.S. Conclusions: Sufficient data exist to suggest possible initial benchmarks for patient radiation dose for certain interventional cardiology procedures in the U.S. Our data suggest that patient radiation dose in these procedures is not optimized in U.S. practice.
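
    A short sketch of the reference-level derivation described above: take the 75th percentile of a dose-metric distribution and attach a bootstrap confidence interval. The Ka,r values are simulated placeholders rather than NEXT survey data.

    ```python
    # 75th-percentile reference level with a bootstrap confidence interval.
    import numpy as np

    rng = np.random.default_rng(0)
    ka_r = rng.lognormal(mean=0.0, sigma=0.8, size=500)   # hypothetical K_a,r values (Gy)

    ref_level = np.percentile(ka_r, 75)
    boot = [np.percentile(rng.choice(ka_r, ka_r.size, replace=True), 75) for _ in range(5000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"Reference level (75th percentile) = {ref_level:.2f} Gy (95% CI {lo:.2f}-{hi:.2f})")
    ```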

  11. APPLICATION OF BENCHMARK DOSE METHODOLOGY TO DATA FROM PRENATAL DEVELOPMENTAL TOXICITY STUDIES

    EPA Science Inventory

    The benchmark dose (BMD) concept was applied to 246 conventional developmental toxicity datasets from government, industry and commercial laboratories. Five modeling approaches were used, two generic and three specific to developmental toxicity (DT models). BMDs for both quantal ...

  12. Linking log files with dosimetric accuracy--A multi-institutional study on quality assurance of volumetric modulated arc therapy.

    PubMed

    Pasler, Marlies; Kaas, Jochem; Perik, Thijs; Geuze, Job; Dreindl, Ralf; Künzler, Thomas; Wittkamper, Frits; Georg, Dietmar

    2015-12-01

    To systematically evaluate machine specific quality assurance (QA) for volumetric modulated arc therapy (VMAT) based on log files by applying a dynamic benchmark plan. A VMAT benchmark plan was created and tested on 18 Elekta linacs (13 MLCi or MLCi2, 5 Agility) at 4 different institutions. Linac log files were analyzed and a delivery robustness index was introduced. For dosimetric measurements an ionization chamber array was used. Relative dose deviations were assessed by mean gamma for each control point and compared to the log file evaluation. Fourteen linacs delivered the VMAT benchmark plan, while 4 linacs failed by consistently terminating the delivery. The mean leaf error (±1SD) was 0.3±0.2 mm for all linacs. Large MLC maximum errors up to 6.5 mm were observed at reversal positions. Delivery robustness index accounting for MLC position correction (0.8-1.0) correlated with delivery time (80-128 s) and depended on dose rate performance. Dosimetric evaluation indicated in general accurate plan reproducibility with γ(mean)(±1 SD)=0.4±0.2 for 1 mm/1%. However single control point analysis revealed larger deviations and attributed well to log file analysis. The designed benchmark plan helped identify linac related malfunctions in dynamic mode for VMAT. Log files serve as an important additional QA measure to understand and visualize dynamic linac parameters. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Categorical Regression and Benchmark Dose Software 3.0

    EPA Science Inventory

    The objective of this full-day course is to provide participants with interactive training on the use of the U.S. Environmental Protection Agency’s (EPA) Benchmark Dose software (BMDS, version 3.0, released fall 2018) and Categorical Regression software (CatReg, version 3.1...

  14. A Matter of Timing: Identifying Significant Multi-Dose Radiotherapy Improvements by Numerical Simulation and Genetic Algorithm Search

    PubMed Central

    Angus, Simon D.; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear, search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) on tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile and highly cost-effective means of significantly improving clinical efficacy. PMID:25460164

  15. A matter of timing: identifying significant multi-dose radiotherapy improvements by numerical simulation and genetic algorithm search.

    PubMed

    Angus, Simon D; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear, search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) on tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile and highly cost-effective means of significantly improving clinical efficacy.

  16. Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, J. Allen, E-mail: davis.allen@epa.gov; Gift, Jeffrey S.; Zhao, Q. Jay

    2011-07-15

    Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial limitations that have been well defined, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, the NOAEL approach fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addresses many of the limitations of the NOAEL method. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of a BMD 95% lower bound confidence limit (BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations world-wide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.
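
    A hedged sketch of the core BMD calculation for quantal data along the lines discussed above: fit a simple log-logistic dose-response by maximum likelihood and invert it at a 10% extra risk. The data and the parameterization are illustrative assumptions; BMDS additionally profiles the likelihood to obtain the BMDL, which is not reproduced here.

    ```python
    # Log-logistic fit by maximum likelihood and BMD at a 10% extra risk (toy data).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    doses = np.array([0.0, 10.0, 30.0, 100.0])
    n = np.array([50, 50, 50, 50])
    y = np.array([1, 5, 14, 33])            # responders, hypothetical
    bmr = 0.10

    def prob(params, d):
        g, a, b = params                    # background (on logit scale), intercept, slope
        p = np.full_like(d, expit(g))
        pos = d > 0
        p[pos] = expit(g) + (1 - expit(g)) * expit(a + b * np.log(d[pos]))
        return p

    def neg_log_lik(params):
        p = np.clip(prob(params, doses), 1e-9, 1 - 1e-9)
        return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

    fit = minimize(neg_log_lik, x0=[-3.0, -5.0, 1.0], method="Nelder-Mead")
    g, a, b = fit.x
    # Extra risk equals bmr when expit(a + b*log d) = bmr, i.e. d = exp((logit(bmr) - a)/b)
    bmd = np.exp((np.log(bmr / (1 - bmr)) - a) / b)
    print(f"BMD (10% extra risk) = {bmd:.1f}")
    ```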

  17. Properties of model-averaged BMDLs: a study of model averaging in dichotomous response risk estimation.

    PubMed

    Wheeler, Matthew W; Bailer, A John

    2007-06-01

    Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO2 dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
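
    A compact sketch of the averaged-model step described above: combine two fitted quantal models into one averaged dose-response curve using information-criterion weights (AIC-type weights are an assumption here) and read the BMD off that curve. The fitted parameters and AIC values are hypothetical; the paper obtains the BMDL by repeating the whole procedure on bootstrap resamples, which is omitted.

    ```python
    # Averaged dose-response curve from two fitted quantal models, then BMD inversion.
    import numpy as np

    def quantal_linear(d, g=0.05, b=0.02):
        return g + (1 - g) * (1 - np.exp(-b * d))

    def logistic(d, a=-3.0, b=0.04):
        return 1.0 / (1.0 + np.exp(-(a + b * d)))

    aic = {"quantal-linear": 112.4, "logistic": 110.1}      # hypothetical fit results
    rel = {m: np.exp(-0.5 * (v - min(aic.values()))) for m, v in aic.items()}
    w = {m: v / sum(rel.values()) for m, v in rel.items()}

    doses = np.linspace(0.0, 300.0, 3001)
    p_avg = w["quantal-linear"] * quantal_linear(doses) + w["logistic"] * logistic(doses)

    bmr = 0.10
    target = p_avg[0] + bmr * (1 - p_avg[0])                # extra-risk definition
    bmd = doses[np.argmax(p_avg >= target)]
    print({m: round(v, 2) for m, v in w.items()}, f"MA-BMD = {bmd:.1f}")
    ```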

  18. Mechanism-based risk assessment strategy for drug-induced cholestasis using the transcriptional benchmark dose derived by toxicogenomics.

    PubMed

    Kawamoto, Taisuke; Ito, Yuichi; Morita, Osamu; Honda, Hiroshi

    2017-01-01

    Cholestasis is one of the major causes of drug-induced liver injury (DILI), which can result in withdrawal of approved drugs from the market. Early identification of cholestatic drugs is difficult due to the complex mechanisms involved. In order to develop a strategy for mechanism-based risk assessment of cholestatic drugs, we analyzed gene expression data obtained from the livers of rats that had been orally administered with 12 known cholestatic compounds repeatedly for 28 days at three dose levels. Qualitative analyses were performed using two statistical approaches (hierarchical clustering and principal component analysis), in addition to pathway analysis. The transcriptional benchmark dose (tBMD) and tBMD 95% lower limit (tBMDL) were used for quantitative analyses, which revealed three compound sub-groups that produced different types of differential gene expression; these groups of genes were mainly involved in inflammation, cholesterol biosynthesis, and oxidative stress. Furthermore, the tBMDL values for each test compound were in good agreement with the relevant no observed adverse effect level. These results indicate that our novel strategy for drug safety evaluation using mechanism-based classification and tBMDL would facilitate the application of toxicogenomics for risk assessment of cholestatic DILI.

  19. Avoiding Pitfalls in the Use of the Benchmark Dose Approach to Chemical Risk Assessments; Some Illustrative Case Studies (Presentation)

    EPA Science Inventory

    The USEPA's benchmark dose software (BMDS) version 1.2 has been available over the Internet since April, 2000 (epa.gov/ncea/bmds.htm), and has already been used in risk assessments of some significant environmental pollutants (e.g., diesel exhaust, dichloropropene, hexachlorocycl...

  20. Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.

    2007-03-01

    In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.

  1. Marginal Iodide Deficiency and Thyroid Function: Dose-response analysis for quantitative pharmacokinetic modeling

    EPA Science Inventory

    Severe iodine deficiency is known to cause adverse health outcomes and remains a benchmark for understanding the effects of hypothyroidism. However, the implications of marginal iodine deficiency on function of the thyroid axis remain less well known. The current study examined t...

  2. Benchmark Dose for Urinary Cadmium based on a Marker of Renal Dysfunction: A Meta-Analysis

    PubMed Central

    Woo, Hae Dong; Chiu, Weihsueh A.; Jo, Seongil; Kim, Jeongseon

    2015-01-01

    Background: Low doses of cadmium can cause adverse health effects. Benchmark dose (BMD) and the one-sided 95% lower confidence limit of BMD (BMDL) to derive points of departure for urinary cadmium exposure have been estimated in several previous studies, but the methods to derive BMD and the estimated BMDs differ. Objectives: We aimed to find the associated factors that affect BMD calculation in the general population, and to estimate the summary BMD for urinary cadmium using reported BMDs. Methods: A meta-regression was performed and the pooled BMD/BMDL was estimated using studies reporting a BMD and BMDL, weighted by sample size, that were calculated from individual data based on markers of renal dysfunction. Results: BMDs were highly heterogeneous across studies. Meta-regression analysis showed that a significant predictor of BMD was the cut-off point which denotes an abnormal level. Using the 95th percentile as a cut-off, BMD5/BMDL5 estimates for a 5% benchmark response (BMR) of β2-microglobulinuria (β2-MG) were 6.18/4.88 μg/g creatinine in conventional quantal analysis and 3.56/3.13 μg/g creatinine in the hybrid approach, and BMD5/BMDL5 estimates for a 5% BMR of N-acetyl-β-d-glucosaminidase (NAG) were 10.31/7.61 μg/g creatinine in quantal analysis and 3.21/2.24 g/g creatinine in the hybrid approach. However, the meta-regression showed that BMD and BMDL were significantly associated with the cut-off point, but BMD calculation method did not significantly affect the results. The urinary cadmium BMDL5 of β2-MG was 1.9 μg/g creatinine in the lowest cut-off point group. Conclusion: The BMD was significantly associated with the cut-off point defining the abnormal level of renal dysfunction markers. PMID:25970611
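
    The sample-size-weighted pooling used for the summary estimates above amounts to a weighted mean, sketched below; the study values and sizes are invented placeholders, not the studies included in the meta-analysis.

    ```python
    # Sample-size-weighted pooled BMD5/BMDL5 (hypothetical study-level inputs).
    import numpy as np

    studies = [
        # (BMD5, BMDL5, sample size), values in µg/g creatinine, all hypothetical
        (6.2, 4.9, 820),
        (3.6, 3.1, 410),
        (10.3, 7.6, 1500),
        (3.2, 2.2, 260),
    ]
    bmd, bmdl, n = (np.array(col, dtype=float) for col in zip(*studies))

    pooled_bmd = np.sum(n * bmd) / np.sum(n)
    pooled_bmdl = np.sum(n * bmdl) / np.sum(n)
    print(f"Pooled BMD5 = {pooled_bmd:.1f}, pooled BMDL5 = {pooled_bmdl:.1f} µg/g creatinine")
    ```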

  3. Benchmark solutions for the galactic ion transport equations: Energy and spatially dependent problems

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry D.; Townsend, Lawrence W.; Wilson, John W.

    1989-01-01

    Nontrivial benchmark solutions are developed for the galactic ion transport (GIT) equations in the straight-ahead approximation. These equations are used to predict potential radiation hazards in the upper atmosphere and in space. Two levels of difficulty are considered: (1) energy independent, and (2) spatially independent. The analysis emphasizes analytical methods never before applied to the GIT equations. Most of the representations derived have been numerically implemented and compared to more approximate calculations. Accurate ion fluxes are obtained (3 to 5 digits) for nontrivial sources. For monoenergetic beams, both accurate doses and fluxes are found. The benchmarks presented are useful in assessing the accuracy of transport algorithms designed to accommodate more complex radiation protection problems. In addition, these solutions can provide fast and accurate assessments of relatively simple shield configurations.

  4. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study.

    PubMed

    De Bondt, Timo; Mulkens, Tom; Zanca, Federica; Pyfferoen, Lotte; Casselman, Jan W; Parizel, Paul M

    2017-02-01

    To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, amount of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with lowest dose levels showed smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age-group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. • Significant differences were observed in the delivered dose between age-groups and hospitals. • Using age-adapted scanning protocols gives a nearly linear dose increase. • Sharing dose-data can be a trigger for hospitals to reduce dose levels.

  5. Dose specification for hippocampal sparing whole brain radiotherapy (HS WBRT): considerations from the UK HIPPO trial QA programme.

    PubMed

    Megias, Daniel; Phillips, Mark; Clifton-Hadley, Laura; Harron, Elizabeth; Eaton, David J; Sanghera, Paul; Whitfield, Gillian

    2017-03-01

    The HIPPO trial is a UK randomized Phase II trial of hippocampal sparing (HS) vs conventional whole-brain radiotherapy after surgical resection or radiosurgery in patients with favourable prognosis with 1-4 brain metastases. Each participating centre completed a planning benchmark case as part of the dedicated radiotherapy trials quality assurance programme (RTQA), promoting the safe and effective delivery of HS intensity-modulated radiotherapy (IMRT) in a multicentre trial setting. Submitted planning benchmark cases were reviewed using visualization for radiotherapy software (VODCA) evaluating plan quality and compliance in relation to the HIPPO radiotherapy planning and delivery guidelines. Comparison of the planning benchmark data highlighted a plan specified using dose to medium as an outlier by comparison with those specified using dose to water. Further evaluation identified that the reported plan statistics for dose to medium were lower as a result of the dose calculated at regions of PTV inclusive of bony cranium being lower relative to brain. Specification of dose to water or medium remains a source of potential ambiguity and it is essential that as part of a multicentre trial, consideration is given to reported differences, particularly in the presence of bone. Evaluation of planning benchmark data as part of an RTQA programme has highlighted an important feature of HS IMRT dosimetry dependent on dose being specified to water or medium, informing the development and undertaking of HS IMRT as part of the HIPPO trial. Advances in knowledge: The potential clinical impact of differences between dose to medium and dose to water are demonstrated for the first time, in the setting of HS whole-brain radiotherapy.

  6. Meeting The Joint Commission's Dose Incident Identification and External Benchmarking Requirements Using the ACR's Dose Index Registry.

    PubMed

    Bohl, Michael A; Goswami, Roopa; Strassner, Brett; Stanger, Paula

    2016-08-01

    The purpose of this investigation was to evaluate the potential of using the ACR's Dose Index Registry® to meet The Joint Commission's requirements to identify incidents in which the radiation dose index from diagnostic CT examinations exceeded the protocol's expected dose index range. In total, 10,970 records in the Dose Index Registry were statistically analyzed to establish both an upper and lower expected dose index for each protocol. All 2015 studies to date were then retrospectively reviewed to identify examinations whose total examination dose index exceeded the protocol's defined upper threshold. Each dose incident was then logged and reviewed per the new Joint Commission requirements. Facilities may leverage their participation in the ACR's Dose Index Registry to fully meet The Joint Commission's dose incident identification review and external benchmarking requirements. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  7. Demonstration of a software design and statistical analysis methodology with application to patient outcomes data sets

    PubMed Central

    Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard

    2013-01-01

    Purpose: With the emergence of clinical outcomes databases as tools utilized routinely within institutions comes the need for software tools to support automated statistical analysis of these large data sets and intrainstitutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. Methods: A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code were evaluated using benchmark data sets. Results: The approach provides the data needed to evaluate combinations of statistical measurements for their ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operating characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. Conclusions: The work demonstrates the viability of the design approach and the software tool for analysis of large data sets. PMID:24320426

  8. Demonstration of a software design and statistical analysis methodology with application to patient outcomes data sets.

    PubMed

    Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard

    2013-11-01

    With the emergence of clinical outcomes databases as tools utilized routinely within institutions comes the need for software tools to support automated statistical analysis of these large data sets and intrainstitutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code were evaluated using benchmark data sets. The approach provides the data needed to evaluate combinations of statistical measurements for their ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operating characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. The work demonstrates the viability of the design approach and the software tool for analysis of large data sets.
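
    The original tool combined C#.Net and R; the following is only a Python sketch (using SciPy) of the same screening idea on synthetic data, with a fixed candidate threshold standing in for one that would come from an ROC analysis. All data and parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-patient data: a dose metric and a binary outcome (e.g. toxicity).
dose = rng.normal(20, 5, 200)
outcome = rng.random(200) < 1 / (1 + np.exp(-(dose - 22) / 2))  # synthetic dose-response

# Welch t-test and Kolmogorov-Smirnov test: do dose distributions differ by outcome?
t_p = stats.ttest_ind(dose[outcome], dose[~outcome], equal_var=False).pvalue
ks_p = stats.ks_2samp(dose[outcome], dose[~outcome]).pvalue

# Dichotomize dose at a candidate threshold and build a 2x2 contingency table
# for a Fisher exact test (the threshold itself would come from an ROC analysis).
threshold = 22.0
table = [
    [np.sum((dose >= threshold) & outcome), np.sum((dose >= threshold) & ~outcome)],
    [np.sum((dose < threshold) & outcome), np.sum((dose < threshold) & ~outcome)],
]
_, fisher_p = stats.fisher_exact(table)

print(f"Welch p = {t_p:.3g}, KS p = {ks_p:.3g}, Fisher p = {fisher_p:.3g}")
```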

  9. ORANGE: a Monte Carlo dose engine for radiotherapy.

    PubMed

    van der Zee, W; Hogenbirk, A; van der Marck, S C

    2005-02-21

    This study presents data for the verification of ORANGE, a fast MCNP-based dose engine for radiotherapy treatment planning. In order to verify the new algorithm, it has been benchmarked against DOSXYZ and against measurements. For the benchmarking, calculations were first done using the ICCR-XIII benchmark. Next, calculations were done with DOSXYZ and ORANGE in five different phantoms (one homogeneous, two with bone-equivalent inserts and two with lung-equivalent inserts). The calculations were done with two mono-energetic photon beams (2 MeV and 6 MeV) and two mono-energetic electron beams (10 MeV and 20 MeV). Comparison of the calculated data (from DOSXYZ and ORANGE) against measurements was possible only for a realistic 10 MV photon beam and a realistic 15 MeV electron beam in a homogeneous phantom. For the comparison of the calculated dose distributions with each other and with measurements, the concept of the confidence limit (CL) has been used. This concept reduces the difference between two data sets to a single number, which gives the deviation for 90% of the dose distributions. Using this concept, it was found that ORANGE was always within the statistical bandwidth of DOSXYZ and the measurements. The ICCR-XIII benchmark showed that ORANGE is seven times faster than DOSXYZ, a result comparable with other accelerated Monte Carlo dose systems when no variance reduction is used. As shown for XVMC, using variance reduction techniques has the potential for further acceleration. Using modern computer hardware, this brings the total calculation time for a dose distribution with 1.5% (statistical) accuracy within the clinical range (less than 10 min). This means that ORANGE can be a candidate for a dose engine in radiotherapy treatment planning.
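
    The confidence limit concept mentioned above is often formulated as the absolute mean point-wise difference plus 1.5 times its standard deviation, which bounds roughly 90% of deviations when the differences are approximately normal. The sketch below assumes that formulation and uses synthetic depth-dose curves; the exact definition used in the ORANGE study is not given in the abstract.

```python
import numpy as np

def confidence_limit(dose_a, dose_b):
    """Reduce the point-wise differences between two dose distributions to one number.
    Assumed formulation: CL = |mean difference| + 1.5 * SD of the differences."""
    diff = np.asarray(dose_a) - np.asarray(dose_b)
    return abs(diff.mean()) + 1.5 * diff.std()

# Hypothetical percentage-depth-dose curves from two calculations.
rng = np.random.default_rng(1)
depth = np.linspace(0, 20, 50)                       # depth (cm)
pdd_ref = 100 * np.exp(-0.05 * depth)                # reference calculation (%)
pdd_test = pdd_ref + rng.normal(0, 0.8, depth.size)  # test calculation (%)

print(f"CL = {confidence_limit(pdd_test, pdd_ref):.2f} % of dose")
```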

  10. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujii, K; UCLA School of Medicine, Los Angeles, CA; Bostani, M

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31st, 2014. Four of the scanners were used for inpatients, the other five for outpatients. All scanners used tube current modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks.
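
    A minimal sketch of the pooled versus protocol-specific comparison described above. The synthetic distributions are only loosely inspired by the reported means and standard deviations; sample sizes and protocol names are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Hypothetical CTDIvol samples (mGy) for three head protocols.
data = pd.DataFrame({
    "protocol": ["routine_brain"] * 500 + ["sinus"] * 200 + ["temporal_bone"] * 100,
    "ctdi_vol": np.concatenate([
        rng.normal(51, 6, 500),
        rng.normal(24, 3, 200),
        rng.normal(60, 8, 100),
    ]),
})

# Pooled benchmark: one mean/SD across all head exams (wide spread).
pooled = data["ctdi_vol"].agg(["mean", "std", "min", "max"])

# Protocol-specific benchmarks: much tighter distributions per protocol.
by_protocol = data.groupby("protocol")["ctdi_vol"].agg(["mean", "std", "min", "max"])

print(pooled.round(1))
print(by_protocol.round(1))
```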

  11. High-energy neutron depth-dose distribution experiment.

    PubMed

    Ferenci, M S; Hertel, N E

    2003-01-01

    A unique set of high-energy neutron depth-dose benchmark experiments was performed at the Los Alamos Neutron Science Center/Weapons Neutron Research (LANSCE/WNR) complex. The experiments consisted of filtered neutron beams with energies up to 800 MeV impinging on a 30 × 30 × 30 cm³ liquid, tissue-equivalent phantom. The absorbed dose was measured in the phantom at various depths with tissue-equivalent ion chambers. This experiment is intended to serve as a benchmark for the testing of high-energy radiation transport codes for the international radiation protection community.

  12. Optimizing Radiation Doses for Computed Tomography Across Institutions: Dose Auditing and Best Practices.

    PubMed

    Demb, Joshua; Chu, Philip; Nelson, Thomas; Hall, David; Seibert, Anthony; Lamba, Ramit; Boone, John; Krishnam, Mayil; Cagnon, Christopher; Bostani, Maryam; Gould, Robert; Miglioretti, Diana; Smith-Bindman, Rebecca

    2017-06-01

    Radiation doses for computed tomography (CT) vary substantially across institutions. To assess the impact of institutional-level audit and collaborative efforts to share best practices on CT radiation doses across 5 University of California (UC) medical centers. In this before/after interventional study, we prospectively collected radiation dose metrics on all diagnostic CT examinations performed between October 1, 2013, and December 31, 2014, at 5 medical centers. Using data from January to March (baseline), we created audit reports detailing the distribution of radiation dose metrics for chest, abdomen, and head CT scans. In April, we shared reports with the medical centers and invited radiology professionals from the centers to a 1.5-day in-person meeting to review reports and share best practices. We calculated changes in mean effective dose 12 weeks before and after the audits and meeting, excluding a 12-week implementation period when medical centers could make changes. We compared proportions of examinations exceeding previously published benchmarks at baseline and following the audit and meeting, and calculated changes in proportion of examinations exceeding benchmarks. Of 158 274 diagnostic CT scans performed in the study period, 29 594 CT scans were performed in the 3 months before and 32 839 CT scans were performed 12 to 24 weeks after the audit and meeting. Reductions in mean effective dose were considerable for chest and abdomen. Mean effective dose for chest CT decreased from 13.2 to 10.7 mSv (18.9% reduction; 95% CI, 18.0%-19.8%). Reductions at individual medical centers ranged from 3.8% to 23.5%. The mean effective dose for abdominal CT decreased from 20.0 to 15.0 mSv (25.0% reduction; 95% CI, 24.3%-25.8%). Reductions at individual medical centers ranged from 10.8% to 34.7%. The number of CT scans that had an effective dose measurement that exceeded benchmarks was reduced considerably by 48% and 54% for chest and abdomen, respectively. After the audit and meeting, head CT doses varied less, although some institutions increased and some decreased mean head CT doses and the proportion above benchmarks. Reviewing institutional doses and sharing dose-optimization best practices resulted in lower radiation doses for chest and abdominal CT and more consistent doses for head CT.
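
    The audit metrics reported above (percentage reduction in mean effective dose and proportion of examinations above a benchmark) are simple to reproduce. The sketch below uses synthetic lognormal doses that only loosely echo the reported chest CT change; the benchmark value and distribution parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical effective doses (mSv) for chest CT before and after the intervention.
before = rng.lognormal(mean=np.log(13.0), sigma=0.40, size=5000)
after = rng.lognormal(mean=np.log(10.5), sigma=0.35, size=5000)

benchmark = 21.0  # hypothetical effective-dose benchmark for chest CT (mSv)

reduction = 100 * (before.mean() - after.mean()) / before.mean()
exceed_before = 100 * np.mean(before > benchmark)
exceed_after = 100 * np.mean(after > benchmark)

print(f"Mean dose: {before.mean():.1f} -> {after.mean():.1f} mSv ({reduction:.1f}% reduction)")
print(f"Exams above benchmark: {exceed_before:.1f}% -> {exceed_after:.1f}%")
```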

  13. Benchmarking B-Cell Epitope Prediction with Quantitative Dose-Response Data on Antipeptide Antibodies: Towards Novel Pharmaceutical Product Development

    PubMed Central

    Caoili, Salvador Eugenio C.

    2014-01-01

    B-cell epitope prediction can enable novel pharmaceutical product development. However, a mechanistically framed consensus has yet to emerge on benchmarking such prediction, thus presenting an opportunity to establish standards of practice that circumvent epistemic inconsistencies of casting the epitope prediction task as a binary-classification problem. As an alternative to conventional dichotomous qualitative benchmark data, quantitative dose-response data on antibody-mediated biological effects are more meaningful from an information-theoretic perspective in the sense that such effects may be expressed as probabilities (e.g., of functional inhibition by antibody) for which the Shannon information entropy (SIE) can be evaluated as a measure of informativeness. Accordingly, half-maximal biological effects (e.g., at median inhibitory concentrations of antibody) correspond to maximally informative data while undetectable and maximal biological effects correspond to minimally informative data. This applies to benchmarking B-cell epitope prediction for the design of peptide-based immunogens that elicit antipeptide antibodies with functionally relevant cross-reactivity. Presently, the Immune Epitope Database (IEDB) contains relatively few quantitative dose-response data on such cross-reactivity. Only a small fraction of these IEDB data is maximally informative, and many more of them are minimally informative (i.e., with zero SIE). Nevertheless, the numerous qualitative data in IEDB suggest how to overcome the paucity of informative benchmark data. PMID:24949474
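
    The informativeness argument above rests on the binary Shannon entropy of the effect probability, which is maximal at a half-maximal effect and zero at undetectable or complete effects. A minimal worked sketch:

```python
import numpy as np

def shannon_entropy(p):
    """Binary Shannon information entropy (in bits) of an effect probability p,
    e.g. the probability of functional inhibition by an antipeptide antibody."""
    if p in (0.0, 1.0):  # define 0 * log2(0) = 0
        return 0.0
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"p = {p:0.1f}  ->  H = {shannon_entropy(p):.3f} bits")
# Half-maximal effects (p = 0.5) are maximally informative (1 bit), while
# undetectable (p = 0) or maximal (p = 1) effects carry zero entropy.
```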

  14. SU-E-T-466: Implementation of An Extension Module for Dose Response Models in the TOPAS Monte Carlo Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Mendez, J; Faddegon, B; Perl, J

    2015-06-15

    Purpose: To develop and verify an extension to TOPAS for calculation of dose response models (TCP/NTCP). TOPAS wraps and extends Geant4. Methods: The TOPAS DICOM interface was extended to include structure contours, for subsequent calculation of DVHs and TCP/NTCP. The following dose response models were implemented: Lyman-Kutcher-Burman (LKB), critical element (CE), population based critical volume (CV), parallel-serial, a sigmoid-based model of Niemierko for NTCP and TCP, and a Poisson-based model for TCP. For verification, results for the parallel-serial and Poisson models, with 6 MV x-ray dose distributions calculated with TOPAS and Pinnacle v9.2, were compared to data from the benchmark configuration of the AAPM Task Group 166 (TG166). We provide a benchmark configuration suitable for proton therapy along with results for the implementation of the Niemierko, CV and CE models. Results: The maximum difference in DVH calculated with Pinnacle and TOPAS was 2%. Differences between TG166 data and Monte Carlo calculations of up to 4.2%±6.1% were found for the parallel-serial model and up to 1.0%±0.7% for the Poisson model (including the uncertainty due to lack of knowledge of the point spacing in TG166). For the CE, CV and Niemierko models, the discrepancies between the Pinnacle and TOPAS results are 74.5%, 34.8% and 52.1% when using 29.7 cGy point spacing, the differences being highly sensitive to dose spacing. On the other hand, with our proposed benchmark configuration, the largest differences were 12.05%±0.38%, 3.74%±1.6%, 1.57%±4.9% and 1.97%±4.6% for the CE, CV, Niemierko and LKB models, respectively. Conclusion: Several dose response models were successfully implemented with the extension module. Reference data were calculated for future benchmarking. Dose response calculated for the different models varied much more widely for the TG166 benchmark than for the proposed benchmark, which had much lower sensitivity to the choice of DVH dose points. This work was supported by National Cancer Institute Grant R01CA140735.
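
    Of the models named above, the LKB model has a compact closed form when evaluated through a generalized-EUD volume reduction. The sketch below is not the TOPAS extension itself, only an illustration of that model; the DVH bins and parameter values are purely hypothetical.

```python
import numpy as np
from scipy.stats import norm

def geud(doses, volumes, a):
    """Generalized equivalent uniform dose from a differential DVH
    (doses in Gy, volumes as fractions that are normalized to sum to 1)."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()
    return float(np.sum(v * np.asarray(doses, dtype=float) ** a) ** (1.0 / a))

def lkb_ntcp(doses, volumes, td50, m, a):
    """Lyman-Kutcher-Burman NTCP evaluated via the gEUD reduction."""
    eud = geud(doses, volumes, a)
    t = (eud - td50) / (m * td50)
    return float(norm.cdf(t))

# Hypothetical differential DVH for an organ at risk and illustrative parameters.
dvh_doses = np.array([5.0, 15.0, 25.0, 35.0, 45.0])    # Gy bin centres
dvh_volumes = np.array([0.40, 0.25, 0.15, 0.12, 0.08])  # fractional volumes
print(f"NTCP = {lkb_ntcp(dvh_doses, dvh_volumes, td50=30.0, m=0.15, a=3.0):.3f}")
```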

  15. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Hallaq, Hania A., E-mail: halhallaq@radonc.uchicago.edu; Chmura, Steven J.; Salama, Joseph K.

    Purpose: The NRG-BR001 trial is the first National Cancer Institute–sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. Methods and Materials: The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Results: Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Conclusions: Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements.

  16. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases.

    PubMed

    Al-Hallaq, Hania A; Chmura, Steven J; Salama, Joseph K; Lowenstein, Jessica R; McNulty, Susan; Galvin, James M; Followill, David S; Robinson, Clifford G; Pisansky, Thomas M; Winter, Kathryn A; White, Julia R; Xiao, Ying; Matuszak, Martha M

    2017-01-01

    The NRG-BR001 trial is the first National Cancer Institute-sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. [Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy].

    PubMed

    Renner, Franziska

    2016-09-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.

  18. Comparative Benchmark Dose Modeling as a Tool to Make the First Estimate of Safe Human Exposure Levels to Lunar Dust

    NASA Technical Reports Server (NTRS)

    James, John T.; Lam, Chiu-wing; Scully, Robert R.

    2013-01-01

    Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. Habitats for exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. We have used a new technique we call Comparative Benchmark Dose Modeling to estimate safe exposure limits for lunar dust collected during the Apollo 14 mission.

  19. What Pertussis Mortality Rates Make Maternal Acellular Pertussis Immunization Cost-Effective in Low- and Middle-Income Countries? A Decision Analysis

    PubMed Central

    Russell, Louise B.; Pentakota, Sri Ram; Toscano, Cristiana Maria; Cosgriff, Ben; Sinha, Anushua

    2016-01-01

    Background. Despite longstanding infant vaccination programs in low- and middle-income countries (LMICs), pertussis continues to cause deaths in the youngest infants. A maternal monovalent acellular pertussis (aP) vaccine, in development, could prevent many of these deaths. We estimated infant pertussis mortality rates at which maternal vaccination would be a cost-effective use of public health resources in LMICs. Methods. We developed a decision model to evaluate the cost-effectiveness of maternal aP immunization plus routine infant vaccination vs routine infant vaccination alone in Bangladesh, Nigeria, and Brazil. For a range of maternal aP vaccine prices, one-way sensitivity analyses identified the infant pertussis mortality rates required to make maternal immunization cost-effective by alternative benchmarks ($100, 0.5 gross domestic product [GDP] per capita, and GDP per capita per disability-adjusted life-year [DALY]). Probabilistic sensitivity analysis provided uncertainty intervals for these mortality rates. Results. Infant pertussis mortality rates necessary to make maternal aP immunization cost-effective exceed the rates suggested by current evidence except at low vaccine prices and/or cost-effectiveness benchmarks at the high end of those considered in this report. For example, at a vaccine price of $0.50/dose, pertussis mortality would need to be 0.051 per 1000 infants in Bangladesh, and 0.018 per 1000 in Nigeria, for the cost per DALY averted to equal 0.5 per capita GDP. In Brazil, a middle-income country, at a vaccine price of $4/dose, infant pertussis mortality would need to be 0.043 per 1000 for the cost per DALY averted to equal 0.5 per capita GDP. Conclusions. For commonly used cost-effectiveness benchmarks, maternal aP immunization would be cost-effective in many LMICs only if the vaccine were offered at less than $1–$2/dose. PMID:27838677

  20. Using the benchmark dose (BMD) methodology to determine an appropriate reduction of certain ingredients in food products.

    PubMed

    Bi, Jian

    2010-01-01

    As the desire to promote health increases, reductions of certain ingredients, for example, sodium, sugar, and fat in food products, are widely requested. However, such reductions are not risk free in sensory and marketing terms. Over-reduction may change the taste and flavor of a product and lead to a decrease in consumers' overall liking or purchase intent for the product. This article uses the benchmark dose (BMD) methodology to determine an appropriate reduction. Calculations of the BMD and the one-sided lower confidence limit of the BMD are illustrated. The article also discusses how to calculate the BMD and BMDL for overdispersed binary data in replicated testing based on a corrected beta-binomial model. USEPA Benchmark Dose Software (BMDS) was used and S-Plus programs were developed. The method discussed in the article can be used to determine an appropriate reduction of certain ingredients, for example, sodium, sugar, and fat in food products, considering both the health rationale and the sensory or marketing risk.
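
    A minimal sketch of the quantal BMD idea applied above, with hypothetical rejection data and a simple logistic dose-response model (one of the standard quantal forms also available in BMDS). The BMDL, which would require a profile-likelihood or bootstrap calculation, is omitted for brevity, and the data are invented.

```python
import numpy as np
from scipy.optimize import brentq, minimize
from scipy.special import expit

# Hypothetical quantal data: dose (e.g. % sodium reduction), subjects, responders.
dose = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
n = np.array([50, 50, 50, 50, 50])
k = np.array([2, 4, 9, 18, 30])  # e.g. consumers rejecting the reformulated product

def neg_log_lik(params):
    a, b = params
    p = np.clip(expit(a + b * dose), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[-3.0, 0.1], method="Nelder-Mead")
a_hat, b_hat = fit.x

def extra_risk(d):
    p0 = expit(a_hat)
    return (expit(a_hat + b_hat * d) - p0) / (1 - p0)

bmr = 0.10  # benchmark response: 10% extra risk
bmd = brentq(lambda d: extra_risk(d) - bmr, 1e-6, dose.max())
print(f"BMD (10% extra risk) ~ {bmd:.1f}")
```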

  1. Multiscale benchmarking of drug delivery vectors.

    PubMed

    Summers, Huw D; Ware, Matthew J; Majithia, Ravish; Meissner, Kenith E; Godin, Biana; Rees, Paul

    2016-10-01

    Cross-system comparisons of drug delivery vectors are essential to ensure optimal design. An in vitro experimental protocol is presented that separates the role of the delivery vector from that of its cargo in determining the cell response, thus allowing quantitative comparison of different systems. The technique is validated through benchmarking of the dose-response of human fibroblast cells exposed to the cationic molecule polyethylene imine (PEI), delivered as a free molecule and as a cargo on the surface of CdSe nanoparticles and silica microparticles. The exposure metrics are converted to a delivered dose, with the transport properties of the different scale systems characterized by a delivery time, τ. The benchmarking highlights an agglomeration of the free PEI molecules into micron-sized clusters and identifies the metric determining cell death as the total number of PEI molecules presented to cells, determined by the delivery vector dose and the surface density of the cargo. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Benchmark studies of induced radioactivity produced in LHC materials, Part II: Remanent dose rates.

    PubMed

    Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H

    2005-01-01

    A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment that was performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials were placed downstream of, and laterally to, a copper target, intercepting a positively charged mixed hadron beam with a momentum of 120 GeV c⁻¹. Emphasis was put on the reduction of uncertainties by taking measures such as careful monitoring of the irradiation parameters, using different instruments to measure dose rates, adopting detailed elemental analyses of the irradiated materials and making detailed simulations of the irradiation experiment. The measured and calculated dose rates are in good agreement.

  3. Cloud-Based CT Dose Monitoring using the DICOM-Structured Report: Fully Automated Analysis in Regard to National Diagnostic Reference Levels.

    PubMed

    Boos, J; Meineke, A; Rubbert, C; Heusch, P; Lanzman, R S; Aissa, J; Antoch, G; Kröpil, P

    2016-03-01

    To implement automated CT dose data monitoring using the DICOM Structured Report (DICOM-SR) in order to monitor dose-related CT data with regard to national diagnostic reference levels (DRLs). We used a novel in-house co-developed software tool based on the DICOM-SR to automatically monitor dose-related data from CT examinations. The DICOM-SR for each CT examination performed between 09/2011 and 03/2015 was automatically anonymized and sent from the CT scanners to a cloud server. Data were automatically analyzed according to body region, patient age and the corresponding DRLs for the volumetric computed tomography dose index (CTDIvol) and dose length product (DLP). Data from 36,523 examinations (131,527 scan series) performed on three different CT scanners and one PET/CT were analyzed. The overall mean CTDIvol and DLP were 51.3% and 52.8% of the national DRLs, respectively. CTDIvol and DLP reached 43.8% and 43.1% for abdominal CT (n=10,590), 66.6% and 69.6% for cranial CT (n=16,098) and 37.8% and 44.0% for chest CT (n=10,387) of the compared national DRLs, respectively. Overall, the CTDIvol exceeded national DRLs in 1.9% of the examinations, while the DLP exceeded national DRLs in 2.9% of the examinations. Between different CT protocols of the same body region, radiation exposure varied by up to 50% of the DRLs. The implemented cloud-based CT dose monitoring based on the DICOM-SR enables automated benchmarking with regard to national DRLs. Overall, the local dose exposure from CT reached approximately 50% of these DRLs, indicating that updating the DRLs, as well as introducing protocol-specific DRLs, is desirable. The cloud-based approach enables multi-center dose monitoring and offers great potential to further optimize radiation exposure in radiological departments. • The newly developed software based on the DICOM Structured Report enables large-scale cloud-based CT dose monitoring • The implemented software solution enables automated benchmarking with regard to national DRLs • The local radiation exposure from CT reached approximately 50% of the national DRLs • The cloud-based approach offers great potential for multi-center dose analysis. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Diagnostic reference levels of paediatric computed tomography examinations performed at a dedicated Australian paediatric hospital.

    PubMed

    Bibbo, Giovanni; Brown, Scott; Linke, Rebecca

    2016-08-01

    Diagnostic reference levels (DRLs) for procedures involving ionizing radiation are important tools for optimizing the radiation doses delivered to patients and for identifying cases where dose levels are unusually high. This is particularly important for paediatric patients undergoing computed tomography (CT) examinations, as these examinations are associated with relatively high doses. Paediatric CT studies performed at our institution from January 2010 to March 2014 have been retrospectively analysed to determine the 75th and 95th percentiles of both the volume computed tomography dose index (CTDIvol) and dose-length product (DLP) for the most commonly performed studies, in order to: establish local diagnostic reference levels for paediatric computed tomography examinations performed at our institution, benchmark our DRLs with national and international published paediatric values, and determine the compliance of CT radiographers with established protocols. The derived local 75th percentile DRLs have been found to be acceptable when compared with those published by the Australian National Radiation Dose Register and two national children's hospitals, and at the international level with the National Reference Doses for the UK. The 95th percentiles of CTDIvol for the various CT examinations have been found to be acceptable values for the CT scanner Dose-Check Notification. Benchmarking CT radiographers shows that they follow the set protocols for the various examinations without significant variations in the machine setting factors. The derivation of DRLs has given us the tools to evaluate and improve the performance of our CT service through improved compliance and a reduction in radiation dose to our paediatric patients. We have also been able to benchmark our performance against similar national and international institutions. © 2016 The Royal Australian and New Zealand College of Radiologists.
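
    The percentile convention described above (75th percentile for a local DRL, 95th percentile for a notification level) is straightforward to compute once dose data are grouped. A minimal sketch on synthetic data; age groups, values, and column names are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Hypothetical paediatric head CT records: age group and DLP (mGy*cm).
records = pd.DataFrame({
    "age_group": rng.choice(["0-4 y", "5-9 y", "10-14 y"], size=600),
    "dlp": rng.lognormal(mean=np.log(400), sigma=0.35, size=600),
})

# Local DRLs are conventionally set at the 75th percentile of the local dose
# distribution; the 95th percentile can inform dose-check notification values.
drl = records.groupby("age_group")["dlp"].quantile([0.75, 0.95]).unstack()
drl.columns = ["local DRL (75th pct)", "notification level (95th pct)"]
print(drl.round(0))
```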

  5. Concordance of transcriptional and apical benchmark dose levels for conazole-induced liver effects in mice.

    PubMed

    Bhat, Virunya S; Hester, Susan D; Nesnow, Stephen; Eastmond, David A

    2013-11-01

    The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time informs mode-of-action determinations and improves quantitative risk assessments. Previous global expression profiling identified a 330-probe cluster differentially expressed and commonly responsive to 3 hepatotumorigenic conazoles (cyproconazole, epoxiconazole, and propiconazole) at 30 days. Extended to 2 more conazoles (triadimefon and myclobutanil), the present assessment encompasses 4 tumorigenic and 1 nontumorigenic conazole. Transcriptional benchmark dose levels (BMDL(T)) were estimated for a subset of the cluster with dose-responsive behavior and a ≥ 5-fold increase or decrease in signal intensity at the highest dose. These genes primarily encompassed CAR/RXR activation, P450 metabolism, liver hypertrophy-glutathione depletion, LPS/IL-1-mediated inhibition of RXR, and NRF2-mediated oxidative stress pathways. Median BMDL(T) estimates from the subset were concordant (within a factor of 2.4) with apical benchmark doses (BMDL(A)) for increased liver weight at 30 days for the 5 conazoles. The 30-day median BMDL(T) estimates were within one-half order of magnitude of the chronic BMDL(A) for hepatocellular tumors. Potency differences seen in the dose-responsive transcription of certain phase II metabolism, bile acid detoxification, and lipid oxidation genes mirrored each conazole's tumorigenic potency. The 30-day BMDL(T) corresponded to tumorigenic potency on a milligram per kilogram per day basis with cyproconazole > epoxiconazole > propiconazole > triadimefon > myclobutanil (nontumorigenic). These results support the utility of measuring short-term gene expression changes to inform quantitative risk assessments from long-term exposures.

  6. Benchmark concentrations for methyl mercury obtained from the 9-year follow-up of the Seychelles Child Development Study.

    PubMed

    van Wijngaarden, Edwin; Beck, Christopher; Shamlaye, Conrad F; Cernichiari, Elsa; Davidson, Philip W; Myers, Gary J; Clarkson, Thomas W

    2006-09-01

    Methyl mercury (MeHg) is highly toxic to the developing nervous system. Human exposure is mainly from fish consumption, since small amounts are present in all fish. Findings of developmental neurotoxicity following high-level prenatal exposure to MeHg raised the question of whether children whose mothers consumed fish contaminated with background levels during pregnancy are at an increased risk of impaired neurological function. Benchmark doses determined from studies in New Zealand, the Faroe Islands, and the Seychelles indicate that a level of 4-25 parts per million (ppm) measured in maternal hair may carry a risk to the infant. However, there are numerous sources of uncertainty that could affect the derivation of benchmark doses, and it is crucial to continue to investigate the most appropriate derivation of safe consumption levels. Earlier, we published the findings from benchmark analyses applied to the data collected on the Seychelles main cohort at the 66-month follow-up period. Here, we expand on the main cohort analyses by determining the benchmark doses (BMDs) of MeHg levels in maternal hair based on 643 Seychellois children for whom 26 different neurobehavioral endpoints were measured at 9 years of age. Dose-response models applied to these continuous endpoints incorporated a variety of covariates and included the k-power model, the Weibull model, and the logistic model. The average 95% lower confidence limit of the BMD (BMDL) across all 26 endpoints varied from 20.1 ppm (range=17.2-22.5) for the logistic model to 20.4 ppm (range=17.9-23.0) for the k-power model. These estimates are somewhat lower than those obtained after 66 months of follow-up. The Seychelles Child Development Study continues to provide a firm scientific basis for the derivation of safe levels of MeHg consumption.

  7. The current state of knowledge on the use of the benchmark dose concept in risk assessment.

    PubMed

    Sand, Salomon; Victorin, Katarina; Filipsson, Agneta Falk

    2008-05-01

    This review deals with the current state of knowledge on the use of the benchmark dose (BMD) concept in health risk assessment of chemicals. The BMD method is an alternative to the traditional no-observed-adverse-effect level (NOAEL) approach and has been presented as a methodological improvement in the field of risk assessment. The BMD method has mostly been employed in the USA but is now also receiving greater attention in Europe. The review presents a number of arguments in favor of the BMD relative to the NOAEL. In addition, it gives a detailed overview of the several procedures that have been suggested and applied for BMD analysis, for quantal as well as continuous data. For quantal data the BMD is generally defined as corresponding to an additional or extra risk of 5% or 10%. For continuous endpoints it is suggested that the BMD be defined as corresponding to a percentage change in response relative to background or relative to the dynamic range of response. Under such definitions, a 5% or 10% change can be considered as default. Besides how to define the BMD and its lower bound, the BMDL, the question of how to select the dose-response model to be used in the BMD and BMDL determination is highlighted. Issues of study design and comparison of dose-response curves and BMDs are also covered. Copyright © 2007 John Wiley & Sons, Ltd.
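
    The continuous-data definition mentioned above (BMD as the dose producing a given percentage change in response relative to background) can be illustrated with a short sketch. A power model, one of the standard continuous dose-response forms, is fitted to hypothetical group means; the BMDL would additionally require a profile-likelihood or bootstrap computation and is not shown.

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

# Hypothetical continuous dose-response data (e.g. group mean organ weight).
dose = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
response = np.array([100.0, 101.5, 104.0, 109.0, 118.0])

# Power model: mu(d) = gamma + beta * d**delta
def power_model(d, gamma, beta, delta):
    return gamma + beta * d ** delta

params, _ = curve_fit(power_model, dose, response, p0=[100.0, 1.0, 1.0],
                      bounds=([0.0, 0.0, 0.5], [np.inf, np.inf, 4.0]))
gamma_hat = params[0]

# Continuous BMD definition used here: the dose at which the response changes
# by 5% relative to the background (control) response.
bmr = 0.05
bmd = brentq(lambda d: power_model(d, *params) - gamma_hat * (1 + bmr), 1e-9, dose.max())
print(f"BMD (5% relative change) ~ {bmd:.1f}")
```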

  8. SU-E-J-30: Benchmark Image-Based TCP Calculation for Evaluation of PTV Margins for Lung SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, M; Chetty, I; Zhong, H

    2014-06-01

    Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans with 3 and 5 mm margins were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3 mm margin plans, and between 0.29% and 6.3% for 5 mm margin plans. As the PTV margin reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP errors decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.

  9. Evaluation of triclosan in Minnesota lakes and rivers: Part II - human health risk assessment.

    PubMed

    Yost, Lisa J; Barber, Timothy R; Gentry, P Robinan; Bock, Michael J; Lyndall, Jennifer L; Capdevielle, Marie C; Slezak, Brian P

    2017-08-01

    Triclosan, an antimicrobial compound found in consumer products, has been detected in low concentrations in Minnesota municipal wastewater treatment plant (WWTP) effluent. This assessment evaluates potential health risks for exposure of adults and children to triclosan in Minnesota surface water, sediments, and fish. Potential exposures via fish consumption are considered for recreational or subsistence-level consumers. This assessment uses two chronic oral toxicity benchmarks, which bracket other available toxicity values. The first benchmark is a lower bound on a benchmark dose associated with a 10% risk (BMDL10) of 47 mg per kilogram per day (mg/kg-day) for kidney effects in hamsters. This value was identified as the most sensitive endpoint and species in a review by Rodricks et al. (2010) and is used herein to derive an estimated reference dose (RfD(Rodricks)) of 0.47 mg/kg-day. The second benchmark is a reference dose (RfD) of 0.047 mg/kg-day derived from a no observed adverse effect level (NOAEL) of 10 mg/kg-day for hepatic and hematopoietic effects in mice (Minnesota Department of Health [MDH] 2014). Based on conservative assumptions regarding human exposures to triclosan, calculated risk estimates are far below levels of concern. These estimates are likely to overestimate risks for potential receptors, particularly because sample locations were generally biased towards known discharges (i.e., WWTP effluent). Copyright © 2017 Elsevier Inc. All rights reserved.
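
    The arithmetic behind the first benchmark above is the standard point-of-departure to reference-dose conversion, RfD = POD / (product of uncertainty factors). The abstract gives only the BMDL10 and the resulting RfD; the split of the implied composite factor of 100 into two factors of 10 below is an assumption for illustration.

```python
# Generic POD-to-RfD arithmetic: RfD = POD / (product of uncertainty factors).
bmdl10 = 47.0           # mg/kg-day, BMDL10 for kidney effects (from the abstract)
uf_interspecies = 10.0  # animal-to-human extrapolation (assumed)
uf_intraspecies = 10.0  # human variability (assumed)

rfd = bmdl10 / (uf_interspecies * uf_intraspecies)
print(f"Estimated RfD = {rfd:.2f} mg/kg-day")  # 0.47 mg/kg-day, matching the abstract
```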

  10. Comparative risk assessment of alcohol, tobacco, cannabis and other illicit drugs using the margin of exposure approach.

    PubMed

    Lachenmeier, Dirk W; Rehm, Jürgen

    2015-01-30

    A comparative risk assessment of drugs including alcohol and tobacco using the margin of exposure (MOE) approach was conducted. The MOE is defined as the ratio between a toxicological threshold (benchmark dose) and the estimated human intake. Median lethal dose values from animal experiments were used to derive the benchmark dose. The human intake was calculated for individual scenarios and population-based scenarios. The MOE was calculated using probabilistic Monte Carlo simulations. The benchmark dose values ranged from 2 mg/kg bodyweight for heroin to 531 mg/kg bodyweight for alcohol (ethanol). For individual exposure, the four substances alcohol, nicotine, cocaine and heroin fall into the "high risk" category with MOE < 10; the rest of the compounds except THC fall into the "risk" category with MOE < 100. On a population scale, only alcohol would fall into the "high risk" category, and cigarette smoking would fall into the "risk" category, while all other agents (opiates, cocaine, amphetamine-type stimulants, ecstasy, and benzodiazepines) had MOEs > 100, and cannabis had a MOE > 10,000. The toxicological MOE approach validates epidemiological and social science-based drug ranking approaches, especially in regard to the positions of alcohol and tobacco (high risk) and cannabis (low risk).
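
    A minimal sketch of the probabilistic MOE idea described above: the benchmark dose and the intake are both treated as uncertain, sampled by Monte Carlo, and the MOE is summarized as a distribution. The lognormal distributions and their parameters are illustrative assumptions, not the ones used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# MOE = benchmark dose / estimated human intake, both sampled with uncertainty.
bmd = rng.lognormal(mean=np.log(531.0), sigma=0.2, size=n)    # mg/kg bw (ethanol-like)
intake = rng.lognormal(mean=np.log(10.0), sigma=0.6, size=n)  # mg/kg bw per day (assumed)

moe = bmd / intake
print(f"median MOE = {np.median(moe):.0f}")
print(f"5th percentile MOE = {np.percentile(moe, 5):.0f}")
print(f"share of simulated cases with MOE < 100: {100 * np.mean(moe < 100):.1f}%")
```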

  11. Comparative risk assessment of tobacco smoke constituents using the margin of exposure approach: the neglected contribution of nicotine

    PubMed Central

    Baumung, Claudia; Rehm, Jürgen; Franke, Heike; Lachenmeier, Dirk W.

    2016-01-01

    Nicotine was not included in previous efforts to identify the most important toxicants of tobacco smoke. A health risk assessment of nicotine for smokers of cigarettes was conducted using the margin of exposure (MOE) approach, and the results were compared to literature MOEs of various other tobacco toxicants. The MOE is defined as the ratio between a toxicological threshold (benchmark dose) and the estimated human intake. Dose-response modelling of human and animal data was used to derive the benchmark dose. The MOE was calculated using probabilistic Monte Carlo simulations for daily cigarette smokers. Benchmark dose values ranged from 0.004 mg/kg bodyweight for symptoms of intoxication in children to 3 mg/kg bodyweight for mortality in animals; MOEs ranged from below 1 up to 7.6, indicating a considerable consumer risk. The dimension of the MOEs is similar to that of other tobacco toxicants with high concerns relating to adverse health effects, such as acrolein or formaldehyde. Owing to the lack of toxicological data, in particular relating to cancer, long-term animal testing studies for nicotine are urgently necessary. There is an immediate need for action concerning the risk of nicotine, also with regard to electronic cigarettes and smokeless tobacco. PMID:27759090

  12. Comparative risk assessment of alcohol, tobacco, cannabis and other illicit drugs using the margin of exposure approach

    PubMed Central

    Lachenmeier, Dirk W.; Rehm, Jürgen

    2015-01-01

    A comparative risk assessment of drugs including alcohol and tobacco using the margin of exposure (MOE) approach was conducted. The MOE is defined as the ratio between a toxicological threshold (benchmark dose) and the estimated human intake. Median lethal dose values from animal experiments were used to derive the benchmark dose. The human intake was calculated for individual scenarios and population-based scenarios. The MOE was calculated using probabilistic Monte Carlo simulations. The benchmark dose values ranged from 2 mg/kg bodyweight for heroin to 531 mg/kg bodyweight for alcohol (ethanol). For individual exposure, the four substances alcohol, nicotine, cocaine and heroin fall into the “high risk” category with MOE < 10; the rest of the compounds except THC fall into the “risk” category with MOE < 100. On a population scale, only alcohol would fall into the “high risk” category, and cigarette smoking would fall into the “risk” category, while all other agents (opiates, cocaine, amphetamine-type stimulants, ecstasy, and benzodiazepines) had MOEs > 100, and cannabis had a MOE > 10,000. The toxicological MOE approach validates epidemiological and social science-based drug ranking approaches, especially in regard to the positions of alcohol and tobacco (high risk) and cannabis (low risk). PMID:25634572

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Grace L.; Department of Health Services Research, The University of Texas MD Anderson Cancer Center, Houston, Texas; Jiang, Jing

    Purpose: High-quality treatment for intact cervical cancer requires external radiation therapy, brachytherapy, and chemotherapy, carefully sequenced and completed without delays. We sought to determine how frequently current treatment meets quality benchmarks and whether new technologies have influenced patterns of care. Methods and Materials: By searching diagnosis and procedure claims in MarketScan, an employment-based health care claims database, we identified 1508 patients with nonmetastatic, intact cervical cancer treated from 1999 to 2011, who were <65 years of age and received >10 fractions of radiation. Treatments received were identified using procedure codes and compared with 3 quality benchmarks: receipt of brachytherapy, receipt of chemotherapy, and radiation treatment duration not exceeding 63 days. The Cochran-Armitage test was used to evaluate temporal trends. Results: Seventy-eight percent of patients (n=1182) received brachytherapy, with brachytherapy receipt stable over time (Cochran-Armitage P-trend=.15). Among patients who received brachytherapy, 66% had high-dose-rate and 34% had low-dose-rate treatment, although use of high-dose-rate brachytherapy steadily increased to 75% by 2011 (P-trend<.001). Eighteen percent of patients (n=278) received intensity modulated radiation therapy (IMRT), and IMRT receipt increased to 37% by 2011 (P-trend<.001). Only 2.5% of patients (n=38) received IMRT in the setting of brachytherapy omission. Overall, 79% of patients (n=1185) received chemotherapy, and chemotherapy receipt increased to 84% by 2011 (P-trend<.001). Median radiation treatment duration was 56 days (interquartile range, 47-65 days); however, duration exceeded 63 days in 36% of patients (n=543). Although 98% of patients received at least 1 benchmark treatment, only 44% received treatment that met all 3 benchmarks. With more stringent indicators (brachytherapy, ≥4 chemotherapy cycles, and duration not exceeding 56 days), only 25% of patients received treatment that met all benchmarks. Conclusion: In this cohort, most cervical cancer patients received treatment that did not comply with all 3 benchmarks for quality treatment. In contrast to increasing receipt of newer radiation technologies, there was little improvement in receipt of essential treatment benchmarks.

  14. Benchmarking the MCNP code for Monte Carlo modelling of an in vivo neutron activation analysis system.

    PubMed

    Natto, S A; Lewis, D G; Ryde, S J

    1998-01-01

    The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.

  15. Development of a flattening filter free multiple source model for use as an independent, Monte Carlo, dose calculation, quality assurance tool for clinical trials.

    PubMed

    Faught, Austin M; Davidson, Scott E; Popple, Richard; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core-Houston (IROC-H) Quality Assurance Center (formerly the Radiological Physics Center) has reported varying levels of compliance from their anthropomorphic phantom auditing program. IROC-H studies have suggested that one source of disagreement between institution-submitted calculated doses and measurement is the accuracy of the institution's treatment planning system dose calculations and the heterogeneity corrections used. In order to audit this step of the radiation therapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Varian flattening filter free (FFF) 6 MV and FFF 10 MV therapeutic x-ray beams were commissioned based on central axis depth dose data from a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open-field measurements in a water tank for field sizes ranging from 3 × 3 cm² to 40 × 40 cm². The models were then benchmarked against IROC-H's anthropomorphic head and neck phantom and lung phantom measurements. Validation results, assessed with a ±2%/2 mm gamma criterion, showed average agreement of 99.9% and 99.0% for central axis depth dose data for the FFF 6 MV and FFF 10 MV models, respectively. Dose profile agreement using the same evaluation technique averaged 97.8% and 97.9% for the respective models. Phantom benchmarking comparisons were evaluated with a ±3%/2 mm gamma criterion, and agreement averaged 90.1% and 90.8% for the respective models. Multiple source models for Varian FFF 6 MV and FFF 10 MV beams have been developed, validated, and benchmarked for inclusion in an independent dose calculation quality assurance tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
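
    The gamma criterion used above combines a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail metric per point. The sketch below is a brute-force 1-D global gamma analysis on synthetic depth-dose curves, intended only to illustrate the definition; it is not an optimized or clinically validated implementation, and the curves and tolerances shown are assumptions.

```python
import numpy as np

def gamma_pass_rate_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0):
    """1-D global gamma analysis. dd is the dose criterion as a fraction of the
    reference maximum; dta is the distance-to-agreement in the units of x."""
    d_norm = dd * np.max(d_ref)
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        # Gamma for one reference point: minimum combined dose/distance metric
        # over all evaluated points.
        g = np.sqrt(((x_eval - xr) / dta) ** 2 + ((d_eval - dr) / d_norm) ** 2)
        gammas.append(g.min())
    gammas = np.array(gammas)
    return 100 * np.mean(gammas <= 1.0)

# Hypothetical depth-dose curves: reference calculation vs evaluated calculation.
x = np.linspace(0, 200, 201)              # depth (mm)
ref = 100 * np.exp(-0.01 * x)             # reference (%)
test = ref * (1 + 0.005 * np.sin(x / 10)) # evaluated, with small synthetic deviations

print(f"gamma pass rate (2%/2 mm): {gamma_pass_rate_1d(x, ref, x, test, 0.02, 2.0):.1f}%")
```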

  16. Transcriptomic Dose-Response Analysis for Mode of Action ...

    EPA Pesticide Factsheets

    Microarray and RNA-seq technologies can play an important role in assessing the health risks associated with environmental exposures. The utility of gene expression data to predict hazard has been well documented. Early toxicogenomics studies used relatively high, single doses with minimal replication. Thus, they were not useful in understanding health risks at environmentally relevant doses. Until the past decade, application of toxicogenomics in dose-response assessment and determination of chemical mode of action has been limited. New transcriptomic biomarkers have evolved to detect chemical hazards in multiple tissues, together with pathway methods to study biological effects across the full dose-response range and critical time course. Comprehensive low-dose datasets are now available and, with the use of transcriptomic benchmark dose estimation techniques within a mode-of-action framework, the ability to incorporate informative genomic data into human health risk assessment has substantially improved. The key advantage of applying transcriptomic technology to risk assessment is both the sensitivity and the comprehensive examination of direct and indirect molecular changes that lead to adverse outcomes. Book chapter on the future application of toxicogenomics technologies for mode-of-action determination and risk assessment.

  17. Recommended approaches in the application of ...

    EPA Pesticide Factsheets

    ABSTRACT: Only a fraction of chemicals in commerce have been fully assessed for their potential hazards to human health, due to difficulties involved in conventional regulatory tests. It has recently been proposed that quantitative transcriptomic data can be used to determine a benchmark dose (BMD) and estimate a point of departure (POD). Several studies have shown that transcriptional PODs correlate with PODs derived from analysis of pathological changes, but there is no consensus on how the genes that are used to derive a transcriptional POD should be selected. Because of the very large number of unrelated genes in gene expression data, the process of selecting subsets of informative genes is a major challenge. We used published microarray data from studies on rats exposed orally to multiple doses of six chemicals for 5, 14, 28, and 90 days. We evaluated eight different approaches to select genes for POD derivation and compared them to three previously proposed approaches. The relationships between transcriptional BMDs derived using these 11 approaches were compared with PODs derived from apical data that might be used in a human health risk assessment. We found that transcriptional benchmark dose values for all 11 approaches were remarkably aligned with different apical PODs, while a subset of between 3 and 8 of the approaches met standard statistical criteria across the 5-, 14-, 28-, and 90-day time points and thus qualify as effective estimates of apical PODs. Our r

  18. Benchmark dose and the three Rs. Part I. Getting more information from the same number of animals.

    PubMed

    Slob, Wout

    2014-08-01

    Evaluating dose-response data using the benchmark dose (BMD) approach rather than the no observed adverse effect level (NOAEL) approach implies a considerable step forward from the perspective of the three Rs (Reduction, Replacement, and Refinement), in particular the R of reduction: more information is obtained from the same number of animals, or, vice versa, similar information may be obtained from fewer animals. The first part of this twin paper focusses on the former, the second on the latter aspect. Regarding the former, the BMD approach provides more information from any given dose-response dataset in various ways. First, the BMDL (= BMD lower confidence bound) provides more information by its more explicit definition. Further, as compared to the NOAEL approach, the BMD approach results in more statistical precision in the value of the point of departure (PoD) for deriving exposure limits. While some of the animals in the study do not directly contribute to the numerical value of a NOAEL, all animals are effectively used and do contribute to a BMDL. In addition, the BMD approach allows for combining similar datasets for the same chemical (e.g., both sexes) in a single analysis, which further increases precision. By combining a dose-response dataset with similar historical data for other chemicals, the precision can be increased even further. Further, the BMD approach results in more precise estimates for relative potency factors (RPFs, or TEFs). And finally, the BMD approach is not only more precise, it also allows for quantification of the precision in the BMD estimate, which is not possible in the NOAEL approach.

  19. BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets

    EPA Science Inventory

    Regulatory agencies increasingly apply benchmark dose (BMD) modeling to determine points of departure in human risk assessments. BMDExpress applies BMD modeling to transcriptomics datasets and groups genes to biological processes and pathways for rapid assessment of doses at whic...

  20. Benchmark Dose Modeling Estimates of the Concentrations of Inorganic Arsenic That Induce Changes to the Neonatal Transcriptome, Proteome, and Epigenome in a Pregnancy Cohort.

    PubMed

    Rager, Julia E; Auerbach, Scott S; Chappell, Grace A; Martin, Elizabeth; Thompson, Chad M; Fry, Rebecca C

    2017-10-16

    Prenatal inorganic arsenic (iAs) exposure influences the expression of critical genes and proteins associated with adverse outcomes in newborns, in part through epigenetic mediators. The doses at which these genomic and epigenomic changes occur have yet to be evaluated in the context of dose-response modeling. The goal of the present study was to estimate iAs doses that correspond to changes in transcriptomic, proteomic, epigenomic, and integrated multi-omic signatures in human cord blood through benchmark dose (BMD) modeling. Genome-wide DNA methylation, microRNA expression, mRNA expression, and protein expression levels in cord blood were modeled against total urinary arsenic (U-tAs) levels from pregnant women exposed to varying levels of iAs. Dose-response relationships were modeled in BMDExpress, and BMDs representing 10% response levels were estimated. Overall, DNA methylation changes were estimated to occur at lower exposure concentrations in comparison to other molecular endpoints. Multi-omic module eigengenes were derived through weighted gene co-expression network analysis, representing co-modulated signatures across transcriptomic, proteomic, and epigenomic profiles. One module eigengene was associated with decreased gestational age occurring alongside increased iAs exposure. Genes/proteins within this module eigengene showed enrichment for organismal development, including potassium voltage-gated channel subfamily Q member 1 (KCNQ1), an imprinted gene showing differential methylation and expression in response to iAs. Modeling of this prioritized multi-omic module eigengene resulted in a BMD(BMDL) of 58(45) μg/L U-tAs, which was estimated to correspond to drinking water arsenic concentrations of 51(40) μg/L. Results are in line with epidemiological evidence supporting effects of prenatal iAs occurring at levels <100 μg As/L urine. Together, findings present a variety of BMD measures to estimate doses at which prenatal iAs exposure influences neonatal outcome-relevant transcriptomic, proteomic, and epigenomic profiles.

  1. Benchmark dose for cadmium exposure and elevated N-acetyl-β-D-glucosaminidase: a meta-analysis.

    PubMed

    Liu, CuiXia; Li, YuBiao; Zhu, ChunShui; Dong, ZhaoMin; Zhang, Kun; Zhao, YanBin; Xu, YiLu

    2016-10-01

    Cadmium (Cd) is a well-known nephrotoxic contaminant, and N-acetyl-β-D-glucosaminidase (NAG) is considered to be an early and sensitive marker of tubular dysfunction. The link between Cd exposure and NAG level enables us to derive the benchmark dose (BMD) of Cd. Although several reports have already documented urinary Cd (UCd)-NAG relationships and BMD estimations, high heterogeneities arise due to the sub-populations (age, gender, and ethnicity) and BMD methodologies being employed. To clarify the influences that these variables exert, firstly, a random effect meta-analysis was performed in this study to correlate the UCd and NAG based on 92 datasets collected from 30 publications. Later, this established correlation (Ln(NAG) = 0.51 × Ln(UCd) + 0.83) was applied to derive the UCd BMD5 of 1.76 μg/g creatinine and 95% lower confidence limit of BMD5 (BMDL5) of 1.67 μg/g creatinine. While the regressions for different age groups and genders differed slightly, it is age and not gender that significantly affects BMD estimations. Ethnic differences may require further investigation given that limited data is currently available. Based on a comprehensive and systematic literature review, this study is a new attempt to quantify the UCd-NAG link and estimate BMD.

  2. Benchmark Dose Software (BMDS) Development and ...

    EPA Pesticide Factsheets

    This report is intended to provide an overview of beta version 1.0 of the implementation of a model of repeated measures data referred to as the Toxicodiffusion model. The implementation described here represents the first steps towards integration of the Toxicodiffusion model into the EPA benchmark dose software (BMDS). This version runs from within BMDS 2.0 using an option screen for making model selection, as is done for other models in the BMDS 2.0 suite.

  3. Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria

    ERIC Educational Resources Information Center

    Titze, Ingo R.; Hunter, Eric J.

    2015-01-01

    Purpose: School-teachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method: Vibration dosimetry is reformulated with the inclusion of collision stress.…

  4. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.

  5. BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets(SoTC)

    EPA Science Inventory

    Background: Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to estimate acceptable exposure...

  6. BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets (STC symposium)

    EPA Science Inventory

    Background: Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to estimate acceptable exposure...

  7. Altered operant responding for motor reinforcement and the determination of benchmark doses following perinatal exposure to low-level 2,3,7,8-tetrachlorodibenzo-p-dioxin.

    PubMed

    Markowski, V P; Zareba, G; Stern, S; Cox, C; Weiss, B

    2001-06-01

    Pregnant Holtzman rats were exposed to a single oral dose of 0, 20, 60, or 180 ng/kg 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) on the 18th day of gestation. Their adult female offspring were trained to respond on a lever for brief opportunities to run in specially designed running wheels. Once they had begun responding on a fixed-ratio 1 (FR1) schedule of reinforcement, the fixed-ratio requirement for lever pressing was increased at five-session intervals to values of FR2, FR5, FR10, FR20, and FR30. We examined vaginal cytology after each behavior session to track estrous cyclicity. Under each of the FR values, perinatal TCDD exposure produced a significant dose-related reduction in the number of earned opportunities to run, the lever response rate, and the total number of revolutions in the wheel. Estrous cyclicity was not affected. Because of the consistent dose-response relationship at all FR values, we used the behavioral data to calculate benchmark doses based on displacements from modeled zero-dose performance of 1% (ED(01)) and 10% (ED(10)), as determined by a quadratic fit to the dose-response function. The mean ED(10) benchmark dose for earned run opportunities was 10.13 ng/kg with a 95% lower bound of 5.77 ng/kg. The corresponding ED(01) was 0.98 ng/kg with a 95% lower bound of 0.83 ng/kg. The mean ED(10) for total wheel revolutions was calculated as 7.32 ng/kg with a 95% lower bound of 5.41 ng/kg. The corresponding ED(01) was 0.71 ng/kg with a 95% lower bound of 0.60. These values should be viewed from the perspective of current human body burdens, whose average value, based on TCDD toxic equivalents, has been calculated as 13 ng/kg.
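
    The ED(01) and ED(10) values above come from a quadratic fit and are defined as the doses producing 1% and 10% displacements from the modeled zero-dose performance. A minimal sketch of that calculation is shown below; the quadratic coefficients are made up for illustration and are not the study's fitted values.

```python
import numpy as np

def ed_from_quadratic(b0, b1, b2, displacement):
    """Dose at which a quadratic model y(d) = b0 + b1*d + b2*d**2 falls below the
    modeled zero-dose value by the given fraction (0.10 for ED10, 0.01 for ED01).
    Illustrative only; the coefficients passed in below are hypothetical."""
    # Solve b2*d**2 + b1*d + displacement*b0 = 0 and keep the smallest positive root.
    roots = np.roots([b2, b1, displacement * b0])
    real = roots[np.isreal(roots)].real
    positive = real[real > 0]
    return positive.min() if positive.size else float("nan")

# Hypothetical coefficients for a response that declines with dose (dose in ng/kg):
b0, b1, b2 = 50.0, -0.45, 0.0005
print("ED10 ~", round(ed_from_quadratic(b0, b1, b2, 0.10), 1), "ng/kg")
print("ED01 ~", round(ed_from_quadratic(b0, b1, b2, 0.01), 1), "ng/kg")
```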

  8. Development of a chronic noncancer oral reference dose and drinking water screening level for sulfolane using benchmark dose modeling.

    PubMed

    Thompson, Chad M; Gaylor, David W; Tachovsky, J Andrew; Perry, Camarie; Carakostas, Michael C; Haws, Laurie C

    2013-12-01

    Sulfolane is a widely used industrial solvent that is often used for gas treatment (sour gas sweetening; hydrogen sulfide removal from shale and coal processes, etc.), and in the manufacture of polymers and electronics, and may be found in pharmaceuticals as a residual solvent used in the manufacturing processes. Sulfolane is considered a high production volume chemical with worldwide production around 18 000-36 000 tons per year. Given that sulfolane has been detected as a contaminant in groundwater, an important potential route of exposure is tap water ingestion. Because there are currently no federal drinking water standards for sulfolane in the USA, we developed a noncancer oral reference dose (RfD) based on benchmark dose modeling, as well as a tap water screening value that is protective of ingestion. Review of the available literature suggests that sulfolane is not likely to be mutagenic, clastogenic or carcinogenic, or pose reproductive or developmental health risks except perhaps at very high exposure concentrations. RfD values derived using benchmark dose modeling were 0.01-0.04 mg kg⁻¹ per day, although modeling of developmental endpoints resulted in higher values, approximately 0.4 mg kg⁻¹ per day. The lowest, most conservative, RfD of 0.01 mg kg⁻¹ per day was based on reduced white blood cell counts in female rats. This RfD was used to develop a tap water screening level that is protective of ingestion, viz. 365 µg l⁻¹. It is anticipated that these values, along with the hazard identification and dose-response modeling described herein, should be informative for risk assessors and regulators interested in setting health-protective drinking water guideline values for sulfolane. Copyright © 2012 John Wiley & Sons, Ltd.
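
    The last step in the record above, turning the 0.01 mg kg⁻¹ per day RfD into a tap water screening level, is a short piece of arithmetic. A sketch follows; the body weight, ingestion rate, and relative source contribution are generic defaults assumed here, so the result only approximates the published 365 µg l⁻¹.

```python
rfd = 0.01          # mg/kg-day, lowest RfD reported in the record
body_weight = 70.0  # kg, assumed adult body weight
intake = 2.0        # L/day, assumed drinking water ingestion rate
rsc = 1.0           # relative source contribution, assumed (water as the sole source)

screening_level = rfd * body_weight * rsc / intake * 1000.0  # µg/L
print(screening_level)  # 350 µg/L; the paper reports 365 µg/L, presumably from
                        # slightly different exposure factors
```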

  9. DOSE-RESPONSE ASSESSMENT FOR DEVELOPMENTAL TOXICITY III. STATISTICAL MODELS

    EPA Science Inventory

    Although quantitative modeling has been central to cancer risk assessment for years, the concept of dose-response modeling for developmental effects is relatively new. The benchmark dose (BMD) approach has been proposed for use with developmental (as well as other noncancer) endpo...

  10. Direct potable reuse microbial risk assessment methodology: Sensitivity analysis and application to State log credit allocations.

    PubMed

    Soller, Jeffrey A; Eftim, Sorina E; Nappier, Sharon P

    2018-01-01

    Understanding pathogen risks is a critically important consideration in the design of water treatment, particularly for potable reuse projects. As an extension to our published microbial risk assessment methodology to estimate infection risks associated with Direct Potable Reuse (DPR) treatment train unit process combinations, herein, we (1) provide an updated compilation of pathogen density data in raw wastewater and dose-response models; (2) conduct a series of sensitivity analyses to consider potential risk implications using updated data; (3) evaluate the risks associated with log credit allocations in the United States; and (4) identify reference pathogen reductions needed to consistently meet currently applied benchmark risk levels. Sensitivity analyses illustrated changes in cumulative annual risks estimates, the significance of which depends on the pathogen group driving the risk for a given treatment train. For example, updates to norovirus (NoV) raw wastewater values and use of a NoV dose-response approach, capturing the full range of uncertainty, increased risks associated with one of the treatment trains evaluated, but not the other. Additionally, compared to traditional log-credit allocation approaches, our results indicate that the risk methodology provides more nuanced information about how consistently public health benchmarks are achieved. Our results indicate that viruses need to be reduced by 14 logs or more to consistently achieve currently applied benchmark levels of protection associated with DPR. The refined methodology, updated model inputs, and log credit allocation comparisons will be useful to regulators considering DPR projects and design engineers as they consider which unit treatment processes should be employed for particular projects. Published by Elsevier Ltd.

  11. BMDExpress Data Viewer - A visualization Tool to Analyze BMDExpress Datasets (Health Canada Science Forum)

    EPA Science Inventory

    Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to determine points of departure. BMDExpres...

  12. Concordance of Transcriptional and Apical Benchmark Dose Levels for Conazole-Induced Liver Effects in Mice

    EPA Science Inventory

    ABSTRACT The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time informs mode of action determinations and improves quantitative risk assessments. Previous transcription-based microarra...

  13. Benchmarking Water Quality from Wastewater to Drinking Waters Using Reduced Transcriptome of Human Cells.

    PubMed

    Xia, Pu; Zhang, Xiaowei; Zhang, Hanxin; Wang, Pingping; Tian, Mingming; Yu, Hongxia

    2017-08-15

    One of the major challenges in environmental science is monitoring and assessing the risk of complex environmental mixtures. In vitro bioassays with limited key toxicological end points have been shown to be suitable to evaluate mixtures of organic pollutants in wastewater and recycled water. Omics approaches such as transcriptomics can monitor biological effects at the genome scale. However, few studies have applied omics approaches in the assessment of mixtures of organic micropollutants. Here, an omics approach was developed for profiling the bioactivity of 10 water samples ranging from wastewater to drinking water in human cells by a reduced human transcriptome (RHT) approach and dose-response modeling. Transcriptional expression of 1200 selected genes was measured by an Ampliseq technology in two cell lines, HepG2 and MCF7, that were exposed to eight serial dilutions of each sample. Concentration-effect models were used to identify differentially expressed genes (DEGs) and to calculate effect concentrations (ECs) of DEGs, which could be ranked to investigate low dose response. Furthermore, molecular pathways disrupted by different samples were evaluated by Gene Ontology (GO) enrichment analysis. The ability of RHT to represent bioactivity utilizing both HepG2 and MCF7 was shown to be comparable to the results of previous in vitro bioassays. Finally, the relative potencies of the mixtures indicated by RHT analysis were consistent with the chemical profiles of the samples. RHT analysis with human cells provides an efficient and cost-effective approach to benchmarking mixtures of micropollutants and may offer novel insight into the assessment of mixture toxicity in water.

  14. Modification and benchmarking of SKYSHINE-III for use with ISFSI cask arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertel, N.E.; Napolitano, D.G.

    1997-12-01

    Dry cask storage arrays are becoming more and more common at nuclear power plants in the United States. Title 10 of the Code of Federal Regulations, Part 72, limits doses at the controlled area boundary of these independent spent-fuel storage installations (ISFSI) to 0.25 mSv (25 mrem)/yr. The minimum controlled area boundaries of such a facility are determined by cask array dose calculations, which include direct radiation and radiation scattered by the atmosphere, also known as skyshine. NAC International (NAC) uses SKYSHINE-III to calculate the gamma-ray and neutron dose rates as a function of distance from ISFSI arrays. In this paper, we present modifications to the SKYSHINE-III that more explicitly model cask arrays. In addition, we have benchmarked the radiation transport methods used in SKYSHINE-III against ⁶⁰Co gamma-ray experiments and MCNP neutron calculations.

  15. Comparison of Monte Carlo and analytical dose computations for intensity modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Yepes, Pablo; Adair, Antony; Grosshans, David; Mirkovic, Dragan; Poenisch, Falk; Titt, Uwe; Wang, Qianxia; Mohan, Radhe

    2018-02-01

    To evaluate the effect of approximations in clinical analytical calculations performed by a treatment planning system (TPS) on dosimetric indices in intensity modulated proton therapy, TPS-calculated dose distributions were compared with dose distributions estimated by Monte Carlo (MC) simulations, calculated with the fast dose calculator (FDC), a system previously benchmarked against full MC. This study analyzed a total of 525 patients for four treatment sites (brain, head-and-neck, thorax and prostate). Dosimetric indices (D02, D05, D20, D50, D95, D98, EUD and Mean Dose) and a gamma-index analysis were utilized to evaluate the differences. The gamma-index passing rates for a 3%/3 mm criterion for voxels with a dose larger than 10% of the maximum dose had a median larger than 98% for all sites. The median difference for all dosimetric indices for target volumes was less than 2% for all cases. However, differences for target volumes as large as 10% were found for 2% of the thoracic patients. For organs at risk (OARs), the median absolute dose difference was smaller than 2 Gy for all indices and cohorts. However, absolute dose differences as large as 10 Gy were found for some small volume organs in brain and head-and-neck patients. This analysis concludes that for a fraction of the patients studied, TPS may overestimate the dose in the target by as much as 10%, while for some OARs the dose could be underestimated by as much as 10 Gy. Monte Carlo dose calculations may be needed to ensure more accurate dose computations to improve target coverage and sparing of OARs in proton therapy.
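
    Dosimetric indices such as D95 and D02 summarize the dose-volume histogram of a structure: D_x is the minimum dose received by the hottest x% of the volume. A minimal sketch of how such indices can be read off a voxel dose array is given below; the helper name and the toy dose distribution are assumptions, not the FDC or TPS implementation.

```python
import numpy as np

def dose_at_volume(dose_voxels, volume_percent):
    """D_x: minimum dose received by the hottest volume_percent of the structure.
    Hypothetical helper for illustration; equal voxel volumes are assumed."""
    return np.percentile(dose_voxels, 100.0 - volume_percent)

# Toy target dose distribution (Gy), roughly uniform around a 60 Gy prescription.
rng = np.random.default_rng(0)
target_dose = rng.normal(loc=60.0, scale=1.5, size=5000)

d95 = dose_at_volume(target_dose, 95)   # dose covering 95% of the volume
d02 = dose_at_volume(target_dose, 2)    # near-maximum dose
print(round(d95, 1), round(d02, 1), round(target_dose.mean(), 1))
```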

  16. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Finlay, J; Mesina, C

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to that of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6 – 15 MV includes the percentage depth dose (PDD) measured at SSD = 90 cm and output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. Off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data includes dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with large errors (up to 13%) observed in the buildup regions of the FDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreements are generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy for a deterministic algorithm is better than a convolution algorithm in heterogeneous medium.

  17. A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger

    2017-09-01

    Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. ITER organization hence recommends the use of MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', but for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and considering the important neutron flux attenuation ranging from 10¹⁴ down to 10⁸ n·cm⁻²·s⁻¹. Such code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.

  18. Risk assessment for consumer exposure to toluene diisocyanate (TDI) derived from polyurethane flexible foam.

    PubMed

    Arnold, Scott M; Collins, Michael A; Graham, Cynthia; Jolly, Athena T; Parod, Ralph J; Poole, Alan; Schupp, Thomas; Shiotsuka, Ronald N; Woolhiser, Michael R

    2012-12-01

    Polyurethanes (PU) are polymers made from diisocyanates and polyols for a variety of consumer products. It has been suggested that PU foam may contain trace amounts of residual toluene diisocyanate (TDI) monomers and present a health risk. To address this concern, the exposure scenario and health risks posed by sleeping on a PU foam mattress were evaluated. Toxicity benchmarks for key non-cancer endpoints (i.e., irritation, sensitization, respiratory tract effects) were determined by dividing points of departure by uncertainty factors. The cancer benchmark was derived using the USEPA Benchmark Dose Software. Results of previous migration and emission data of TDI from PU foam were combined with conservative exposure factors to calculate upper-bound dermal and inhalation exposures to TDI as well as a lifetime average daily dose to TDI from dermal exposure. For each non-cancer endpoint, the toxicity benchmark was divided by the calculated exposure to determine the margin of safety (MOS), which ranged from 200 (respiratory tract) to 3×10⁶ (irritation). Although available data indicate TDI is not carcinogenic, a theoretical excess cancer risk (1×10⁻⁷) was calculated. We conclude from this assessment that sleeping on a PU foam mattress does not pose TDI-related health risks to consumers. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Current modeling practice may lead to falsely high benchmark dose estimates.

    PubMed

    Ringblom, Joakim; Johanson, Gunnar; Öberg, Mattias

    2014-07-01

    Benchmark dose (BMD) modeling is increasingly used as the preferred approach to define the point-of-departure for health risk assessment of chemicals. As data are inherently variable, there is always a risk of selecting a model that defines a lower confidence bound of the BMD (BMDL) that, contrary to expectation, exceeds the true BMD. The aim of this study was to investigate how often and under what circumstances such anomalies occur under current modeling practice. Continuous data were generated from a realistic dose-effect curve by Monte Carlo simulations using four dose groups and a set of five different dose placement scenarios, group sizes between 5 and 50 animals and coefficients of variation of 5-15%. The BMD calculations were conducted using nested exponential models, as most BMD software use nested approaches. "Non-protective" BMDLs (higher than the true BMD) were frequently observed, in some scenarios reaching 80%. The phenomenon was mainly related to the selection of the non-sigmoidal exponential model (Effect = a·e^(b·dose)). In conclusion, non-sigmoid models should be used with caution as they may underestimate the risk, illustrating that awareness of the model selection process and sound identification of the point-of-departure is vital for health risk assessment. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
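
    For the non-sigmoidal exponential model singled out in the record, the BMD has a closed form once the benchmark response (BMR) is defined as a relative change in the mean: with Effect = a·e^(b·dose), BMD = ln(1 + BMR)/b. The sketch below fits that model by least squares to simulated data and reads off the BMD; the simulated design, the 5% BMR, and the omission of a BMDL calculation are simplifying assumptions for illustration, not the study's procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def expo(dose, a, b):
    # Non-sigmoidal exponential model: Effect = a * exp(b * dose)
    return a * np.exp(b * dose)

# Simulate a small continuous-response study (assumed design and parameters).
rng = np.random.default_rng(1)
doses = np.repeat([0.0, 1.0, 3.0, 10.0], 10)        # four dose groups, n = 10 each
true_a, true_b, cv = 100.0, 0.02, 0.10
response = expo(doses, true_a, true_b) * (1.0 + cv * rng.standard_normal(doses.size))

(a_hat, b_hat), _ = curve_fit(expo, doses, response, p0=(100.0, 0.01))

bmr = 0.05                                  # 5% change in the mean relative to background
bmd_est = np.log(1.0 + bmr) / b_hat
bmd_true = np.log(1.0 + bmr) / true_b
print(f"estimated BMD: {bmd_est:.2f}, true BMD: {bmd_true:.2f}")
```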

  20. Evaluating MoE and its Uncertainty and Variability for Food Contaminants (EuroTox presentation)

    EPA Science Inventory

    Margin of Exposure (MoE), is a metric for quantifying the relationship between exposure and hazard. Ideally, it is the ratio of the dose associated with hazard and an estimate of exposure. For example, hazard may be characterized by a benchmark dose (BMD), and, for food contami...

  1. 75 FR 40729 - Residues of Quaternary Ammonium Compounds, N-Alkyl (C12-14

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-14

    .... Systemic toxicity occurs after absorption and distribution of the chemical to tissues in the body. Such... identified (the LOAEL) or a Benchmark Dose (BMD) approach is sometimes used for risk assessment. Uncertainty.... No systemic effects observed up to 20 mg/ kg/day, highest dose of technical that could be tested...

  2. Concordance of transcriptional and apical benchmark dose levels for conazole-induced liver effects in mice

    EPA Science Inventory

    The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time can inform mode of action and improve quantitative risk assessment. Previous research identified a 330-gene cluster commonly resp...

  3. Issues in the Design and Interpretation of Chronic Toxicity and Carcinogenicity Studies in Rodents: Approaches to Dose Selection

    EPA Science Inventory

    For more than three decades chronic studies in rodents have been the benchmark for assessing the potential long-term toxicity, and particularly the carcinogenicity, of chemicals. With doses typically administered for about 2 years (18 months to lifetime), the rodent bioassay has ...

  4. A health risk benchmark for the neurologic effects of styrene: comparison with NOAEL/LOAEL approach.

    PubMed

    Rabovsky, J; Fowles, J; Hill, M D; Lewis, D C

    2001-02-01

    Benchmark dose (BMD) analysis was used to estimate an inhalation benchmark concentration for styrene neurotoxicity. Quantal data on neuropsychologic test results from styrene-exposed workers [Mutti et al. (1984). American Journal of Industrial Medicine, 5, 275-286] were used to quantify neurotoxicity, defined as the percent of tested workers who responded abnormally to ≥1, ≥2, or ≥3 out of a battery of eight tests. Exposure was based on previously published results on mean urinary mandelic- and phenylglyoxylic acid levels in the workers, converted to air styrene levels (15, 44, 74, or 115 ppm). Nonstyrene-exposed workers from the same region served as a control group. Maximum-likelihood estimates (MLEs) and BMDs at 5 and 10% response levels of the exposed population were obtained from log-normal analysis of the quantal data. The highest MLE was 9 ppm (BMD = 4 ppm) styrene and represents abnormal responses to ≥3 tests by 10% of the exposed population. The most health-protective MLE was 2 ppm styrene (BMD = 0.3 ppm) and represents abnormal responses to ≥1 test by 5% of the exposed population. A no observed adverse effect level/lowest observed adverse effect level (NOAEL/LOAEL) analysis of the same quantal data showed workers in all styrene exposure groups responded abnormally to ≥1, ≥2, or ≥3 tests, compared to controls, and the LOAEL was 15 ppm. A comparison of the BMD and NOAEL/LOAEL analyses suggests that at air styrene levels below the LOAEL, a segment of the worker population may be adversely affected. The benchmark approach will be useful for styrene noncancer risk assessment purposes by providing a more accurate estimate of potential risk that should, in turn, help to reduce the uncertainty that is a common problem in setting exposure levels.

  5. A diversity index for model space selection in the estimation of benchmark and infectious doses via model averaging.

    PubMed

    Kim, Steven B; Kodell, Ralph L; Moon, Hojin

    2014-03-01

    In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.

  6. Derivation of a no-significant-risk-level for tetrabromobisphenol A based on a threshold non-mutagenic cancer mode of action.

    PubMed

    Pecquet, Alison M; Martinez, Jeanelle M; Vincent, Melissa; Erraguntla, Neeraja; Dourson, Michael

    2018-06-01

    A no-significant-risk-level of 20 mg day⁻¹ was derived for tetrabromobisphenol A (TBBPA). Uterine tumors (adenomas, adenocarcinomas, and malignant mixed Müllerian) observed in female Wistar Han rats from a National Toxicology Program 2-year cancer bioassay were identified as the critical effect. Studies suggest that TBBPA is acting through a non-mutagenic mode of action. Thus, the most appropriate approach to derivation of a cancer risk value based on US Environmental Protection Agency guidelines is a threshold approach, akin to a cancer safe dose (RfDcancer). Using the National Toxicology Program data, we utilized Benchmark dose software to derive a benchmark dose lower limit (BMDL10) as the point of departure (POD) of 103 mg kg⁻¹ day⁻¹. The POD was adjusted to a human equivalent dose of 25.6 mg kg⁻¹ day⁻¹ using allometric scaling. We applied a composite adjustment factor of 100 to the POD to derive an RfDcancer of 0.26 mg kg⁻¹ day⁻¹. Based on a human body weight of 70 kg, the RfDcancer was adjusted to a no-significant-risk-level of 20 mg day⁻¹. This was compared to other available non-cancer and cancer risk values, and aligns well with our understanding of the underlying biology based on the toxicology data. Overall, the weight of evidence from animal studies indicates that TBBPA has low toxicity and suggests that high doses over long exposure durations are needed to induce uterine tumor formation. Future research needs include a thorough and detailed vetting of the proposed adverse outcome pathway, including further support for key events leading to uterine tumor formation and a quantitative weight of evidence analysis. Copyright © 2018 John Wiley & Sons, Ltd.
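
    The dose arithmetic in the record (BMDL10 → human equivalent dose → RfDcancer → no-significant-risk-level) can be reproduced approximately as follows. The rat body weight used for the BW^(3/4) allometric scaling is an assumption, since the abstract reports only the resulting human equivalent dose, so the numbers below differ slightly from the published 25.6, 0.26 and 20.

```python
pod = 103.0        # mg/kg-day, BMDL10 point of departure (from the record)
bw_rat = 0.25      # kg, assumed rat body weight for allometric scaling
bw_human = 70.0    # kg, human body weight (from the record)
af = 100.0         # composite adjustment factor (from the record)

# Body-weight^(3/4) allometric scaling, expressed per kg of body weight:
hed = pod * (bw_rat / bw_human) ** 0.25   # human equivalent dose, mg/kg-day
rfd_cancer = hed / af                     # threshold cancer reference dose, mg/kg-day
nsrl = rfd_cancer * bw_human              # no-significant-risk-level, mg/day

print(round(hed, 1), round(rfd_cancer, 2), round(nsrl, 1))
# ~25.2 mg/kg-day, ~0.25 mg/kg-day, ~17.6 mg/day (published: 25.6, 0.26, 20)
```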

  7. Intercomparison of Monte Carlo radiation transport codes to model TEPC response in low-energy neutron and gamma-ray fields.

    PubMed

    Ali, F; Waker, A J; Waller, E J

    2014-10-01

    Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. MutAIT: an online genetic toxicology data portal and analysis tools.

    PubMed

    Avancini, Daniele; Menzies, Georgina E; Morgan, Claire; Wills, John; Johnson, George E; White, Paul A; Lewis, Paul D

    2016-05-01

    Assessment of genetic toxicity and/or carcinogenic activity is an essential element of chemical screening programs employed to protect human health. Dose-response and gene mutation data are frequently analysed by industry, academia and governmental agencies for regulatory evaluations and decision making. Over the years, a number of efforts at different institutions have led to the creation and curation of databases to house genetic toxicology data, largely, with the aim of providing public access to facilitate research and regulatory assessments. This article provides a brief introduction to a new genetic toxicology portal called Mutation Analysis Informatics Tools (MutAIT) (www.mutait.org) that provides easy access to two of the largest genetic toxicology databases, the Mammalian Gene Mutation Database (MGMD) and TransgenicDB. TransgenicDB is a comprehensive collection of transgenic rodent mutation data initially compiled and collated by Health Canada. The updated MGMD contains approximately 50 000 individual mutation spectral records from the published literature. The portal not only gives access to an enormous quantity of genetic toxicology data, but also provides statistical tools for dose-response analysis and calculation of benchmark dose. Two important R packages for dose-response analysis are provided as web-distributed applications with user-friendly graphical interfaces. The 'drsmooth' package performs dose-response shape analysis and determines various points of departure (PoD) metrics and the 'PROAST' package provides algorithms for dose-response modelling. The MutAIT statistical tools, which are currently being enhanced, provide users with an efficient and comprehensive platform to conduct quantitative dose-response analyses and determine PoD values that can then be used to calculate human exposure limits or margins of exposure. © The Author 2015. Published by Oxford University Press on behalf of the UK Environmental Mutagen Society. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. An analysis of MCNP cross-sections and tally methods for low-energy photon emitters.

    PubMed

    Demarco, John J; Wallace, Robert E; Boedeker, Kirsten

    2002-04-21

    Monte Carlo calculations are frequently used to analyse a variety of radiological science applications using low-energy (10-1000 keV) photon sources. This study seeks to create a low-energy benchmark for the MCNP Monte Carlo code by simulating the absolute dose rate in water and the air-kerma rate for monoenergetic point sources with energies between 10 keV and 1 MeV. The analysis compares four cross-section datasets as well as the tally method for collision kerma versus absorbed dose. The total photon attenuation coefficient cross-section for low atomic number elements has changed significantly as cross-section data have changed between 1967 and 1989. Differences of up to 10% are observed in the photoelectric cross-section for water at 30 keV between the standard MCNP cross-section dataset (DLC-200) and the most recent XCOM/NIST tabulation. At 30 keV, the absolute dose rate in water at 1.0 cm from the source increases by 7.8% after replacing the DLC-200 photoelectric cross-sections for water with those from the XCOM/NIST tabulation. The differences in the absolute dose rate are analysed when calculated with either the MCNP absorbed dose tally or the collision kerma tally. Significant differences between the collision kerma tally and the absorbed dose tally can occur when using the DLC-200 attenuation coefficients in conjunction with a modern tabulation of mass energy-absorption coefficients.

  10. MO-F-16A-06: Implementation of a Radiation Exposure Monitoring System for Surveillance of Multi-Modality Radiation Dose Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, B; Kanal, K; Dickinson, R

    2014-06-15

    Purpose: We have implemented a commercially available Radiation Exposure Monitoring System (REMS) to enhance the processes of radiation dose data collection, analysis and alerting developed over the past decade at our sites of practice. REMS allows for consolidation of multiple radiation dose information sources and quicker alerting than previously developed processes. Methods: Thirty-nine x-ray producing imaging modalities were interfaced with the REMS: thirteen computed tomography scanners, sixteen angiography/interventional systems, nine digital radiography systems and one mammography system. A number of methodologies were used to provide dose data to the REMS: Modality Performed Procedure Step (MPPS) messages, DICOM Radiation Dose Structured Reports (RDSR), and DICOM header information. Once interfaced, the dosimetry information from each device underwent validation (first 15–20 exams) before release for viewing by end-users: physicians, medical physicists, technologists and administrators. Results: Before REMS, our diagnostic physics group pulled dosimetry data from seven disparate databases throughout the radiology, radiation oncology, cardiology, electrophysiology, anesthesiology/pain management and vascular surgery departments at two major medical centers and four associated outpatient clinics. With the REMS implementation, we now have one authoritative source of dose information for alerting, longitudinal analysis, dashboard/graphics generation and benchmarking. REMS provides immediate automatic dose alerts utilizing thresholds calculated through daily statistical analysis. This has streamlined our Closing the Loop process for estimated skin exposures in excess of our institutional specific substantial radiation dose level which relied on technologist notification of the diagnostic physics group and daily report from the radiology information system (RIS). REMS also automatically calculates the CT size-specific dose estimate (SSDE) as well as provides two-dimensional angulation dose maps for angiography/interventional procedures. Conclusion: REMS implementation has streamlined and consolidated the dosimetry data collection and analysis process at our institutions while eliminating manual entry error and providing immediate alerting and access to dosimetry data to both physicists and physicians. Brent Stewart has funded research through GE Healthcare.

  11. Methods for Derivation of Inhalation Reference Concentrations and Application of Inhalation Dosimetry

    EPA Pesticide Factsheets

    EPA's methodology for estimation of inhalation reference concentrations (RfCs) as benchmark estimates of the quantitative dose-response assessment of chronic noncancer toxicity for individual inhaled chemicals.

  12. Pediatric susceptibility to 18 industrial chemicals: a comparative analysis of newborn with young animals.

    PubMed

    Hasegawa, R; Hirata-Koizumi, M; Dourson, M; Parker, A; Hirose, A; Nakai, S; Kamata, E; Ema, M

    2007-04-01

    We comprehensively re-analyzed the toxicity data for 18 industrial chemicals from repeated oral exposures in newborn and young rats, which were previously published. Two new toxicity endpoints specific to this comparative analysis were identified: the first, the presumed no observed adverse effect level (pNOAEL), was estimated based on results of both main and dose-finding studies, and the second, the presumed unequivocally toxic level (pUETL), was defined as a clear toxic dose giving similar severity in both newborn and young rats. Based on the analyses of both pNOAEL and pUETL ratios between the different ages, newborn rats demonstrated greater susceptibility (at most 8-fold) to nearly two thirds of these 18 chemicals (mostly phenolic substances), and less or nearly equal sensitivity to the other chemicals. Exceptionally, one chemical showed toxicity only in newborn rats. In addition, Benchmark Dose Lower Bound (BMDL) estimates were calculated as an alternative endpoint. Most BMDLs were comparable to their corresponding pNOAELs and the overall correlation coefficient was 0.904. We discussed how our results can be incorporated into chemical risk assessment approaches to protect pediatric health from direct oral exposure to chemicals.

  13. Benchmarking the minimum Electron Beam (eBeam) dose required for the sterilization of space foods

    NASA Astrophysics Data System (ADS)

    Bhatia, Sohini S.; Wall, Kayley R.; Kerth, Chris R.; Pillai, Suresh D.

    2018-02-01

    As manned space missions extend in length, the safety, nutrition, acceptability, and shelf life of space foods are of paramount importance to NASA. Since food and mealtimes play a key role in reducing stress and boredom of prolonged missions, the quality of food in terms of appearance, flavor, texture, and aroma can have significant psychological ramifications on astronaut performance. The FDA, which oversees space foods, currently requires a minimum dose of 44 kGy for irradiated space foods. The underlying hypothesis was that commercial sterility of space foods could be achieved at a significantly lower dose, and this lowered dose would positively affect the shelf life of the product. Electron beam processed beef fajitas were used as an example NASA space food to benchmark the minimum eBeam dose required for sterility. A 15 kGy dose was able to achieve an approximately 10 log reduction in Shiga-toxin-producing Escherichia coli bacteria, and a 5 log reduction in Clostridium sporogenes spores. Furthermore, accelerated shelf life testing (ASLT) to determine sensory and quality characteristics under various conditions was conducted. Using multidimensional gas chromatography-olfactometry-mass spectrometry (MDGC-O-MS), numerous volatiles were shown to be dependent on the dose applied to the product. Furthermore, concentrations of off-flavor aroma compounds such as dimethyl sulfide were decreased at the reduced 15 kGy dose. The results suggest that the combination of conventional cooking combined with eBeam processing (15 kGy) can achieve the safety and shelf-life objectives needed for long-duration space foods.
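
    The reported log reductions at 15 kGy translate directly into approximate D10 values (dose per decade of microbial kill), a common way of expressing radiation resistance. The division below is only an illustration based on the abstract's numbers, not an analysis from the paper.

```python
dose_kGy = 15.0
log_reduction_ecoli = 10.0       # Shiga-toxin-producing E. coli (from the record)
log_reduction_sporogenes = 5.0   # Clostridium sporogenes spores (from the record)

d10_ecoli = dose_kGy / log_reduction_ecoli            # ~1.5 kGy per decade
d10_sporogenes = dose_kGy / log_reduction_sporogenes  # ~3.0 kGy per decade
print(d10_ecoli, d10_sporogenes)
```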

  14. The grout/glass performance assessment code system (GPACS) with verification and benchmarking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepho, M.G.; Sutherland, W.H.; Rittmann, P.D.

    1994-12-01

    GPACS is a computer code system for calculating water flow (unsaturated or saturated), solute transport, and human doses due to the slow release of contaminants from a waste form (in particular grout or glass) through an engineered system and through a vadose zone to an aquifer, well and river. This dual-purpose document is intended to serve as a user's guide and verification/benchmark document for the Grout/Glass Performance Assessment Code system (GPACS). GPACS can be used for low-level-waste (LLW) Glass Performance Assessment and many other applications including other low-level-waste performance assessments and risk assessments. Based on all the cases presented, GPACS is adequate (verified) for calculating water flow and contaminant transport in unsaturated-zone sediments and for calculating human doses via the groundwater pathway.

  15. Radioactive impacts on nekton species in the Northwest Pacific and humans more than one year after the Fukushima nuclear accident.

    PubMed

    Men, Wu; Deng, Fangfang; He, Jianhua; Yu, Wen; Wang, Fenfen; Li, Yiliang; Lin, Feng; Lin, Jing; Lin, Longshan; Zhang, Yusheng; Yu, Xingguang

    2017-10-01

    This study investigated the radioactive impacts on 10 nekton species in the Northwest Pacific more than one year after the Fukushima Nuclear Accident (FNA) from the two perspectives of contamination and harm. Squids were especially used for the spatial and temporal comparisons to demonstrate the impacts from the FNA. The radiation doses to nekton species and humans were assessed to link this radioactivity contamination to possible harm. The total dose rates to nektons were lower than the ERICA ecosystem screening benchmark of 10 μGy/h. Further dose-contribution analysis showed that the internal doses from the naturally occurring nuclide ²¹⁰Po were the main dose contributor. The dose rates from ¹³⁴Cs, ¹³⁷Cs, ⁹⁰Sr and ¹¹⁰ᵐAg were approximately three or four orders of magnitude lower than those from naturally occurring radionuclides. The ²¹⁰Po-derived dose was also the main contributor of the total human dose from immersion in the seawater and the ingestion of nekton species. The human doses from anthropogenic radionuclides were ~100 to ~10,000 times lower than the doses from naturally occurring radionuclides. A morbidity assessment was performed based on the Linear No Threshold assumptions of exposure and showed 7 additional cancer cases per 100,000,000 similarly exposed people. Taken together, there is no need for concern regarding the radioactive harm in the open ocean area of the Northwest Pacific. Copyright © 2017 Elsevier Inc. All rights reserved.
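
    Under the linear no-threshold assumption, excess cases scale linearly with dose through a nominal risk coefficient, so the reported 7 cases per 100,000,000 people implies a very small individual dose. The back-calculation below uses a generic ICRP-style morbidity coefficient as an assumption; the study's own coefficient may differ.

```python
excess_cases_per_person = 7.0 / 1.0e8   # morbidity reported in the record
risk_coefficient = 5.7e-2               # per Sv, assumed nominal cancer morbidity coefficient

implied_dose_Sv = excess_cases_per_person / risk_coefficient
print(f"implied individual dose ~ {implied_dose_Sv * 1e6:.1f} microsievert")  # ~1.2 µSv
```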

  16. In Search of a Time Efficient Approach to Crack and Delamination Growth Predictions in Composites

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Carvalho, Nelson

    2016-01-01

    Analysis benchmarking was used to assess the accuracy and time efficiency of algorithms suitable for automated delamination growth analysis. First, the Floating Node Method (FNM) was introduced and its combination with a simple exponential growth law (Paris Law) and Virtual Crack Closure technique (VCCT) was discussed. Implementation of the method into a user element (UEL) in Abaqus/Standard® was also presented. For the assessment of growth prediction capabilities, an existing benchmark case based on the Double Cantilever Beam (DCB) specimen was briefly summarized. Additionally, the development of new benchmark cases based on the Mixed-Mode Bending (MMB) specimen to assess the growth prediction capabilities under mixed-mode I/II conditions was discussed in detail. A comparison was presented, in which the benchmark cases were used to assess the existing low-cycle fatigue analysis tool in Abaqus/Standard® in comparison to the FNM-VCCT fatigue growth analysis implementation. The low-cycle fatigue analysis tool in Abaqus/Standard® was able to yield results that were in good agreement with the DCB benchmark example. Results for the MMB benchmark cases, however, only captured the trend correctly. The user element (FNM-VCCT) always yielded results that were in excellent agreement with all benchmark cases, at a fraction of the analysis time. The ability to assess the implementation of two methods in one finite element code illustrated the value of establishing benchmark solutions.
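
    The "simple exponential growth law (Paris Law)" referenced above is, in the delamination-growth context, usually written as a power law in the cyclic energy release rate, da/dN = C·G_max^m. The sketch below integrates such a law explicitly for delamination length versus cycles; the coefficients and the assumed dependence of G_max on delamination length are made up for illustration and do not represent the FNM-VCCT implementation or the benchmark geometry.

```python
# Paris-type delamination growth: da/dN = C * (G_max)^m
C, m = 0.05, 3.0        # made-up coefficients (length in mm, G in N/mm)
a = 20.0                # initial delamination length, mm
a_final = 60.0          # stop once the delamination reaches this length, mm

def g_max(length_mm):
    # Assumed monotonic relation between delamination length and peak cyclic
    # energy release rate under a constant applied load (purely illustrative).
    return 0.05 + 0.002 * length_mm   # N/mm

cycles = 0
dN = 1000               # cycle increment for the explicit integration
while a < a_final:
    a += C * g_max(a) ** m * dN
    cycles += dN
print(f"~{cycles} cycles to grow from 20 mm to {a:.1f} mm")
```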

  17. Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry.

    PubMed

    Sohrabpour, M; Hassanzadeh, M; Shahriari, M; Sharifzadeh, M

    2002-10-01

    The Monte Carlo transport code, MCNP, has been applied in simulating dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose-ratios, and efficiencies have been simulated as functions of product density. Simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The produced system design data were also found to agree quite favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling of various dose mapping exercises for gamma irradiators.

  18. Performance analysis of fusion nuclear-data benchmark experiments for light to heavy materials in MeV energy region with a neutron spectrum shifter

    NASA Astrophysics Data System (ADS)

    Murata, Isao; Ohta, Masayuki; Miyamaru, Hiroyuki; Kondo, Keitaro; Yoshida, Shigeo; Iida, Toshiyuki; Ochiai, Kentaro; Konno, Chikara

    2011-10-01

    Nuclear data are indispensable for development of fusion reactor candidate materials. However, benchmarking of the nuclear data in the MeV energy region is not yet adequate. In the present study, benchmark performance in the MeV energy region was investigated theoretically for experiments using a 14 MeV neutron source. We carried out a systematical analysis for light to heavy materials. As a result, the benchmark performance for the neutron spectrum was confirmed to be acceptable, while for gamma-rays it was not sufficiently accurate. Consequently, a spectrum shifter has to be applied. Beryllium had the best performance as a shifter. Moreover, a preliminary examination was made of whether it is really acceptable that only the spectrum before the last collision is considered in the benchmark performance analysis. It was pointed out that not only the last collision but also earlier collisions should be considered equally in the benchmark performance analysis.

  19. Hepatic transcriptomic alterations for N,N-dimethyl-p-toluidine (DMPT) and p-toluidine after 5-day exposure in rats.

    PubMed

    Dunnick, June K; Shockley, Keith R; Morgan, Daniel L; Brix, Amy; Travlos, Gregory S; Gerrish, Kevin; Michael Sanders, J; Ton, T V; Pandiri, Arun R

    2017-04-01

    N,N-dimethyl-p-toluidine (DMPT), an accelerant for methyl methacrylate monomers in medical devices, was a liver carcinogen in male and female F344/N rats and B6C3F1 mice in a 2-year oral exposure study. p-Toluidine, a structurally related chemical, was a liver carcinogen in mice but not in rats in an 18-month feed exposure study. In this current study, liver transcriptomic data were used to characterize mechanisms in DMPT and p-toluidine liver toxicity and for conducting benchmark dose (BMD) analysis. Male F344/N rats were exposed orally to DMPT or p-toluidine (0, 1, 6, 20, 60 or 120 mg/kg/day) for 5 days. The liver was examined for lesions and transcriptomic alterations. Both chemicals caused mild hepatic toxicity at 60 and 120 mg/kg and dose-related transcriptomic alterations in the liver. There were 511 liver transcripts differentially expressed for DMPT and 354 for p-toluidine at 120 mg/kg/day (false discovery rate threshold of 5 %). The liver transcriptomic alterations were characteristic of an anti-oxidative damage response (activation of the Nrf2 pathway) and hepatic toxicity. The top cellular processes in gene ontology (GO) categories altered in livers exposed to DMPT or p-toluidine were used for BMD calculations. The lower confidence bound benchmark doses for these chemicals were 2 mg/kg/day for DMPT and 7 mg/kg/day for p-toluidine. These studies show the promise of using 5-day target organ transcriptomic data to identify chemical-induced molecular changes that can serve as markers for preliminary toxicity risk assessment.

  20. Hand rub dose needed for a single disinfection varies according to product: a bias in benchmarking using indirect hand hygiene indicator.

    PubMed

    Girard, Raphaële; Aupee, Martine; Erb, Martine; Bettinger, Anne; Jouve, Alice

    2012-12-01

    The 3 ml volume currently used as the hand hygiene (HH) measure has been explored as the pertinent dose for an indirect indicator of HH compliance. A multicenter study was conducted in order to ascertain the required dose using different products. The average contact duration before drying was measured and compared with references. Effective hand coverage had to include the whole hand and the wrist. Two durations were chosen as points of reference: 30 s, as given by guidelines, and the duration validated by the European standard EN 1500. Each product was to be tested, using standardized procedures, by three nosocomial infection prevention teams, for three different doses (3, 2 and 1.5 ml). Data from 27 products and 1706 tests were analyzed. Depending on the product, the dose needed to ensure a 30-s contact duration in 75% of tests ranged from 2 ml to more than 3 ml, and the dose needed to ensure a contact duration exceeding the EN 1500 times in 75% of tests ranged from 1.5 ml to more than 3 ml. The interpretation is as follows: if different products are used, the volume utilized does not give an unbiased estimation of HH compliance. Other compliance evaluation methods remain necessary for efficient benchmarking. Copyright © 2012 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.

  1. What is a food and what is a medicinal product in the European Union? Use of the benchmark dose (BMD) methodology to define a threshold for "pharmacological action".

    PubMed

    Lachenmeier, Dirk W; Steffen, Christian; el-Atma, Oliver; Maixner, Sibylle; Löbell-Behrends, Sigrid; Kohl-Himmelseher, Matthias

    2012-11-01

    The decision criterion for the demarcation between foods and medicinal products in the EU is the significant "pharmacological action". Based on six examples of substances with ambivalent status, the benchmark dose (BMD) method is evaluated to provide a threshold for pharmacological action. Using significant dose-response models from clinical trial data or epidemiology in the literature, the BMD values were 63mg/day for caffeine, 5g/day for alcohol, 6mg/day for lovastatin, 769mg/day for glucosamine sulfate, 151mg/day for Ginkgo biloba extract, and 0.4mg/day for melatonin. The examples of caffeine and alcohol validate the approach because intake above the BMD clearly exhibits pharmacological action. Nevertheless, due to uncertainties in dose-response modelling as well as the need for additional uncertainty factors to consider differences in sensitivity within the human population, a "borderline range" on the dose-response curve remains. "Pharmacological action" has proven to be not very well suited as a binary decision criterion between foods and medicinal products. The European legislator should rethink the definition of medicinal products, as the current situation based on complicated case-by-case decisions on pharmacological action leads to an unregulated market flooded with potentially illegal food supplements. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Benchmark solutions for the galactic heavy-ion transport equations with energy and spatial coupling

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry D.; Townsend, Lawrence W.; Lamkin, Stanley L.; Wilson, John W.

    1991-01-01

    Nontrivial benchmark solutions are developed for the galactic heavy ion transport equations in the straight-ahead approximation with energy and spatial coupling. Analytical representations of the ion fluxes are obtained for a variety of sources with the assumption that the nuclear interaction parameters are energy independent. The method utilizes an analytical Laplace transform inversion to yield a closed form representation that is computationally efficient. The flux profiles are then used to predict ion dose profiles, which are important for shield design studies.

  3. Using physiologically based pharmacokinetic modeling and benchmark dose methods to derive an occupational exposure limit for N-methylpyrrolidone.

    PubMed

    Poet, T S; Schlosser, P M; Rodriguez, C E; Parod, R J; Rodwell, D E; Kirman, C R

    2016-04-01

    The developmental effects of NMP are well studied in Sprague-Dawley rats following oral, inhalation, and dermal routes of exposure. Short-term and chronic occupational exposure limit (OEL) values were derived using an updated physiologically based pharmacokinetic (PBPK) model for NMP, along with benchmark dose modeling. Two suitable developmental endpoints were evaluated for human health risk assessment: (1) for acute exposures, the increased incidence of skeletal malformations, an effect noted only at oral doses that were toxic to the dam and fetus; and (2) for repeated exposures to NMP, changes in fetal/pup body weight. Where possible, data from multiple studies were pooled to increase the predictive power of the dose-response data sets. For the purposes of internal dose estimation, the window of susceptibility was estimated for each endpoint, and was used in the dose-response modeling. A point of departure value of 390 mg/L (in terms of peak NMP in blood) was calculated for skeletal malformations based on pooled data from oral and inhalation studies. Acceptable dose-response model fits were not obtained using the pooled data for fetal/pup body weight changes. These data sets were also assessed individually, from which the geometric mean value obtained from the inhalation studies (470 mg*hr/L), was used to derive the chronic OEL. A PBPK model for NMP in humans was used to calculate human equivalent concentrations corresponding to the internal dose point of departure values. Application of a net uncertainty factor of 20-21, which incorporates data-derived extrapolation factors, to the point of departure values yields short-term and chronic occupational exposure limit values of 86 and 24 ppm, respectively. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
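
    The final steps described above reduce to simple arithmetic once the dose-response modelling and PBPK simulations are done: pool the internal-dose points of departure and divide the PBPK-derived human equivalent concentration by the net uncertainty factor. A minimal sketch follows; the per-study PODs and the human equivalent concentration are placeholders, since the real values come from the pooled BMD modelling and the human PBPK model.

        # Sketch of the final OEL arithmetic; all numbers are placeholders except the
        # net uncertainty factor range (20-21) quoted in the abstract.
        from statistics import geometric_mean

        pod_auc_by_study = [430.0, 520.0, 465.0]       # hypothetical internal-dose PODs, mg*hr/L
        pooled_pod = geometric_mean(pod_auc_by_study)  # cf. the 470 mg*hr/L geometric mean

        hec_ppm = 500.0             # placeholder human equivalent concentration from the PBPK model
        net_uncertainty_factor = 21
        chronic_oel_ppm = hec_ppm / net_uncertainty_factor
        print(f"Pooled POD {pooled_pod:.0f} mg*hr/L -> chronic OEL ~{chronic_oel_ppm:.0f} ppm")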

  4. Benchmark duration of work hours for development of fatigue symptoms in Japanese workers with adjustment for job-related stress.

    PubMed

    Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi

    2008-12-01

    The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
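
    The benchmark duration calculation follows the same logic as benchmark dose modelling for dichotomous outcomes: fit a logistic model for the probability of a fatigue symptom as a function of daily working hours, with the stress covariates held fixed, then solve for the duration at which the risk exceeds background by the benchmark response. The sketch below uses hypothetical coefficients, absorbs the worst-case stress covariates into the intercept, and assumes an added-risk definition of the benchmark response for illustration.

        # Hypothetical logistic model: P(symptom) = expit(b0 + b1 * working_hours).
        import math

        def expit(x):
            return 1.0 / (1.0 + math.exp(-x))

        def logit(p):
            return math.log(p / (1.0 - p))

        b0 = -6.0   # assumed intercept, including worst-case job-stress covariates
        b1 = 0.4    # assumed log-odds increase per daily working hour

        def benchmark_duration(bmr):
            """Daily working hours at which added risk over background equals `bmr`."""
            p_background = expit(b0)
            return (logit(p_background + bmr) - b0) / b1

        for bmr in (0.05, 0.10):
            print(f"BMR {bmr:.0%}: benchmark duration = {benchmark_duration(bmr):.1f} h/day")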

  5. Neutron Activation and Thermoluminescent Detector Responses to a Bare Pulse of the CEA Valduc SILENE Critical Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Celik, Cihangir; McMahan, Kimberly L.

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 11, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  6. Neutron skyshine from intense 14-MeV neutron source facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, T.; Hayashi, K.; Takahashi, A.

    1985-07-01

    The dose distribution and the spectrum variation of neutrons due to the skyshine effect have been measured with the high-efficiency rem counter, the multisphere spectrometer, and the NE-213 scintillator in the environment surrounding an intense 14-MeV neutron source facility. The dose distribution and the energy spectra of neutrons around the facility used as a skyshine source have also been measured to enable the absolute evaluation of the skyshine effect. The skyshine effect was analyzed by two multigroup Monte Carlo codes, NIMSAC and MMCR-2, by two discrete ordinates Sn codes, ANISN and DOT3.5, and by the shield structure design code for skyshine, SKYSHINE-II. The calculated results show good agreement with the measured results in absolute values. These experimental results should be useful as benchmark data for skyshine analysis and for shielding design of fusion facilities.

  7. SU-F-R-11: Designing Quality and Safety Informatics Through Implementation of a CT Radiation Dose Monitoring Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, JM; Samei, E; Departments of Physics, Electrical and Computer Engineering, and Biomedical Engineering, and Medical Physics Graduate Program, Duke University, Durham, NC

    2016-06-15

    Purpose: Recent legislative and accreditation requirements have driven rapid development and implementation of CT radiation dose monitoring solutions. Institutions must determine how to improve quality, safety, and consistency of their clinical performance. The purpose of this work was to design a strategy and meaningful characterization of results from an in-house, clinically-deployed dose monitoring solution. Methods: A dose monitoring platform was designed by our imaging physics group that focused on extracting protocol parameters, dose metrics, and patient demographics and size. Compared to most commercial solutions, which focus on individual exam alerts and global thresholds, the program sought to characterize overall consistency and targeted thresholds based on eight analytic interrogations. Those were based on explicit questions related to protocol application, national benchmarks, protocol and size-specific dose targets, operational consistency, outliers, temporal trends, intra-system variability, and consistent use of electronic protocols. Using historical data since the start of 2013, 95% and 99% intervals were used to establish yellow and amber parameterized dose alert thresholds, respectively, as a function of protocol, scanner, and size. Results: Quarterly reports have been generated for three hospitals for 3 quarters of 2015 totaling 27880, 28502, 30631 exams, respectively. Four adult and two pediatric protocols were higher than external institutional benchmarks. Four protocol dose levels were being inconsistently applied as a function of patient size. For the three hospitals, the minimum and maximum amber outlier percentages were [1.53%,2.28%], [0.76%,1.8%], [0.94%,1.17%], respectively. Compared with the electronic protocols, 10 protocols were found to be used with some inconsistency. Conclusion: Dose monitoring can satisfy requirements with global alert thresholds and patient dose records, but the real value is in optimizing patient-specific protocols, balancing image quality trade-offs that dose-reduction strategies promise, and improving the performance and consistency of a clinical operation. Data plots that capture patient demographics and scanner performance demonstrate that value.
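
    A stripped-down version of the threshold logic described above is sketched here: historical dose metrics are grouped by protocol, scanner, and patient-size bin, and the 95th and 99th percentiles of each group become the yellow and amber alert limits. The records, field names, and use of CTDIvol as the dose metric are assumptions for illustration, not the in-house schema.

        # Sketch: percentile-based alert thresholds from historical CT dose data,
        # grouped by protocol, scanner, and patient-size bin. Records are invented.
        from collections import defaultdict
        import numpy as np

        history = [
            {"protocol": "chest", "scanner": "CT1", "size_bin": "medium", "ctdi_vol": v}
            for v in (8.1, 9.4, 7.6, 10.2, 8.8, 9.9, 11.5, 8.3)
        ]

        groups = defaultdict(list)
        for rec in history:
            groups[(rec["protocol"], rec["scanner"], rec["size_bin"])].append(rec["ctdi_vol"])

        thresholds = {
            key: {"yellow": np.percentile(vals, 95), "amber": np.percentile(vals, 99)}
            for key, vals in groups.items()
        }

        def classify(key, ctdi_vol):
            t = thresholds[key]
            return "amber" if ctdi_vol > t["amber"] else "yellow" if ctdi_vol > t["yellow"] else "ok"

        print(classify(("chest", "CT1", "medium"), 11.2))  # "yellow" for this invented history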

  8. Analysis of a benchmark suite to evaluate mixed numeric and symbolic processing

    NASA Technical Reports Server (NTRS)

    Ragharan, Bharathi; Galant, David

    1992-01-01

    The suite of programs that formed the benchmark for a proposed advanced computer is described and analyzed. The features of the processor and its operating system that are tested by the benchmark are discussed. The computer codes and the supporting data for the analysis are given as appendices.

  9. Selection of appropriate tumour data sets for Benchmark Dose Modelling (BMD) and derivation of a Margin of Exposure (MoE) for substances that are genotoxic and carcinogenic: considerations of biological relevance of tumour type, data quality and uncertainty assessment.

    PubMed

    Edler, Lutz; Hart, Andy; Greaves, Peter; Carthew, Philip; Coulet, Myriam; Boobis, Alan; Williams, Gary M; Smith, Benjamin

    2014-08-01

    This article addresses a number of concepts related to the selection and modelling of carcinogenicity data for the calculation of a Margin of Exposure. It follows up on the recommendations put forward by the International Life Sciences Institute - European branch in 2010 on the application of the Margin of Exposure (MoE) approach to substances in food that are genotoxic and carcinogenic. The aims are to provide practical guidance on the relevance of animal tumour data for human carcinogenic hazard assessment, appropriate selection of tumour data for Benchmark Dose Modelling, and approaches for dealing with the uncertainty associated with the selection of data for modelling and, consequently, the derived Point of Departure (PoD) used to calculate the MoE. Although the concepts outlined in this article are interrelated, the background expertise needed to address each topic varies. For instance, the expertise needed to make a judgement on biological relevance of a specific tumour type is clearly different to that needed to determine the statistical uncertainty around the data used for modelling a benchmark dose. As such, each topic is dealt with separately to allow those with specialised knowledge to target key areas of guidance and provide a more in-depth discussion on each subject for those new to the concept of the Margin of Exposure approach. Copyright © 2013 ILSI Europe. Published by Elsevier Ltd. All rights reserved.

  10. Editor's Highlight: Application of Gene Set Enrichment Analysis for Identification of Chemically Induced, Biologically Relevant Transcriptomic Networks and Potential Utilization in Human Health Risk Assessment.

    PubMed

    Dean, Jeffry L; Zhao, Q Jay; Lambert, Jason C; Hawkins, Belinda S; Thomas, Russell S; Wesselkamper, Scott C

    2017-05-01

    The rate of new chemical development in commerce combined with a paucity of toxicity data for legacy chemicals presents a unique challenge for human health risk assessment. There is a clear need to develop new technologies and incorporate novel data streams to more efficiently inform derivation of toxicity values. One avenue of exploitation lies in the field of transcriptomics and the application of gene expression analysis to characterize biological responses to chemical exposures. In this context, gene set enrichment analysis (GSEA) was employed to evaluate tissue-specific, dose-response gene expression data generated following exposure to multiple chemicals for various durations. Patterns of transcriptional enrichment were evident across time and with increasing dose, and coordinated enrichment plausibly linked to the etiology of the biological responses was observed. GSEA was able to capture both transient and sustained transcriptional enrichment events facilitating differentiation between adaptive versus longer term molecular responses. When combined with benchmark dose (BMD) modeling of gene expression data from key drivers of biological enrichment, GSEA facilitated characterization of dose ranges required for enrichment of biologically relevant molecular signaling pathways, and promoted comparison of the activation dose ranges required for individual pathways. Median transcriptional BMD values were calculated for the most sensitive enriched pathway as well as the overall median BMD value for key gene members of significantly enriched pathways, and both were observed to be good estimates of the most sensitive apical endpoint BMD value. Together, these efforts support the application of GSEA to qualitative and quantitative human health risk assessment. Published by Oxford University Press on behalf of the Society of Toxicology 2017. This work is written by US Government employees and is in the public domain in the US.

  11. Rigorous-two-Steps scheme of TRIPOLI-4® Monte Carlo code validation for shutdown dose rate calculation

    NASA Astrophysics Data System (ADS)

    Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime

    2017-09-01

    After fission or fusion reactor shutdown, the activated structure emits decay photons. For maintenance operations the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. These schemes are widely used in fusion applications, and more precisely for the ITER tokamak. This paper presents the rigorous-two-steps scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out, and the results are in good agreement with those of the other participants.

  12. An integrated data envelopment analysis-artificial neural network approach for benchmarking of bank branches

    NASA Astrophysics Data System (ADS)

    Shokrollahpour, Elsa; Hosseinzadeh Lotfi, Farhad; Zandieh, Mostafa

    2016-06-01

    Efficiency and quality of services are crucial to today's banking industries. Competition in this sector has become increasingly intense as a result of fast improvements in technology. Therefore, performance analysis of the banking sector attracts more attention these days. Even though data envelopment analysis (DEA) is a pioneering approach in the literature as an efficiency measurement and benchmark-finding tool, it is unable to suggest possible future benchmarks. The drawback is that the benchmarks it provides may still be less efficient than the more advanced future benchmarks. To address this weakness, an artificial neural network is integrated with DEA in this paper to calculate the relative efficiency and more reliable benchmarks of the branches of an Iranian commercial bank. Each branch could therefore have a strategy to improve its efficiency and eliminate the causes of inefficiency based on a 5-year forecast.
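
    The DEA half of the approach can be illustrated with the standard input-oriented CCR envelopment model, solved as one small linear program per branch. The branch inputs and outputs below are invented, and the neural-network stage that forecasts future benchmarks is omitted; this is only a sketch of how relative efficiency scores are obtained.

        # Input-oriented CCR DEA: min theta s.t. X'lambda <= theta*x_o, Y'lambda >= y_o, lambda >= 0.
        import numpy as np
        from scipy.optimize import linprog

        # Rows are branches; inputs (e.g. staff, operating cost) and outputs (e.g. loans, deposits).
        X = np.array([[15.0, 400.0], [20.0, 550.0], [12.0, 380.0], [18.0, 500.0]])        # inputs
        Y = np.array([[300.0, 900.0], [320.0, 1000.0], [290.0, 850.0], [400.0, 1200.0]])  # outputs

        n = X.shape[0]
        for o in range(n):
            c = np.zeros(n + 1)
            c[0] = 1.0                                             # variables: [theta, lambda_1..n]
            A_in = np.hstack([-X[o].reshape(-1, 1), X.T])          # sum_j lambda_j*x_j <= theta*x_o
            A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])   # sum_j lambda_j*y_j >= y_o
            res = linprog(c,
                          A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.concatenate([np.zeros(X.shape[1]), -Y[o]]),
                          bounds=[(0, None)] * (n + 1))
            print(f"Branch {o + 1}: efficiency = {res.x[0]:.3f}")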

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandor, Debra; Chung, Donald; Keyser, David

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  14. Development of Benchmark Examples for Quasi-Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  15. Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Kruger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  16. Development and Application of Benchmark Examples for Mode II Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  17. Benchmarking and validation of a Geant4-SHADOW Monte Carlo simulation for dose calculations in microbeam radiation therapy.

    PubMed

    Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael

    2014-05-01

    Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.

  18. An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm

    NASA Astrophysics Data System (ADS)

    Jacques, Robert; McNutt, Todd

    2014-03-01

    Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to the faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position and direction sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte-Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes; reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e. < 0.1 g / cm3) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density scaled heterogeneity correction with a position and direction sensitive density filter. This method significantly improved the accuracy of the GPU based algorithm reaching the accuracy levels of Monte Carlo based methods with performance in a few tenths of seconds per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
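
    The central idea of HCS, filtering density in a position- and direction-sensitive way so that dose near interfaces can fall off and build back up, can be caricatured in one dimension with a first-order recursive (exponential) filter applied along the beam direction. The smoothing constant and density profile below are illustrative only; the published filter is multivariate and tuned against Monte Carlo data.

        # 1-D first-order recursive filter producing an "effective density" along a ray.
        import numpy as np

        def effective_density(density, alpha):
            """rho_eff[i] = alpha * rho[i] + (1 - alpha) * rho_eff[i-1]."""
            rho_eff = np.empty_like(density)
            rho_eff[0] = density[0]
            for i in range(1, len(density)):
                rho_eff[i] = alpha * density[i] + (1.0 - alpha) * rho_eff[i - 1]
            return rho_eff

        # Water (1.0 g/cm^3) with a low-density lung-like slab (0.26 g/cm^3) in the middle.
        voxels = np.concatenate([np.full(20, 1.0), np.full(20, 0.26), np.full(20, 1.0)])
        print(effective_density(voxels, alpha=0.3)[18:26])  # lag at the interface mimics re-buildup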

  19. Radiation dose in coronary angiography and intervention: initial results from the establishment of a multi-centre diagnostic reference level in Queensland public hospitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowhurst, James A, E-mail: jimcrowhurst@hotmail.com; School of Medicine, University of Queensland, St. Lucia, Brisbane, Queensland; Whitby, Mark

    Radiation dose to patients undergoing invasive coronary angiography (ICA) is relatively high. Guidelines suggest that a local benchmark or diagnostic reference level (DRL) be established for these procedures. This study sought to create a DRL for ICA procedures in Queensland public hospitals. Data were collected for all Cardiac Catheter Laboratories in Queensland public hospitals. Data were collected for diagnostic coronary angiography (CA) and single-vessel percutaneous intervention (PCI) procedures. Dose area product (P_KA), skin surface entrance dose (K_AR), fluoroscopy time (FT), and patient height and weight were collected for 3 months. The DRL was set from the 75th percentile of the P_KA. 2590 patients were included in the CA group, where the median FT was 3.5 min (inter-quartile range = 2.3–6.1), median K_AR = 581 mGy (374–876), and median P_KA = 3908 µGy·m² (2489–5865); DRL = 5865 µGy·m². 947 patients were included in the PCI group, where the median FT was 11.2 min (7.7–17.4), median K_AR = 1501 mGy (928–2224), and median P_KA = 8736 µGy·m² (5449–12,900); DRL = 12,900 µGy·m². This study established a benchmark for radiation dose for diagnostic and interventional coronary angiography in Queensland public facilities.
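
    Operationally, the DRL is the 75th percentile of the collected dose-area products for each procedure type, as sketched below with hypothetical P_KA values (the study's actual cohorts contained 2590 CA and 947 PCI patients).

        # Local DRL as the 75th percentile of dose-area product; values are hypothetical.
        import numpy as np

        p_ka_ca = np.array([2100, 3500, 4100, 2800, 5200, 3900, 6100, 4500], dtype=float)
        median = np.median(p_ka_ca)
        q1, q3 = np.percentile(p_ka_ca, [25, 75])
        drl = q3                                     # DRL = 75th percentile
        print(f"Median P_KA = {median:.0f} uGy*m^2 (IQR {q1:.0f}-{q3:.0f}), DRL = {drl:.0f} uGy*m^2")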

  20. Validation of a commercial TPS based on the VMC(++) Monte Carlo code for electron beams: commissioning and dosimetric comparison with EGSnrc in homogeneous and heterogeneous phantoms.

    PubMed

    Ferretti, A; Martignano, A; Simonato, F; Paiusco, M

    2014-02-01

    The aim of the present work was the validation of the VMC(++) Monte Carlo (MC) engine implemented in the Oncentra Masterplan (OMTPS) and used to calculate the dose distribution produced by the electron beams (energy 5-12 MeV) generated by the linear accelerator (linac) Primus (Siemens), shaped by a digital variable applicator (DEVA). The BEAMnrc/DOSXYZnrc (EGSnrc package) MC model of the linac head was used as a benchmark. Commissioning results for both MC codes were evaluated by means of 1D Gamma Analysis (2%, 2 mm), calculated with a home-made Matlab (The MathWorks) program, comparing the calculations with the measured profiles. The results of the commissioning of OMTPS were good [average gamma index (γ) > 97%]; some mismatches were found with large beams (size ≥ 15 cm). The optimization of the BEAMnrc model required increasing the beam exit window to match the calculated and measured profiles (final average γ > 98%). Then OMTPS dose distribution maps were compared with DOSXYZnrc with a 2D Gamma Analysis (3%, 3 mm), in 3 virtual water phantoms: (a) with an air step, (b) with an air insert, and (c) with a bone insert. The OMTPS and EGSnrc dose distributions with the air-water step phantom were in very high agreement (γ ∼ 99%), while for heterogeneous phantoms there were differences of about 9% in the air insert and of about 10-15% in the bone region. This is due to the Masterplan implementation of VMC(++) which reports the dose as "dose to water", instead of "dose to medium". Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
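
    The 1D gamma analysis used for the commissioning comparison can be sketched as a direct implementation of the standard gamma index, which combines a dose-difference criterion with a distance-to-agreement criterion. The profiles below are synthetic placeholders, and this brute-force version omits refinements (local normalisation, sub-grid interpolation) that a dedicated tool would include.

        # 1-D gamma index (dose difference / distance-to-agreement); profiles are synthetic.
        import numpy as np

        def gamma_pass_rate(x, dose_ref, dose_eval, dd=0.02, dta=2.0):
            """Fraction of reference points with gamma <= 1 (global dose criterion `dd`,
            distance-to-agreement `dta` in the same units as `x`)."""
            d_norm = dd * dose_ref.max()
            gammas = []
            for xi, di in zip(x, dose_ref):
                g2 = ((x - xi) / dta) ** 2 + ((dose_eval - di) / d_norm) ** 2
                gammas.append(np.sqrt(g2.min()))
            return float((np.asarray(gammas) <= 1.0).mean())

        x = np.linspace(0.0, 100.0, 501)                        # position in mm
        measured = np.exp(-((x - 50.0) / 20.0) ** 2)            # synthetic "measured" profile
        calculated = 1.01 * np.exp(-((x - 50.5) / 20.0) ** 2)   # synthetic "calculated" profile
        print(f"Gamma (2%, 2 mm) pass rate: {gamma_pass_rate(x, measured, calculated):.1%}")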

  1. CLEAR: Cross-Layer Exploration for Architecting Resilience

    DTIC Science & Technology

    2017-03-01

    benchmark analysis, also provides cost-effective solutions (~1% additional energy cost for the same 50× improvement). This paper addresses the...core (OoO-core) [Wang 04], across 18 benchmarks. Such extensive exploration enables us to conclusively answer the above cross-layer resilience...analysis of the effects of soft errors on application benchmarks, provides a highly effective soft error resilience approach. 3. The above

  2. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    NASA Technical Reports Server (NTRS)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc™ and MD Nastran™ was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.

  3. Contributions to Integral Nuclear Data in ICSBEP and IRPhEP since ND 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Briggs, J. Blair; Gulliford, Jim

    2016-09-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the international nuclear data community at ND2013. Since ND2013, integral benchmark data that are available for nuclear data testing have continued to increase. The status of the international benchmark efforts and the latest contributions to integral nuclear data for testing is discussed. Select benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2013 are highlighted. The 2015 edition of the ICSBEP Handbook now contains 567 evaluations with benchmark specifications for 4,874 critical, near-critical, or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points apiece, and 207 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. The 2015 edition of the IRPhEP Handbook contains data from 143 different experimental series that were performed at 50 different nuclear facilities. Currently 139 of the 143 evaluations are published as approved benchmarks with the remaining four evaluations published in draft format only. Measurements found in the IRPhEP Handbook include criticality, buckling and extrapolation length, spectral characteristics, reactivity effects, reactivity coefficients, kinetics, reaction-rate distributions, power distributions, isotopic compositions, and/or other miscellaneous types of measurements for various types of reactor systems. Annual technical review meetings for both projects were held in April 2016; additional approved benchmark evaluations will be included in the 2016 editions of these handbooks.

  4. Benchmarking of MCNP for calculating dose rates at an interim storage facility for nuclear waste.

    PubMed

    Heuel-Fabianek, Burkhard; Hille, Ralf

    2005-01-01

    During the operation of research facilities at Research Centre Jülich, Germany, nuclear waste is stored in drums and other vessels in an interim storage building on-site, which has a concrete shielding at the side walls. Owing to the lack of a well-defined source, measured gamma spectra were unfolded to determine the photon flux on the surface of the containers. The dose rate simulation, including the effects of skyshine, using the Monte Carlo transport code MCNP is compared with the measured dosimetric data at some locations in the vicinity of the interim storage building. The MCNP data for direct radiation confirm the data calculated using a point-kernel method. However, a comparison of the modelled dose rates for direct radiation and skyshine with the measured data demonstrate the need for a more precise definition of the source. Both the measured and the modelled dose rates verified the fact that the legal limits (<1 mSv a(-1)) are met in the area outside the perimeter fence of the storage building to which members of the public have access. Using container surface data (gamma spectra) to define the source may be a useful tool for practical calculations and additionally for benchmarking of computer codes if the discussed critical aspects with respect to the source can be addressed adequately.

  5. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  6. U.S. EPA Superfund Program's Policy for Risk and Dose Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Stuart

    2008-01-15

    The Environmental Protection Agency (EPA) Office of Superfund Remediation and Technology Innovation (OSRTI) has primary responsibility for implementing the long-term (non-emergency) portion of a key U.S. law regulating cleanup: the Comprehensive Environmental Response, Compensation and Liability Act, CERCLA, nicknamed 'Superfund'. The purpose of the Superfund program is to protect human health and the environment over the long term from releases or potential releases of hazardous substances from abandoned or uncontrolled hazardous waste sites. The focus of this paper is on risk and dose assessment policies and tools for addressing radioactively contaminated sites by the Superfund program. EPA has almost completed two risk assessment tools that are particularly relevant to decommissioning activities conducted under CERCLA authority. These are the: 1. Building Preliminary Remediation Goals for Radionuclides (BPRG) electronic calculator, and 2. Radionuclide Outdoor Surfaces Preliminary Remediation Goals (SPRG) electronic calculator. EPA developed the BPRG calculator to help standardize the evaluation and cleanup of radiologically contaminated buildings at which risk is being assessed for occupancy. BPRGs are radionuclide concentrations in dust, air and building materials that correspond to a specified level of human cancer risk. The intent of the SPRG calculator is to address hard outside surfaces such as building slabs, outside building walls, sidewalks and roads. SPRGs are radionuclide concentrations in dust and hard outside surface materials. EPA is also developing the 'Radionuclide Ecological Benchmark' calculator. This calculator provides biota concentration guides (BCGs), also known as ecological screening benchmarks, for use in ecological risk assessments at CERCLA sites. This calculator is intended to develop ecological benchmarks as part of the EPA guidance 'Ecological Risk Assessment Guidance for Superfund: Process for Designing and Conducting Ecological Risk Assessments'. The calculator develops ecological benchmarks for ionizing radiation based on cell death only.

  7. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory

    2016-05-09

    As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  8. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory

    2016-05-02

    As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  9. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    PubMed

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  10. GraphBench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R.; Hong, Seokyong; Lee, Sangkeun

    2016-06-01

    GraphBench is a benchmark suite for graph pattern mining and graph analysis systems. The benchmark suite is a significant addition to conducting apples-to-apples comparisons of graph analysis software (databases, in-memory tools, triple stores, etc.)

  11. Practical examples of modeling choices and their consequences for risk assessment

    EPA Science Inventory

    Although benchmark dose (BMD) modeling has become the preferred approach to identifying a point of departure (POD) over the No Observed Adverse Effect Level, there remain challenges to its application in human health risk assessment. BMD modeling, as currently implemented by the...

  12. The U. S. Environmental Protection Agency's inhalation RfD methodology: Risk assessment for air toxics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarabek, A.M.; Menache, M.G.; Overton, J.H. Jr.

    1990-10-01

    The U.S. Environmental Protection Agency (U.S. EPA) has advocated the establishment of general and scientific guidelines for the evaluation of toxicological data and their use in deriving benchmark values to protect exposed populations from adverse health effects. The Agency's reference dose (RfD) methodology for deriving benchmark values for noncancer toxicity originally addressed risk assessment of oral exposures. This paper presents a brief background on the development of the inhalation reference dose (RfDi) methodology, including concepts and issues related to addressing the dynamics of the respiratory system as the portal of entry. Different dosimetric adjustments are described that were incorporated into the methodology to account for the nature of the inhaled agent (particle or gas) and the site of the observed toxic effects (respiratory or extra-respiratory). Impacts of these adjustments on the extrapolation of toxicity data of inhaled agents for human health risk assessment and future research directions are also discussed.

  13. U. S. Environmental Protection Agency's inhalation RFD methodology: Risk assessment for air toxics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarabek, A.M.; Menache, M.G.; Overton, J.H.

    1989-01-01

    The U.S. Environmental Protection Agency (U.S. EPA) has advocated the establishment of general and scientific guidelines for the evaluation of toxicological data and their use in deriving benchmark values to protect exposed populations from adverse health effects. The Agency's reference dose (RfD) methodology for deriving benchmark values for noncancer toxicity originally addressed risk assessment of oral exposures. The paper presents a brief background on the development of the inhalation reference dose (RFDi) methodology, including concepts and issues related to addressing the dynamics of the respiratory system as the portal of entry. Different dosimetric adjustments are described that were incorporated into the methodology to account for the nature of the inhaled agent (particle or gas) and the site of the observed toxic effects (respiratory or extrarespiratory). Impacts of these adjustments on the extrapolation of toxicity data of inhaled agents for human health risk assessment and future research directions are also discussed.

  14. Benchmarking the performance of fixed-image receptor digital radiography systems. Part 2: system performance metric.

    PubMed

    Lee, Kam L; Bernardo, Michael; Ireland, Timothy A

    2016-06-01

    This is part two of a two-part study in benchmarking system performance of fixed digital radiographic systems. The study compares the system performance of seven fixed digital radiography systems based on quantitative metrics like modulation transfer function (sMTF), normalised noise power spectrum (sNNPS), detective quantum efficiency (sDQE) and entrance surface air kerma (ESAK). It was found that the most efficient image receptors (greatest sDQE) were not necessarily operating at the lowest ESAK. In part one of this study, sMTF is shown to depend on system configuration while sNNPS is shown to be relatively consistent across systems. Systems are ranked on their signal-to-noise ratio efficiency (sDQE) and their ESAK. Systems using the same equipment configuration do not necessarily have the same system performance. This implies radiographic practice at the site will have an impact on the overall system performance. In general, systems are more dose efficient at low dose settings.

  15. Experimental depth dose curves of a 67.5 MeV proton beam for benchmarking and validation of Monte Carlo simulation

    PubMed Central

    Faddegon, Bruce A.; Shin, Jungwook; Castenada, Carlos M.; Ramos-Méndez, José; Daftari, Inder K.

    2015-01-01

    Purpose: To measure depth dose curves for a 67.5 ± 0.1 MeV proton beam for benchmarking and validation of Monte Carlo simulation. Methods: Depth dose curves were measured in 2 beam lines. Protons in the raw beam line traversed a Ta scattering foil, 0.1016 or 0.381 mm thick, a secondary emission monitor comprised of thin Al foils, and a thin Kapton exit window. The beam energy and peak width and the composition and density of material traversed by the beam were known with sufficient accuracy to permit benchmark quality measurements. Diodes for charged particle dosimetry from two different manufacturers were used to scan the depth dose curves with 0.003 mm depth reproducibility in a water tank placed 300 mm from the exit window. Depth in water was determined with an uncertainty of 0.15 mm, including the uncertainty in the water equivalent depth of the sensitive volume of the detector. Parallel-plate chambers were used to verify the accuracy of the shape of the Bragg peak and the peak-to-plateau ratio measured with the diodes. The uncertainty in the measured peak-to-plateau ratio was 4%. Depth dose curves were also measured with a diode for a Bragg curve and treatment beam spread out Bragg peak (SOBP) on the beam line used for eye treatment. The measurements were compared to Monte Carlo simulation done with geant4 using topas. Results: The 80% dose at the distal side of the Bragg peak for the thinner foil was at 37.47 ± 0.11 mm (average of measurement with diodes from two different manufacturers), compared to the simulated value of 37.20 mm. The 80% dose for the thicker foil was at 35.08 ± 0.15 mm, compared to the simulated value of 34.90 mm. The measured peak-to-plateau ratio was within one standard deviation experimental uncertainty of the simulated result for the thinnest foil and two standard deviations for the thickest foil. It was necessary to include the collimation in the simulation, which had a more pronounced effect on the peak-to-plateau ratio for the thicker foil. The treatment beam, being unfocussed, had a broader Bragg peak than the raw beam. A 1.3 ± 0.1 MeV FWHM peak width in the energy distribution was used in the simulation to match the Bragg peak width. An additional 1.3–2.24 mm of water in the water column was required over the nominal values to match the measured depth penetration. Conclusions: The proton Bragg curve measured for the 0.1016 mm thick Ta foil provided the most accurate benchmark, having a low contribution of proton scatter from upstream of the water tank. The accuracy was 0.15% in measured beam energy and 0.3% in measured depth penetration at the Bragg peak. The depth of the distal edge of the Bragg peak in the simulation fell short of measurement, suggesting that the mean ionization potential of water is 2–5 eV higher than the 78 eV used in the stopping power calculation for the simulation. The eye treatment beam line depth dose curves provide validation of Monte Carlo simulation of a Bragg curve and SOBP with 4%/2 mm accuracy. PMID:26133619
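
    The distal 80% depth quoted for the Bragg curves can be extracted from a measured depth-dose scan by interpolating on the falling edge of the peak. The sketch below uses a crude synthetic curve rather than the measured data.

        # Depth of the 80% dose level on the distal edge of a Bragg peak (synthetic curve).
        import numpy as np

        depth = np.linspace(0.0, 45.0, 901)                      # mm of water
        dose = 0.3 + 0.7 * np.exp(-((depth - 36.5) / 1.2) ** 2)  # crude Bragg-peak shape
        dose[depth > 38.5] = 0.0                                 # distal fall-off

        peak = int(np.argmax(dose))
        target = 0.8 * dose[peak]
        distal = dose[peak:]
        i = int(np.argmax(distal < target))                      # first point below 80% past the peak
        d0, d1 = depth[peak + i - 1], depth[peak + i]
        y0, y1 = distal[i - 1], distal[i]
        r80 = d0 + (target - y0) * (d1 - d0) / (y1 - y0)         # linear interpolation
        print(f"Distal 80% depth: {r80:.2f} mm")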

  16. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    NASA Astrophysics Data System (ADS)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments are vaguely assumed to validate the nuclear data below 14 MeV. However, no precise studies exist now. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to estimate the performance of benchmark experiments in general. As a result of thought experiments with a point detector, the sensitivity to a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (named (A)) that produce the neutrons conveying the direct contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis performed in advance would make clear how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  17. Dose-Response Analysis of RNA-Seq Profiles in Archival ...

    EPA Pesticide Factsheets

    Use of archival resources has been limited to date by inconsistent methods for genomic profiling of degraded RNA from formalin-fixed paraffin-embedded (FFPE) samples. RNA-sequencing offers a promising way to address this problem. Here we evaluated transcriptomic dose responses using RNA-sequencing in paired FFPE and frozen (FROZ) samples from two archival studies in mice, one 20 years old. Experimental treatments included 3 different doses of di(2-ethylhexyl)phthalate or dichloroacetic acid for the recently archived and older studies, respectively. Total RNA was ribo-depleted and sequenced using the Illumina HiSeq platform. In the recently archived study, FFPE samples had 35% lower total counts compared to FROZ samples but high concordance in fold-change values of differentially expressed genes (DEGs) (r2 = 0.99), highly enriched pathways (90% overlap with FROZ), and benchmark dose estimates for preselected target genes (2% difference vs FROZ). In contrast, older FFPE samples had markedly lower total counts (3% of FROZ) and poor concordance in global DEGs and pathways. However, counts from FFPE and FROZ samples still positively correlated (r2 = 0.84 across all transcripts) and showed comparable dose responses for more highly expressed target genes. These findings highlight potential applications and issues in using RNA-sequencing data from FFPE samples. Recently archived FFPE samples were highly similar to FROZ samples in sequencing q

  18. Comment on ‘egs_brachy: a versatile and fast Monte Carlo code for brachytherapy’

    NASA Astrophysics Data System (ADS)

    Yegin, Gultekin

    2018-02-01

    In a recent paper, Chamberland et al (2016 Phys. Med. Biol. 61 8214) develop a new Monte Carlo code called egs_brachy for brachytherapy treatments. It is based on EGSnrc and written in the C++ programming language. In order to benchmark the egs_brachy code, the authors use it in various test case scenarios in which complex geometry conditions exist. Another EGSnrc-based brachytherapy dose calculation engine, BrachyDose, is used for dose comparisons. The authors fail to prove that egs_brachy can produce reasonable dose values for brachytherapy sources in a given medium. The dose comparisons in the paper are erroneous and misleading. egs_brachy should not be used in any further research studies unless and until all the potential bugs are fixed in the code.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dourson, M.L.

    The quantitative procedures associated with noncancer risk assessment include reference dose (RfD), benchmark dose, and severity modeling. The RfD, which is part of the EPA risk assessment guidelines, is an estimate of an exposure level that is likely to be without any health risk to sensitive individuals. The RfD requires two major judgments: the first is the choice of a critical effect(s) and its No Observed Adverse Effect Level (NOAEL); the second is the choice of an uncertainty factor. This paper discusses major assumptions and limitations of the RfD model.

  20. A broad scope knowledge based model for optimization of VMAT in esophageal cancer: validation and assessment of plan quality among different treatment centers.

    PubMed

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca

    2015-10-31

    To evaluate the performance of a broad scope model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 patients previously treated in two different institutions was selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients from the same institution and from another clinic that did not provide patients for the training phase. The automated plans were compared against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3%) the reference plans failed to respect the constraints while the model-based plans succeeded. In only 3 cases (<0.5%) did the reference plans pass the criteria while the model-based plans failed. In 5.3% of the cases both groups of plans failed, and in the remaining cases both passed the tests. Plans were optimised using a broad scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and could encourage its application in clinical practice.

  1. Indoor phthalate concentration in residential apartments in Chongqing, China: Implications for preschool children's exposure and risk assessment

    NASA Astrophysics Data System (ADS)

    Bu, Zhongming; Zhang, Yinping; Mmereki, Daniel; Yu, Wei; Li, Baizhan

    2016-02-01

    Six phthalates - dimethyl phthalate (DMP), diethyl phthalate (DEP), di(isobutyl) phthalate (DiBP), di(n-butyl) phthalate (DnBP), butyl benzyl phthalate (BBzP) and di(2-ethylhexyl) phthalate (DEHP) - in indoor gas-phase and dust samples were measured in thirty residential apartments for the first time in Chongqing, China. Monte-Carlo simulation was used to estimate preschool children's exposure via inhalation, non-dietary ingestion and dermal absorption based on gas-phase and dust concentrations. Risk was assessed by comparing the modeled exposure doses with child-specific benchmarks specified in California's Proposition 65. The detection frequency for all the targeted phthalates was more than 80% except for BBzP. DMP was the most predominant compound in the gas phase (median = 0.91 μg/m3 and 0.82 μg/m3 in living rooms and bedrooms, respectively), and DEHP was the most predominant compound in the dust samples (median = 1543 μg/g and 1450 μg/g in living rooms and bedrooms, respectively). Correlation analysis suggests that indoor DiBP and DnBP might come from the same emission sources. The simulations showed that the median DEHP daily intake was 3.18-4.28 μg/day/kg-bw across all age groups, the highest among the targeted phthalates. The risk assessment indicated that the exposure doses of DnBP and DEHP exceeded the child-specific benchmarks in more than 90% of preschool children in Chongqing. Therefore, from a children's health perspective, efforts should focus on controlling indoor phthalate concentrations and exposures.
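
    As a rough illustration of the kind of Monte-Carlo exposure estimate described above, the sketch below samples hypothetical distributions for air and dust concentrations, intake rates, and body weight, and propagates them to a daily intake. All parameter values are illustrative placeholders, not those used in the study, and the dermal pathway is omitted for brevity.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000  # number of Monte Carlo draws

      # Hypothetical input distributions for a preschool child (placeholder values).
      c_air = rng.lognormal(mean=np.log(0.9), sigma=0.5, size=n)      # gas-phase conc., ug/m3
      c_dust = rng.lognormal(mean=np.log(1500.0), sigma=0.7, size=n)  # dust conc., ug/g
      inhal_rate = rng.normal(8.0, 1.0, size=n)                       # m3/day
      dust_ingest = rng.normal(0.06, 0.02, size=n).clip(min=0.0)      # g/day
      body_weight = rng.normal(16.0, 2.0, size=n)                     # kg

      # Daily intake (ug/kg-bw/day) from inhalation and non-dietary dust ingestion.
      intake = (c_air * inhal_rate + c_dust * dust_ingest) / body_weight
      print(f"median intake: {np.median(intake):.2f} ug/kg-bw/day")
      print(f"95th percentile: {np.percentile(intake, 95):.2f} ug/kg-bw/day")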

  2. Recent Additions for 1998

    EPA Science Inventory

    December 22, 1998
    Benchmark Dose Software

    December 16, 1998
    Quantitative analysis of antimicrobial use on British dairy farms.

    PubMed

    Hyde, Robert M; Remnant, John G; Bradley, Andrew J; Breen, James E; Hudson, Christopher D; Davies, Peers L; Clarke, Tom; Critchell, Yvonne; Hylands, Matthew; Linton, Emily; Wood, Erika; Green, Martin J

    2017-12-23

    Antimicrobial resistance has been reported to represent a growing threat to both human and animal health, and concerns have been raised around levels of antimicrobial usage (AMU) within the livestock industry. To provide a benchmark for dairy cattle AMU and identify factors associated with high AMU, data from a convenience sample of 358 dairy farms were analysed using both mass-based and dose-based metrics following standard methodologies proposed by the European Surveillance of Veterinary Antimicrobial Consumption project. Metrics calculated were mass (mg) of antimicrobial active ingredient per population correction unit (mg/PCU), defined daily doses (DDDvet) and defined course doses (DCDvet). AMU on dairy farms ranged from 0.36 to 97.79 mg/PCU, with a median and mean of 15.97 and 20.62 mg/PCU, respectively. Dose-based analysis ranged from 0.05 to 20.29 DDDvet, with a median and mean of 4.03 and 4.60 DDDvet, respectively. Multivariable analysis highlighted that usage of antibiotics via oral and footbath routes increased the odds of a farm being in the top quartile (>27.9 mg/PCU) of antimicrobial users. While dairy cattle farm AMU appeared to be lower than the UK livestock average, there was a selection of outlying farms with extremely high AMU, with the top 25 per cent of farms contributing greater than 50 per cent of AMU by mass. Identification of these high use farms may enable targeted AMU reduction strategies and facilitate a significant reduction in overall dairy cattle AMU. © British Veterinary Association (unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
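
    The mass-based metric mentioned above divides the total mass of antimicrobial active ingredient used by the population correction unit (an estimate of the biomass at risk of treatment). A minimal sketch follows; the product quantities and standard weights are hypothetical placeholders rather than ESVAC reference values or study data.

      # Minimal sketch of mass-based AMU benchmarking (mg/PCU).
      products = [
          {"mg_active": 250_000.0},   # total mg of active ingredient used on the farm (hypothetical)
          {"mg_active": 120_000.0},
      ]
      animals = [
          {"n": 120, "pcu_kg": 425.0},   # adult dairy cattle, assumed standard weight
          {"n": 40,  "pcu_kg": 140.0},   # youngstock, assumed standard weight
      ]

      total_mg = sum(p["mg_active"] for p in products)
      total_pcu_kg = sum(a["n"] * a["pcu_kg"] for a in animals)
      print(f"AMU = {total_mg / total_pcu_kg:.2f} mg/PCU")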

  3. Extension of PENELOPE to protons: simulation of nuclear reactions and benchmark with Geant4.

    PubMed

    Sterpin, E; Sorriaux, J; Vynckier, S

    2013-11-01

    Describing the implementation of nuclear reactions in the extension of the Monte Carlo code (MC) PENELOPE to protons (PENH) and benchmarking with Geant4. PENH is based on mixed-simulation mechanics for both elastic and inelastic electromagnetic collisions (EM). The adopted differential cross sections for EM elastic collisions are calculated using the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. Cross sections for EM inelastic collisions are computed within the relativistic Born approximation, using the Sternheimer-Liljequist model of the generalized oscillator strength. Nuclear elastic and inelastic collisions were simulated using explicitly the scattering analysis interactive dialin database for (1)H and ICRU 63 data for (12)C, (14)N, (16)O, (31)P, and (40)Ca. Secondary protons, alphas, and deuterons were all simulated as protons, with the energy adapted to ensure consistent range. Prompt gamma emission can also be simulated upon user request. Simulations were performed in a water phantom with nuclear interactions switched off or on and integral depth-dose distributions were compared. Binary-cascade and precompound models were used for Geant4. Initial energies of 100 and 250 MeV were considered. For cases with no nuclear interactions simulated, additional simulations in a water phantom with tight resolution (1 mm in all directions) were performed with FLUKA. Finally, integral depth-dose distributions for a 250 MeV energy were computed with Geant4 and PENH in a homogeneous phantom with, first, ICRU striated muscle and, second, ICRU compact bone. For simulations with EM collisions only, integral depth-dose distributions were within 1%/1 mm for doses higher than 10% of the Bragg-peak dose. For central-axis depth-dose and lateral profiles in a phantom with tight resolution, there are significant deviations between Geant4 and PENH (up to 60%/1 cm for depth-dose distributions). The agreement is much better with FLUKA, with deviations within 3%/3 mm. When nuclear interactions were turned on, agreement (within 6% before the Bragg-peak) between PENH and Geant4 was consistent with uncertainties on nuclear models and cross sections, whatever the material simulated (water, muscle, or bone). A detailed and flexible description of nuclear reactions has been implemented in the PENH extension of PENELOPE to protons, which utilizes a mixed-simulation scheme for both elastic and inelastic EM collisions, analogous to the well-established algorithm for electrons/positrons. PENH is compatible with all current main programs that use PENELOPE as the MC engine. The nuclear model of PENH is realistic enough to give dose distributions in fair agreement with those computed by Geant4.

  4. Extension of PENELOPE to protons: Simulation of nuclear reactions and benchmark with Geant4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterpin, E.; Sorriaux, J.; Vynckier, S.

    2013-11-15

    Purpose: Describing the implementation of nuclear reactions in the extension of the Monte Carlo code (MC) PENELOPE to protons (PENH) and benchmarking with Geant4. Methods: PENH is based on mixed-simulation mechanics for both elastic and inelastic electromagnetic collisions (EM). The adopted differential cross sections for EM elastic collisions are calculated using the eikonal approximation with the Dirac–Hartree–Fock–Slater atomic potential. Cross sections for EM inelastic collisions are computed within the relativistic Born approximation, using the Sternheimer–Liljequist model of the generalized oscillator strength. Nuclear elastic and inelastic collisions were simulated using explicitly the scattering analysis interactive dialin database for 1H and ICRU 63 data for 12C, 14N, 16O, 31P, and 40Ca. Secondary protons, alphas, and deuterons were all simulated as protons, with the energy adapted to ensure consistent range. Prompt gamma emission can also be simulated upon user request. Simulations were performed in a water phantom with nuclear interactions switched off or on and integral depth–dose distributions were compared. Binary-cascade and precompound models were used for Geant4. Initial energies of 100 and 250 MeV were considered. For cases with no nuclear interactions simulated, additional simulations in a water phantom with tight resolution (1 mm in all directions) were performed with FLUKA. Finally, integral depth–dose distributions for a 250 MeV energy were computed with Geant4 and PENH in a homogeneous phantom with, first, ICRU striated muscle and, second, ICRU compact bone. Results: For simulations with EM collisions only, integral depth–dose distributions were within 1%/1 mm for doses higher than 10% of the Bragg-peak dose. For central-axis depth–dose and lateral profiles in a phantom with tight resolution, there are significant deviations between Geant4 and PENH (up to 60%/1 cm for depth–dose distributions). The agreement is much better with FLUKA, with deviations within 3%/3 mm. When nuclear interactions were turned on, agreement (within 6% before the Bragg-peak) between PENH and Geant4 was consistent with uncertainties on nuclear models and cross sections, whatever the material simulated (water, muscle, or bone). Conclusions: A detailed and flexible description of nuclear reactions has been implemented in the PENH extension of PENELOPE to protons, which utilizes a mixed-simulation scheme for both elastic and inelastic EM collisions, analogous to the well-established algorithm for electrons/positrons. PENH is compatible with all current main programs that use PENELOPE as the MC engine. The nuclear model of PENH is realistic enough to give dose distributions in fair agreement with those computed by Geant4.

  5. Benchmark Analysis of Pion Contribution from Galactic Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Aghara, Sukesh K.; Blattnig, Steve R.; Norbury, John W.; Singleterry, Robert C., Jr.

    2008-01-01

    Shielding strategies for extended stays in space must include a comprehensive resolution of the secondary radiation environment inside the spacecraft induced by the primary, external radiation. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. A systematic verification and validation effort is underway for HZETRN, which is a space radiation transport code currently used by NASA. It performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. The question naturally arises of what contribution these particles make to space radiation. The pion has a production kinetic energy threshold of about 280 MeV. The Galactic cosmic ray (GCR) spectra, coincidentally, reach flux maxima in the hundreds of MeV range, corresponding to the pion production threshold. We present results from the Monte Carlo code MCNPX, showing the effect of lepton and meson physics when produced and transported explicitly in a GCR environment.

  6. Tolerance limits and methodologies for IMRT measurement-based verification QA: Recommendations of AAPM Task Group No. 218.

    PubMed

    Miften, Moyed; Olch, Arthur; Mihailidis, Dimitris; Moran, Jean; Pawlicki, Todd; Molineu, Andrea; Li, Harold; Wijesooriya, Krishni; Shi, Jie; Xia, Ping; Papanikolaou, Nikos; Low, Daniel A

    2018-04-01

    Patient-specific IMRT QA measurements are important components of processes designed to identify discrepancies between calculated and delivered radiation doses. Discrepancy tolerance limits are neither well defined nor consistently applied across centers. The AAPM TG-218 report provides a comprehensive review aimed at improving the understanding and consistency of these processes as well as recommendations for methodologies and tolerance limits in patient-specific IMRT QA. The performance of the dose difference/distance-to-agreement (DTA) and γ dose distribution comparison metrics is investigated. Measurement methods are reviewed and followed by a discussion of the pros and cons of each. Methodologies for absolute dose verification are discussed and new IMRT QA verification tools are presented. Literature on the expected or achievable agreement between measurements and calculations for different types of planning and delivery systems is reviewed and analyzed. Tests of vendor implementations of the γ verification algorithm employing benchmark cases are presented. Operational shortcomings that can reduce the γ tool accuracy and subsequent effectiveness for IMRT QA are described. Practical considerations including spatial resolution, normalization, dose threshold, and data interpretation are discussed. Published data on IMRT QA and the clinical experience of the group members are used to develop guidelines and recommendations on tolerance and action limits for IMRT QA. Steps to check failed IMRT QA plans are outlined. Recommendations on delivery methods, data interpretation, dose normalization, the use of γ analysis routines and choice of tolerance limits for IMRT QA are made with focus on detecting differences between calculated and measured doses via the use of robust analysis methods and an in-depth understanding of IMRT verification metrics. The recommendations are intended to improve the IMRT QA process and establish consistent and comparable IMRT QA criteria among institutions. © 2018 American Association of Physicists in Medicine.
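
    The γ comparison referenced above combines a dose-difference criterion with a distance-to-agreement criterion into a single index; a point passes when γ ≤ 1. The sketch below is a minimal 1D, global-normalization implementation with made-up Gaussian profiles standing in for measured and calculated doses; it is not the TG-218 reference implementation.

      import numpy as np

      def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol=3.0):
          # Global 1D gamma index: for each reference point, search the evaluated
          # distribution for the minimum combined dose-difference / distance-to-agreement
          # metric. dose_tol is a fraction of the maximum reference dose; dist_tol is in
          # the same units as x (e.g. mm).
          dose_norm = dose_tol * d_ref.max()
          gamma = np.empty_like(d_ref)
          for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
              dose_term = (d_eval - dr) / dose_norm
              dist_term = (x_eval - xr) / dist_tol
              gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()
          return gamma

      # Hypothetical measured vs calculated profiles on a 1 mm grid.
      x = np.arange(0.0, 100.0, 1.0)
      calc = np.exp(-((x - 50.0) / 20.0) ** 2)
      meas = np.exp(-((x - 50.5) / 20.0) ** 2) * 1.01
      g = gamma_index_1d(x, meas, x, calc)
      print(f"gamma passing rate (gamma <= 1): {100.0 * np.mean(g <= 1.0):.1f}%")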

  7. Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results

    NASA Technical Reports Server (NTRS)

    Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    In the last three years extensive performance data have been reported for parallel machines, based both on the NAS Parallel Benchmarks and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included the peak performance of the machine and the LINPACK n and n_1/2 values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format, and will present the data of our statistical analysis in detail.
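
    A minimal sketch of the kind of correlation analysis described above is given below: each benchmark's scores across machines are correlated with the machines' peak performance. The results matrix and peak values are fabricated for illustration and are not the reported NPB or LINPACK data.

      import numpy as np

      # Hypothetical results matrix: rows = machines, columns = benchmark scores
      # (e.g. LINPACK plus several NPB kernels); all values are made up.
      results = np.array([
          [10.0,  8.5,  7.9,  9.1],
          [25.0, 20.1, 18.7, 22.3],
          [40.0, 33.5, 30.2, 36.8],
          [55.0, 44.0, 41.5, 49.9],
      ])
      peak = np.array([12.0, 30.0, 48.0, 64.0])   # hypothetical peak performance per machine

      # Correlation of each benchmark with peak performance.
      for j in range(results.shape[1]):
          r = np.corrcoef(results[:, j], peak)[0, 1]
          print(f"benchmark {j}: correlation with peak = {r:.3f}")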

  8. Do fungi need to be included within environmental radiation protection assessment models?

    PubMed

    Guillén, J; Baeza, A; Beresford, N A; Wood, M D

    2017-09-01

    Fungi are used as biomonitors of forest ecosystems, having comparatively high uptakes of anthropogenic and naturally occurring radionuclides. However, whilst they are known to accumulate radionuclides they are not typically considered in radiological assessment tools for environmental (non-human biota) assessment. In this paper the total dose rate to fungi is estimated using the ERICA Tool, assuming different fruiting body geometries, a single ellipsoid and more complex geometries considering the different components of the fruit body and their differing radionuclide contents based upon measurement data. Anthropogenic and naturally occurring radionuclide concentrations from the Mediterranean ecosystem (Spain) were used in this assessment. The total estimated weighted dose rate was in the range 0.31-3.4 μGy/h (5th-95th percentile), similar to natural exposure rates reported for other wild groups. The total estimated dose was dominated by internal exposure, especially from 226Ra and 210Po. Differences in dose rate between complex geometries and a simple ellipsoid model were negligible. Therefore, the simple ellipsoid model is recommended to assess dose rates to fungal fruiting bodies. Fungal mycelium was also modelled assuming a long filament. Using these geometries, assessments for fungal fruiting bodies and mycelium under different scenarios (post-accident, planned release and existing exposure) were conducted, each being based on available monitoring data. The estimated total dose rate in each case was below the ERICA screening benchmark dose, except for the example post-accident existing exposure scenario (the Chernobyl Exclusion Zone) for which a dose rate in excess of 35 μGy/h was estimated for the fruiting body. Estimated mycelium dose rate in this post-accident existing exposure scenario was close to the 400 μGy/h benchmark for plants, although fungi are generally considered to be less radiosensitive than plants. Further research on appropriate mycelium geometries and their radionuclide content is required. Based on the assessments presented in this paper, there is no need to recommend that fungi should be added to the existing assessment tools and frameworks; if required some tools allow a geometry representing fungi to be created and used within a dose assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
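
    Internal dose-rate screening of the kind performed by tools such as ERICA reduces, in its simplest form, to multiplying each radionuclide's activity concentration in the organism by an internal dose conversion coefficient (DCC) and summing over radionuclides. The concentrations and DCC values in the sketch below are placeholders, not ERICA defaults or data from this study.

      # Minimal sketch of a summed internal dose-rate estimate (placeholder values).
      activity = {"Ra-226": 50.0, "Po-210": 120.0, "Cs-137": 30.0}          # Bq/kg fresh weight
      dcc_internal = {"Ra-226": 0.004, "Po-210": 0.003, "Cs-137": 0.0003}   # uGy/h per Bq/kg

      dose_rate = sum(activity[nuclide] * dcc_internal[nuclide] for nuclide in activity)
      print(f"internal dose rate: {dose_rate:.3f} uGy/h")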

  9. Issues in benchmarking human reliability analysis methods : a literature review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  10. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  11. THE EFFECT OF BACKGROUND SIGNAL AND ITS REPRESENTATION IN DECONVOLUTION OF EPR SPECTRA ON ACCURACY OF EPR DOSIMETRY IN BONE.

    PubMed

    Ciesielski, Bartlomiej; Marciniak, Agnieszka; Zientek, Agnieszka; Krefft, Karolina; Cieszyński, Mateusz; Boguś, Piotr; Prawdzik-Dampc, Anita

    2016-12-01

    This study examines the accuracy of EPR dosimetry in bone based on deconvolution of the experimental spectra into background (BG) and radiation-induced signal (RIS) components. The model RISs were represented by EPR spectra from irradiated enamel or bone powder; the model BG signals by EPR spectra of unirradiated bone samples or by simulated spectra. Samples of compact and trabecular bone were irradiated in the 30-270 Gy range and the intensities of their RISs were calculated using various combinations of those benchmark spectra. The relationships between the dose and the RIS were linear (R2 > 0.995), with practically no difference between results obtained when using signals from irradiated enamel or bone as the model RIS. Use of different experimental spectra for the model BG resulted in variations in the intercepts of the dose-RIS calibration lines, leading to systematic errors in reconstructed doses, in particular for high-BG samples of trabecular bone. These errors were reduced when simulated spectra instead of the experimental ones were used as the benchmark BG signal in the applied deconvolution procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
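
    One common way to carry out such a deconvolution is a linear least-squares fit of the measured spectrum to a weighted sum of the model BG and RIS spectra; the fitted RIS weight is then used for dose calibration. The sketch below uses synthetic spectra as stand-ins for measured EPR data and is not the procedure of the paper.

      import numpy as np

      # Synthetic model spectra (placeholders for measured EPR benchmark spectra).
      field = np.linspace(-5.0, 5.0, 401)
      bg_model = np.exp(-field**2)                      # model BG spectrum
      ris_model = -field * np.exp(-field**2 / 0.5)      # model RIS (derivative-like) spectrum

      # "Measured" spectrum = 1.0*BG + 2.5*RIS plus noise.
      measured = (1.0 * bg_model + 2.5 * ris_model
                  + np.random.default_rng(1).normal(0, 0.01, field.size))

      # Solve measured ~ a*BG + b*RIS; b is the RIS intensity used for dose calibration.
      A = np.column_stack([bg_model, ris_model])
      (a, b), *_ = np.linalg.lstsq(A, measured, rcond=None)
      print(f"fitted BG amplitude = {a:.2f}, RIS intensity = {b:.2f}")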

  12. Benchmark Dose Software Development and Maintenance Ten Berge Cxt Models

    EPA Science Inventory

    This report is intended to provide an overview of beta version 1.0 of the implementation of a concentration-time (CxT) model originally programmed and provided by Wil ten Berge (referred to hereafter as the ten Berge model). The recoding and development described here represent ...

  13. Genetic toxicology at the crossroads-from qualitative hazard evaluation to quantitative risk assessment.

    PubMed

    White, Paul A; Johnson, George E

    2016-05-01

    Applied genetic toxicology is undergoing a transition from qualitative hazard identification to quantitative dose-response analysis and risk assessment. To facilitate this change, the Health and Environmental Sciences Institute (HESI) Genetic Toxicology Technical Committee (GTTC) sponsored a workshop held in Lancaster, UK on July 10-11, 2014. The event included invited speakers from several institutions and the content was divided into three themes: (1) Point-of-departure Metrics for Quantitative Dose-Response Analysis in Genetic Toxicology; (2) Measurement and Estimation of Exposures for Better Extrapolation to Humans; and (3) The Use of Quantitative Approaches in Genetic Toxicology for human health risk assessment (HHRA). A host of pertinent issues were discussed relating to the use of in vitro and in vivo dose-response data, the development of methods for in vitro to in vivo extrapolation and approaches to use in vivo dose-response data to determine human exposure limits for regulatory evaluations and decision-making. This Special Issue, which was inspired by the workshop, contains a series of papers that collectively address topics related to the aforementioned themes. The Issue includes contributions that collectively evaluate, describe and discuss in silico, in vitro, in vivo and statistical approaches that are facilitating the shift from qualitative hazard evaluation to quantitative risk assessment. The use and application of the benchmark dose approach was a central theme in many of the workshop presentations and discussions, and the Special Issue includes several contributions that outline novel applications for the analysis and interpretation of genetic toxicity data. Although the contents of the Special Issue constitute an important step towards the adoption of quantitative methods for regulatory assessment of genetic toxicity, formal acceptance of quantitative methods for HHRA and regulatory decision-making will require consensus regarding the relationships between genetic damage and disease, and the concomitant ability to use genetic toxicity results per se. © Her Majesty the Queen in Right of Canada 2016. Reproduced with the permission of the Minister of Health.

  14. 75 FR 82115 - Self-Regulatory Organizations; National Securities Clearing Corporation; Notice of Filing of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-29

    ... to provide more efficient, cost-effective, and timely benchmarking and other market information about.... This market analysis (commonly referred to as ``benchmarking'') would allow users of this service to... determine to be most useful. The benchmarking portion of the service would provide information on an...

  15. Rationale of technical requirements for NRG-BR001: The first NCI-sponsored trial of SBRT for the treatment of multiple metastases.

    PubMed

    Al-Hallaq, Hania A; Chmura, Steven; Salama, Joseph K; Winter, Kathryn A; Robinson, Clifford G; Pisansky, Thomas M; Borges, Virginia; Lowenstein, Jessica R; McNulty, Susan; Galvin, James M; Followill, David S; Timmerman, Robert D; White, Julia R; Xiao, Ying; Matuszak, Martha M

    In 2014, the NRG Oncology Group initiated the first National Cancer Institute-sponsored, phase 1 clinical trial of stereotactic body radiation therapy (SBRT) for the treatment of multiple metastases in multiple organ sites (BR001; NCT02206334). The primary endpoint is to test the safety of SBRT for the treatment of 2 to 4 multiple lesions in several anatomic sites in a multi-institutional setting. Because of the technical challenges inherent to treating multiple lesions as their spatial separation decreases, we present the technical requirements for NRG-BR001 and the rationale for their selection. Patients with controlled primary tumors of breast, non-small cell lung, or prostate are eligible if they have 2 to 4 metastases distributed among 7 extracranial anatomic locations throughout the body. Prescription and organ-at-risk doses were determined by expert consensus. Credentialing requirements include (1) irradiation of the Imaging and Radiation Oncology Core phantom with SBRT, (2) submitting image guided radiation therapy case studies, and (3) planning the benchmark. Guidelines for navigating challenging planning cases including assessing composite dose are discussed. Dosimetric planning to multiple lesions receiving differing doses (45-50 Gy) and fractionation (3-5) while irradiating the same organs at risk is discussed, particularly for metastases in close proximity (≤5 cm). The benchmark case was selected to demonstrate the planning tradeoffs required to satisfy protocol requirements for 2 nearby lesions. Examples of passing benchmark plans exhibited a large variability in plan conformity. NRG-BR001 was developed using expert consensus on multiple issues from the dose fractionation regimen to the minimum image guided radiation therapy guidelines. Credentialing was tied to the task rather than the anatomic site to reduce its burden. Every effort was made to include a variety of delivery methods to reflect current SBRT technology. Although some simplifications were adopted, the successful completion of this trial will inform future designs of both national and institutional trials and would allow immediate clinical adoption of SBRT trials for oligometastases. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  16. Multi-axis dose accumulation of noninvasive image-guided breast brachytherapy through biomechanical modeling of tissue deformation using the finite element method

    PubMed Central

    Ghadyani, Hamid R.; Bastien, Adam D.; Lutz, Nicholas N.; Hepel, Jaroslaw T.

    2015-01-01

    Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR 192Ir brachytherapy treatments with the breast compressed and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Given that commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Material and methods: The model assumed the breast was under planar stress with values of 30 kPa for Young's modulus and 0.3 for Poisson's ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distances. Applicator size minimally affected target coverage until applicator size was less than the compressed target size. In all cases with an applicator larger than or equal to the compressed target size, >90% of the target was covered by >90% of the prescription dose. In all cases, dose coverage became less uniform as offset distance increased and average dose increased. This effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results within 2% and clinical observations over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over a range of clinical circumstances. These findings highlight the need for careful target localization and accurate identification of compression thickness and target offset. PMID:25829938
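
    The quoted material model (plane stress, E = 30 kPa, ν = 0.3) corresponds to a standard constitutive matrix relating in-plane strains to stresses; the sketch below builds only that matrix and is not the authors' finite element implementation.

      import numpy as np

      # Plane-stress elastic constitutive matrix with the parameter values quoted in
      # the abstract (Young's modulus E = 30 kPa, Poisson's ratio nu = 0.3).
      E = 30.0e3   # Pa
      nu = 0.3
      D = (E / (1.0 - nu**2)) * np.array([
          [1.0, nu,  0.0],
          [nu,  1.0, 0.0],
          [0.0, 0.0, (1.0 - nu) / 2.0],
      ])
      print(D)  # maps in-plane strains [exx, eyy, gxy] to stresses [sxx, syy, sxy]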

  17. Implementation and validation of a conceptual benchmarking framework for patient blood management.

    PubMed

    Kastner, Peter; Breznik, Nada; Gombotz, Hans; Hofmann, Axel; Schreier, Günter

    2015-01-01

    Public health authorities and healthcare professionals are obliged to ensure high-quality health services. Because of the high variability in the utilisation of blood and blood components, benchmarking is indicated in transfusion medicine. The objective was the implementation and validation of a benchmarking framework for Patient Blood Management (PBM) based on the report from the second Austrian benchmark trial. Core modules for automatic report generation have been implemented with KNIME (Konstanz Information Miner) and validated by comparing the output with the results of the second Austrian benchmark trial. Delta analysis shows a deviation of <0.1% for 95% of values (max. 1.4%). The framework provides a reliable tool for PBM benchmarking. The next step is technical integration with hospital information systems.

  18. Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples

    NASA Astrophysics Data System (ADS)

    Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.

    2012-12-01

    The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool available to quantify future scenarios. This places a great responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.

  19. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented and demonstrated for a commercial code. The examples are based on finite element models of the Mixed-Mode Bending (MMB) specimen. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, quasi-static benchmark examples were created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared. Good agreement between the results obtained from the automated propagation analysis and the benchmark results could be achieved by selecting input parameters that had previously been determined during analyses of mode I Double Cantilever Beam and mode II End Notched Flexure specimens. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  1. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Polyethylene Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Celik, Cihangir; McMahan, Kimberly L.

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 19, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc depositing energy in a Si solid state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  2. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Lead Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Celik, Cihangir; Isbell, Kimberly McMahan

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 13, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc, depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  3. Results of the Australasian (Trans-Tasman Oncology Group) radiotherapy benchmarking exercise in preparation for participation in the PORTEC-3 trial.

    PubMed

    Jameson, Michael G; McNamara, Jo; Bailey, Michael; Metcalfe, Peter E; Holloway, Lois C; Foo, Kerwyn; Do, Viet; Mileshkin, Linda; Creutzberg, Carien L; Khaw, Pearly

    2016-08-01

    Protocol deviations in Randomised Controlled Trials have been found to result in a significant decrease in survival and local control. In some cases, the magnitude of the detrimental effect can be larger than the anticipated benefits of the interventions involved. The implementation of appropriate quality assurance of radiotherapy measures for clinical trials has been found to result in fewer deviations from protocol. This paper reports on a benchmarking study conducted in preparation for the PORTEC-3 trial in Australasia. A benchmarking CT dataset was sent to each of the Australasian investigators, who were asked to contour and plan the case according to the trial protocol using their local treatment planning systems. These data were then sent back to the Trans-Tasman Oncology Group for collation and analysis. Thirty-three investigators from eighteen institutions across Australia and New Zealand took part in the study. The mean clinical target volume (CTV) was 383.4 (228.5-497.8) cm(3) and the mean dose to a reference gold standard CTV was 48.8 (46.4-50.3) Gy. Although there were some large differences in the contouring of the CTV and its constituent parts, these did not translate into large variations in dosimetry. Where individual investigators had deviations from the trial contouring protocol, feedback was provided. The results of this study will be used for comparison with the international QA study for the PORTEC-3 trial. © 2016 The Royal Australian and New Zealand College of Radiologists.

  4. Impact of Genomics Platform and Statistical Filtering on Transcriptional Benchmark Doses (BMD) and Multiple Approaches for Selection of Chemical Point of Departure (PoD)

    PubMed Central

    Webster, A. Francina; Chepelev, Nikolai; Gagné, Rémi; Kuo, Byron; Recio, Leslie; Williams, Andrew; Yauk, Carole L.

    2015-01-01

    Many regulatory agencies are exploring ways to integrate toxicogenomic data into their chemical risk assessments. The major challenge lies in determining how to distill the complex data produced by high-content, multi-dose gene expression studies into quantitative information. It has been proposed that benchmark dose (BMD) values derived from toxicogenomics data be used as point of departure (PoD) values in chemical risk assessments. However, there is limited information regarding which genomics platforms are most suitable and how to select appropriate PoD values. In this study, we compared BMD values modeled from RNA sequencing-, microarray-, and qPCR-derived gene expression data from a single study, and explored multiple approaches for selecting a single PoD from these data. The strategies evaluated include several that do not require prior mechanistic knowledge of the compound for selection of the PoD, thus providing approaches for assessing data-poor chemicals. We used RNA extracted from the livers of female mice exposed to non-carcinogenic (0, 2 mg/kg/day, mkd) and carcinogenic (4, 8 mkd) doses of furan for 21 days. We show that transcriptional BMD values were consistent across technologies and highly predictive of the two-year cancer bioassay-based PoD. We also demonstrate that filtering data based on statistically significant changes in gene expression prior to BMD modeling creates more conservative BMD values. Taken together, this case study on mice exposed to furan demonstrates that high-content toxicogenomics studies produce robust data for BMD modelling that are minimally affected by inter-technology variability and highly predictive of cancer-based PoD doses. PMID:26313361
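
    Benchmark dose modelling of the kind referred to above fits a dose-response model to the data and solves it for the dose producing a pre-specified benchmark response (BMR). The sketch below fits a Hill model to made-up continuous data and inverts it for an illustrative 10% BMR; it does not reproduce the study's gene-level modelling or the software it used.

      import numpy as np
      from scipy.optimize import curve_fit

      def hill(d, background, vmax, kd, n):
          # Continuous Hill dose-response model.
          return background + vmax * d**n / (kd**n + d**n)

      # Hypothetical dose-response data (placeholders, not study data).
      doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])       # mg/kg/day
      response = np.array([1.0, 1.05, 1.3, 1.9, 2.4])   # e.g. relative expression

      popt, _ = curve_fit(hill, doses, response, p0=[1.0, 1.5, 3.0, 1.5],
                          bounds=(0.0, np.inf))
      background, vmax, kd, n = popt

      bmr = 0.10 * background            # 10% change from background, for illustration
      # Invert the Hill model: find the dose d such that hill(d) = background + bmr.
      target = bmr / vmax
      bmd = kd * (target / (1.0 - target)) ** (1.0 / n)
      print(f"BMD at a 10% BMR: {bmd:.2f} mg/kg/day")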

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, red-tailed hawk, and osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water)]. The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only. Exposure through inhalation and/or direct dermal exposure are not considered in this report.

  6. Benchmarking as a Global Strategy for Improving Instruction in Higher Education.

    ERIC Educational Resources Information Center

    Clark, Karen L.

    This paper explores the concept of benchmarking in institutional research, a comparative analysis methodology designed to help colleges and universities increase their educational quality and delivery systems. The primary purpose of benchmarking is to compare an institution to its competitors in order to improve the product (in this case…

  7. Benchmarking and beyond. Information trends in home care.

    PubMed

    Twiss, Amanda; Rooney, Heather; Lang, Christine

    2002-11-01

    With today's benchmarking concepts and tools, agencies have the unprecedented opportunity to use information as a strategic advantage. Because agencies are demanding more and better information, benchmark functionality has grown increasingly sophisticated. Agencies now require a new type of analysis, focused on high-level executive summaries while reducing the current "data overload."

  8. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard; however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement as well as delamination length versus applied load/displacement relationships from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  9. Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy

    PubMed Central

    Testa, M.; Schümann, J.; Lu, H.-M.; Shin, J.; Faddegon, B.; Perl, J.; Paganetti, H.

    2013-01-01

    Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all the institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program in our institution as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness and symmetry and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS's capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field. Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the nuclear interaction models used in the simulations. Results: SOBPs' range and modulation width were reproduced, on average, with an accuracy of +1, −2 and ±3 mm, respectively. OF simulations reproduced measured data within ±3%. Simulated 2D dose-profiles show field flatness and average field radius within ±3% of measured profiles. The field symmetry resulted, on average, in ±3% agreement with commissioned profiles. TOPAS accuracy in reproducing measured dose profiles downstream of the half-beam shifter is better than 2%. Dose rate function simulation reproduced the measurements within ∼2%, showing that the four-dimensional modeling of the passive modulation system was implemented correctly and millimeter accuracy can be achieved in reproducing measured data. For MLFCs simulations, 2% agreement was found between TOPAS and both sets of experimental measurements. The overall results show that TOPAS simulations are within the clinically accepted tolerances for all QA measurements performed at our institution. Conclusions: Our Monte Carlo simulations accurately reproduced the experimental data acquired through all the measurements performed in this study. Thus, TOPAS can reliably be applied to quality assurance for proton therapy and also as an input for commissioning of commercial treatment planning systems. This work also provides the basis for routine clinical dose calculations in patients for all passive scattering proton therapy centers using TOPAS. PMID:24320505

  10. Introduction of risk size in the determination of uncertainty factor UFL in risk assessment

    NASA Astrophysics Data System (ADS)

    Xue, Jinling; Lu, Yun; Velasquez, Natalia; Yu, Ruozhen; Hu, Hongying; Liu, Zhengtao; Meng, Wei

    2012-09-01

    The methodology for using uncertainty factors in health risk assessment has been developed over several decades. A default value is usually applied for the uncertainty factor UFL, which is used to extrapolate from the LOAEL (lowest observed adverse effect level) to the NAEL (no adverse effect level). Here, we have developed a new method that establishes a linear relationship between UFL and the additional risk level at the LOAEL based on the dose-response information, which represents a very important factor that should be carefully considered. This linear formula makes it possible to select UFL properly in the additional risk range from 5.3% to 16.2%. The results also indicate that the default value of 10 may not be conservative enough when the additional risk level at the LOAEL exceeds 16.2%. Furthermore, this novel method not only provides a flexible UFL instead of the traditional default value, but also can ensure a conservative estimation of the UFL with fewer errors, and avoids the benchmark response selection involved in the benchmark dose method. These advantages can improve the estimation of the extrapolation starting point in the risk assessment.
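
    A minimal sketch of the extrapolation described above is given below: UFL is treated as a linear function of the additional risk at the LOAEL and then used to scale the LOAEL down to a NAEL. The slope and intercept are hypothetical placeholders (chosen only so that UFL reaches about 10 at the upper end of the quoted 5.3-16.2% range) and are not the coefficients derived in the paper.

      # Hypothetical linear relation between UFL and additional risk at the LOAEL.
      def uf_l(additional_risk, slope=55.0, intercept=1.1):
          # slope and intercept are illustrative placeholders, not the paper's values.
          if not 0.053 <= additional_risk <= 0.162:
              raise ValueError("linear relation only applies for additional risks of 5.3-16.2%")
          return intercept + slope * additional_risk

      loael = 5.0                 # mg/kg/day, hypothetical
      risk_at_loael = 0.10        # 10% additional risk at the LOAEL, hypothetical
      nael = loael / uf_l(risk_at_loael)
      print(f"UFL = {uf_l(risk_at_loael):.1f}, NAEL ~ {nael:.2f} mg/kg/day")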

  11. Sensitivity Analysis of OECD Benchmark Tests in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
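
    As a toy stand-in for the sampling-based sensitivity analysis described above, the sketch below draws a few hundred samples of three hypothetical input parameters, computes a synthetic response, and reports Pearson and Spearman correlation coefficients for each input. The inputs, response model, and values are fabricated for illustration and do not reproduce the BISON/Dakota study.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      n = 300  # number of samples, mirroring the sample size quoted above

      # Hypothetical input parameters and a synthetic response (placeholder model).
      gap_thickness = rng.normal(1.0, 0.05, n)
      fuel_conductivity = rng.normal(3.0, 0.3, n)
      power = rng.normal(20.0, 2.0, n)
      centerline_temp = (900 + 150 * power / 20 - 80 * fuel_conductivity
                         + rng.normal(0, 10, n))

      for name, x in [("gap_thickness", gap_thickness),
                      ("fuel_conductivity", fuel_conductivity),
                      ("power", power)]:
          pearson = stats.pearsonr(x, centerline_temp)[0]
          spearman = stats.spearmanr(x, centerline_temp)[0]
          print(f"{name:18s} Pearson={pearson:+.2f} Spearman={spearman:+.2f}")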

  12. Qualitative and quantitative approaches in the dose-response assessment of genotoxic carcinogens.

    PubMed

    Fukushima, Shoji; Gi, Min; Kakehashi, Anna; Wanibuchi, Hideki; Matsumoto, Michiharu

    2016-05-01

    Qualitative and quantitative approaches are important issues in the field of carcinogenic risk assessment of genotoxic carcinogens. Herein, we provide quantitative data on low-dose hepatocarcinogenicity studies for three genotoxic hepatocarcinogens: 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline (MeIQx), 2-amino-3-methylimidazo[4,5-f]quinoline (IQ) and N-nitrosodiethylamine (DEN). Hepatocarcinogenicity was examined by quantitative analysis of glutathione S-transferase placental form (GST-P) positive foci, which are the preneoplastic lesions in rat hepatocarcinogenesis and the endpoint carcinogenic marker in the rat liver medium-term carcinogenicity bioassay. We also examined DNA damage and gene mutations which occurred through the initiation stage of carcinogenesis. For the establishment of points of departure (PoD) from which the cancer-related risk can be estimated, we analyzed the above events by quantitative no-observed-effect level and benchmark dose approaches. MeIQx at low doses induced formation of DNA-MeIQx adducts; somewhat higher doses caused elevation of 8-hydroxy-2'-deoxyguanosine levels; at still higher doses gene mutations occurred; and the highest dose induced formation of GST-P positive foci. These data indicate that early genotoxic events in the pathway to carcinogenesis showed the expected trend of lower PoDs for earlier events in the carcinogenic process. Similarly, only the highest dose of IQ caused an increase in the number of GST-P positive foci in the liver, while IQ-DNA adduct formation was observed with low doses. Moreover, treatment with DEN at low doses had no effect on development of GST-P positive foci in the liver. These data on PoDs for the markers contribute to understanding whether genotoxic carcinogens have a threshold for their carcinogenicity. The most appropriate approach to use in low dose-response assessment must be approved on the basis of scientific judgment. © The Author 2015. Published by Oxford University Press on behalf of the UK Environmental Mutagen Society. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. Local and Systemic Inflammation May Mediate Diesel Engine Exhaust-Induced Lung Function Impairment in a Chinese Occupational Cohort.

    PubMed

    Wang, Haitao; Duan, Huawei; Meng, Tao; Yang, Mo; Cui, Lianhua; Bin, Ping; Dai, Yufei; Niu, Yong; Shen, Meili; Zhang, Liping; Zheng, Yuxin; Leng, Shuguang

    2018-04-01

Diesel exhaust (DE), as the major source of vehicle-emitted particulate matter in ambient air, impairs lung function. The objectives were to assess the contribution of local (eg, the fraction of exhaled nitric oxide [FeNO] and serum Club cell secretory protein [CC16]) and systemic (eg, serum C-reactive protein [CRP] and interleukin-6 [IL-6]) inflammation to DE-induced lung function impairment using a unique cohort of diesel engine testers (DETs, n = 137) and non-DETs (n = 127), made up of current and noncurrent smokers. Urinary metabolites, FeNO, serum markers, and spirometry were assessed. A 19% reduction in CC16 and a 94% increase in CRP were identified in DETs compared with non-DETs (all p values <10⁻⁴), which were further corroborated by showing a dose-response relationship with internal dose for DE exposure (all p values <.04) and a time-course relationship with DE exposure history (all p values <.005). Mediation analysis showed that 43% of the difference in FEV1 between DETs and non-DETs can be explained by circulating CC16 and CRP (permuted p < .001). An inverse dose-dependent relationship between FeNO and internal dose for cigarette smoke was identified (p = .0003). A range of 95% lower bounds of the benchmark dose of 1.0261-1.4513 μg phenanthrols/g creatinine in urine as an internal dose was recommended for regulatory risk assessment. Local and systemic inflammation may be key processes that contribute to the subsequent development of obstructive lung disease in DE-exposed populations.

  14. Development of a Benchmark Example for Delamination Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2010-01-01

The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging, but further assessment for mixed-mode delamination is required.

  15. Information Literacy and Office Tool Competencies: A Benchmark Study

    ERIC Educational Resources Information Center

    Heinrichs, John H.; Lim, Jeen-Su

    2010-01-01

    Present information science literature recognizes the importance of information technology to achieve information literacy. The authors report the results of a benchmarking student survey regarding perceived functional skills and competencies in word-processing and presentation tools. They used analysis of variance and regression analysis to…

  16. Benchmarking in Universities: League Tables Revisited

    ERIC Educational Resources Information Center

    Turner, David

    2005-01-01

    This paper examines the practice of benchmarking universities using a "league table" approach. Taking the example of the "Sunday Times University League Table", the author reanalyses the descriptive data on UK universities. Using a linear programming technique, data envelope analysis (DEA), the author uses the re-analysis to…
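
    The data envelope analysis step can be illustrated with a minimal sketch of the input-oriented CCR model in multiplier form, solved as a linear program for each decision-making unit (DMU). The universities, inputs, and outputs below are invented and are not the Sunday Times data.

        import numpy as np
        from scipy.optimize import linprog

        # Invented data: each row is one university (DMU) with two inputs
        # (spend per student, staff-student ratio) and two outputs
        # (completion rate, research score).
        X = np.array([[1.0, 0.8], [1.2, 1.0], [0.9, 1.1], [1.5, 0.7]])  # inputs
        Y = np.array([[0.9, 0.6], [1.0, 0.9], [0.7, 1.0], [1.1, 0.5]])  # outputs
        n_dmu, n_in = X.shape
        n_out = Y.shape[1]

        def ccr_efficiency(j0):
            """Input-oriented CCR efficiency of DMU j0 (multiplier form)."""
            # Decision variables: output weights u (n_out), then input weights v (n_in).
            c = np.concatenate([-Y[j0], np.zeros(n_in)])            # maximise u.y0
            A_eq = np.concatenate([np.zeros(n_out), X[j0]])[None]   # v.x0 = 1
            b_eq = [1.0]
            A_ub = np.hstack([Y, -X])                               # u.y_j - v.x_j <= 0
            b_ub = np.zeros(n_dmu)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * (n_out + n_in))
            return -res.fun

        for j in range(n_dmu):
            print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")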

  17. Direct measurement of the 3-dimensional DNA lesion distribution induced by energetic charged particles in a mouse model tissue

    PubMed Central

    Mirsch, Johanna; Tommasino, Francesco; Frohns, Antonia; Conrad, Sandro; Durante, Marco; Scholz, Michael; Friedrich, Thomas; Löbrich, Markus

    2015-01-01

Charged particles are increasingly used in cancer radiotherapy and contribute significantly to the natural radiation risk. The difference in the biological effects of high-energy charged particles compared with X-rays or γ-rays is determined largely by the spatial distribution of their energy deposition events. Part of the energy is deposited in a densely ionizing manner in the inner part of the track, with the remainder spread out more sparsely over the outer track region. Our knowledge about the dose distribution is derived solely from modeling approaches and physical measurements in inorganic material. Here we exploited the exceptional sensitivity of γH2AX foci technology and quantified the spatial distribution of DNA lesions induced by charged particles in a mouse model tissue. We observed that charged particles damage tissue nonhomogeneously, with single cells receiving high doses and many other cells exposed to isolated damage resulting from high-energy secondary electrons. Using calibration experiments, we transformed the 3D lesion distribution into a dose distribution and compared it with predictions from modeling approaches. We obtained a radial dose distribution with sub-micrometer resolution that decreased with increasing distance to the particle path following a 1/r² dependency. The analysis further revealed the existence of a background dose at larger distances from the particle path arising from overlapping dose deposition events from independent particles. Our study provides, to our knowledge, the first quantification of the spatial dose distribution of charged particles in biologically relevant material, and will serve as a benchmark for biophysical models that predict the biological effects of these particles. PMID:26392532
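
    The reported 1/r² falloff can be checked with a simple log-log regression of dose against radial distance after subtracting a constant background term; the radial profile below is invented for illustration and is not the measured γH2AX-derived distribution.

        import numpy as np

        # Invented radial dose profile (distance r from the particle path,
        # relative dose); illustrative only.
        r    = np.array([0.2, 0.4, 0.8, 1.6, 3.2, 6.4])        # micrometres
        dose = np.array([25.0, 6.4, 1.55, 0.42, 0.11, 0.035])  # arbitrary units
        background = 0.01  # assumed constant dose from overlapping independent tracks

        # Fit D(r) = A * r**p after subtracting the background, via log-log regression.
        slope, intercept = np.polyfit(np.log(r), np.log(dose - background), 1)
        print(f"fitted exponent p = {slope:.2f} (p = -2 corresponds to a 1/r^2 falloff)")
        print(f"fitted amplitude A = {np.exp(intercept):.2f}")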

  18. Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A

    2011-01-01

The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provide unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.

  19. SU-F-T-201: Acceleration of Dose Optimization Process Using Dual-Loop Optimization Technique for Spot Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Fujimoto, R

Purpose: The purpose was to demonstrate a developed acceleration technique of dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary of the two parts is varied depending on the beam energy and water equivalent depth by utilizing the beam size as a singular threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique in the optimization process in the TPS and investigated the dependence on the target volume of the speedup effect and applicability to the worst-case optimization (WCO) in benchmarks. Results: We created irradiation plans for various cubic targets and measured the optimization time varying the target volume. The speedup effect was improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm³ target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target’s prescribed dose and OAR’s Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS’s optimization. The technique was effective particularly for large target cases.
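
    A toy version of the two-level iteration is sketched below: the dose-influence contributions are split into a "main" part that is updated every inner iteration and a "halo" part that is only recomputed in the outer loop. The matrices, the splitting rule, and the simple gradient update are all invented stand-ins, not the TPS implementation described in the abstract.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy dose-influence matrix: dose[i] = D[i, :] @ weights, split element-wise
        # into a "main" (near-axis, frequently updated) and a "halo" (far-off-axis,
        # infrequently recomputed) contribution. All values are invented.
        n_voxels, n_spots = 200, 40
        D = rng.random((n_voxels, n_spots)) * 0.05
        main_mask = rng.random((n_voxels, n_spots)) < 0.3   # stand-in for the beam-size rule
        D_main, D_halo = D * main_mask, D * (~main_mask)

        target = np.ones(n_voxels)   # prescribed dose per voxel (arbitrary units)
        w = np.ones(n_spots)         # spot weights to optimise
        step = 0.5

        for outer in range(5):
            halo_dose = D_halo @ w           # outer loop: recompute halo contribution
            for inner in range(20):          # inner loop: halo held constant
                dose = D_main @ w + halo_dose
                grad = D_main.T @ (dose - target)   # gradient of 0.5*||dose - target||^2
                w = np.maximum(w - step * grad, 0.0)
            print(f"outer {outer}: objective = {0.5 * np.sum((D @ w - target) ** 2):.3f}")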

  20. Evaluation of Graph Pattern Matching Workloads in Graph Analysis Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Seokyong; Lee, Sangkeun; Lim, Seung-Hwan

    2016-01-01

Graph analysis has emerged as a powerful method for data scientists to represent, integrate, query, and explore heterogeneous data sources. As a result, graph data management and mining became a popular area of research, and led to the development of a plethora of systems in recent years. Unfortunately, the number of emerging graph analysis systems and the wide range of applications, coupled with a lack of apples-to-apples comparisons, make it difficult to understand the trade-offs between different systems and the graph operations for which they are designed. A fair comparison of these systems is a challenging task for the following reasons: multiple data models, non-standardized serialization formats, various query interfaces to users, and diverse environments they operate in. To address these key challenges, in this paper we present a new benchmark suite by extending the Lehigh University Benchmark (LUBM) to cover the most common capabilities of various graph analysis systems. We provide the design process of the benchmark, which generalizes the workflow for data scientists to conduct the desired graph analysis on different graph analysis systems. Equipped with this extended benchmark suite, we present performance comparison for nine subgraph pattern retrieval operations over six graph analysis systems, namely NetworkX, Neo4j, Jena, Titan, GraphX, and uRiKA. Through the proposed benchmark suite, this study reveals both quantitative and qualitative findings in (1) implications in loading data into each system; (2) challenges in describing graph patterns for each query interface; and (3) different sensitivity of each system to query selectivity. We envision that this study will pave the road for: (i) data scientists to select the suitable graph analysis systems, and (ii) data management system designers to advance graph analysis systems.
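
    As a minimal example of one such workload, subgraph pattern retrieval on a labelled property graph can be expressed directly in NetworkX (one of the systems compared) via VF2 subgraph isomorphism; the tiny data graph and pattern below are invented and unrelated to the LUBM-derived benchmark.

        import networkx as nx
        from networkx.algorithms import isomorphism

        # Toy labelled data graph and query pattern.
        G = nx.DiGraph()
        G.add_nodes_from([
            ("alice", {"type": "Student"}), ("bob", {"type": "Professor"}),
            ("carol", {"type": "Student"}), ("db101", {"type": "Course"}),
        ])
        G.add_edges_from([("alice", "db101"), ("carol", "db101"), ("bob", "db101")])

        # Pattern: a Student node connected to a Course node.
        P = nx.DiGraph()
        P.add_node("s", type="Student")
        P.add_node("c", type="Course")
        P.add_edge("s", "c")

        matcher = isomorphism.DiGraphMatcher(
            G, P,
            node_match=lambda g_attrs, p_attrs: g_attrs.get("type") == p_attrs.get("type"))
        for mapping in matcher.subgraph_isomorphisms_iter():
            print(mapping)   # e.g. {'alice': 's', 'db101': 'c'}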

  1. Development of risk-based nanomaterial groups for occupational exposure control

    NASA Astrophysics Data System (ADS)

    Kuempel, E. D.; Castranova, V.; Geraci, C. L.; Schulte, P. A.

    2012-09-01

    Given the almost limitless variety of nanomaterials, it will be virtually impossible to assess the possible occupational health hazard of each nanomaterial individually. The development of science-based hazard and risk categories for nanomaterials is needed for decision-making about exposure control practices in the workplace. A possible strategy would be to select representative (benchmark) materials from various mode of action (MOA) classes, evaluate the hazard and develop risk estimates, and then apply a systematic comparison of new nanomaterials with the benchmark materials in the same MOA class. Poorly soluble particles are used here as an example to illustrate quantitative risk assessment methods for possible benchmark particles and occupational exposure control groups, given mode of action and relative toxicity. Linking such benchmark particles to specific exposure control bands would facilitate the translation of health hazard and quantitative risk information to the development of effective exposure control practices in the workplace. A key challenge is obtaining sufficient dose-response data, based on standard testing, to systematically evaluate the nanomaterials' physical-chemical factors influencing their biological activity. Categorization processes involve both science-based analyses and default assumptions in the absence of substance-specific information. Utilizing data and information from related materials may facilitate initial determinations of exposure control systems for nanomaterials.

  2. Methodology and Data Sources for Assessing Extreme Charging Events within the Earth's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Parker, L. N.; Minow, J. I.; Talaat, E. R.

    2016-12-01

    Spacecraft surface and internal charging is a potential threat to space technologies because electrostatic discharges on, or within, charged spacecraft materials can result in a number of adverse impacts to spacecraft systems. The Space Weather Action Plan (SWAP) ionizing radiation benchmark team recognized that spacecraft charging will need to be considered to complete the ionizing radiation benchmarks in order to evaluate the threat of charging to critical space infrastructure operating within the near-Earth ionizing radiation environments. However, the team chose to defer work on the lower energy charging environments and focus the initial benchmark efforts on the higher energy galactic cosmic ray, solar energetic particle, and trapped radiation belt particle environments of concern for radiation dose and single event effects in humans and hardware. Therefore, an initial set of 1 in 100 year spacecraft charging environment benchmarks remains to be defined to meet the SWAP goals. This presentation will discuss the available data sources and a methodology to assess the 1 in 100 year extreme space weather events that drive surface and internal charging threats to spacecraft. Environments to be considered are the hot plasmas in the outer magnetosphere during geomagnetic storms, relativistic electrons in the outer radiation belt, and energetic auroral electrons in low Earth orbit at high latitudes.

  3. Developing Toxicogenomics as a Research Tool by Applying Benchmark Dose-Response Modeling to inform Chemical Mode of Action and Tumorigenic Potency

    EPA Science Inventory

    ABSTRACT Results of global gene expression profiling after short-term exposures can be used to inform tumorigenic potency and chemical mode of action (MOA) and thus serve as a strategy to prioritize future or data-poor chemicals for further evaluation. This compilation of cas...

  4. CHOLINESTERASE INHIBITION AND HYPOTHERMIA FOLLOWING EXPOSURE TO BINARY MIXTURES OF ANTICHOLINESTERASE AGENTS: LACK OF EVIDENCE FOR CAUSE-AND-EFFECT

    EPA Science Inventory

    Dose-additivity has been the default assumption in risk assessments of pesticides with a common mechanism of action but it has been suspected that there could be non-additive effects. Inhibition of plasma cholinesterase (ChE) activity and hypothermia were used as benchmarks of e...

  5. Neutron skyshine calculations with the integral line-beam method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gui, A.A.; Shultis, J.K.; Faw, R.E.

    1997-10-01

    Recently developed line- and conical-beam response functions are used to calculate neutron skyshine doses for four idealized source geometries. These calculations, which can serve as benchmarks, are compared with MCNP calculations, and the excellent agreement indicates that the integral conical- and line-beam method is an effective alternative to more computationally expensive transport calculations.

  6. Developing a benchmark for emotional analysis of music

    PubMed Central

    Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the data representation diversity and scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that the recurrent neural network based approaches combined with large feature-sets work best for dynamic MER. PMID:28282400

  7. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
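
    Weak-scaling efficiency of the kind reported here is simply the single-core (or smallest-partition) wall time divided by the wall time at N cores with the work per core held fixed; the sketch below uses invented timings, not the MFiX profiling results.

        import numpy as np

        # Invented core counts and wall-clock times at fixed work per core.
        cores = np.array([1, 8, 64, 512, 1024])
        wall_time_s = np.array([100.0, 104.0, 112.0, 131.0, 158.0])

        efficiency = wall_time_s[0] / wall_time_s   # E(N) = T(1) / T(N), ideal = 1.0
        for n, e in zip(cores, efficiency):
            print(f"{n:5d} cores: weak-scaling efficiency = {e:.2f}")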

  8. Developing a benchmark for emotional analysis of music.

    PubMed

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the data representation diversity and scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that the recurrent neural network based approaches combined with large feature-sets work best for dynamic MER.

  9. Absolute dose calibration of an X-ray system and dead time investigations of photon-counting techniques

    NASA Astrophysics Data System (ADS)

    Carpentieri, C.; Schwarz, C.; Ludwig, J.; Ashfaq, A.; Fiederle, M.

    2002-07-01

High precision concerning the dose calibration of X-ray sources is required when counting and integrating methods are compared. The dose calibration for a dental X-ray tube was executed with special dose calibration equipment (dosimeter) as a function of exposure time and rate. Results were compared with a benchmark spectrum and agree within ±1.5%. Dead time investigations with the Medipix1 photon-counting chip (PCC) have been performed by rate variations. Two different types of dead time, paralysable and non-paralysable, will be discussed. The dead time depends on settings of the front-end electronics and is a function of signal height, which might lead to systematic defects of systems. Dead time losses in excess of 30% have been found for the PCC at 200 kHz absorbed photons per pixel.
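
    The two dead-time models mentioned are commonly written as m = n/(1 + n·τ) (non-paralysable) and m = n·exp(−n·τ) (paralysable), where n is the true rate, m the measured rate, and τ the dead time per event. The sketch below recovers the true rate from a measured rate under each model; the value of τ and the measured rate are illustrative assumptions, not Medipix1 parameters.

        import numpy as np
        from scipy.optimize import brentq

        tau = 1.0e-6   # assumed dead time per count, seconds (illustrative)
        m = 2.0e5      # measured count rate, counts/s (illustrative)

        # Non-paralysable model m = n / (1 + n*tau) inverts in closed form.
        n_nonpar = m / (1.0 - m * tau)

        # Paralysable model m = n * exp(-n*tau): solve numerically on the
        # lower branch (n < 1/tau), the physically usual regime.
        n_par = brentq(lambda n: n * np.exp(-n * tau) - m, m, 1.0 / tau)

        print(f"non-paralysable true rate: {n_nonpar:.3e} /s")
        print(f"paralysable true rate:     {n_par:.3e} /s")
        print(f"dead-time loss (paralysable): {100 * (1 - m / n_par):.1f}%")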

  10. Commercial Building Energy Saver, Web App

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon

The CBES App is a web-based toolkit for use by small businesses and building owners and operators of small and medium size commercial buildings to perform energy benchmarking and retrofit analysis for buildings. The CBES App analyzes the energy performance of the user's building for pre- and post-retrofit, in conjunction with the user's input data, to identify recommended retrofit measures, energy savings and economic analysis for the selected measures. The CBES App provides energy benchmarking, including getting an EnergyStar score using the EnergyStar API and benchmarking against California peer buildings using the EnergyIQ API. The retrofit analysis includes a preliminary analysis by looking up retrofit measures from a pre-simulated database DEEP, and a detailed analysis creating and running EnergyPlus models to calculate energy savings of retrofit measures. The CBES App builds upon the LBNL CBES API.

  11. Benchmarking the Importance and Use of Labor Market Surveys by Certified Rehabilitation Counselors

    ERIC Educational Resources Information Center

    Barros-Bailey, Mary; Saunders, Jodi L.

    2013-01-01

    The purpose of this research was to benchmark the importance and use of labor market survey (LMS) among U.S. certified rehabilitation counselors (CRCs). A secondary post hoc analysis of data collected via the "Rehabilitation Skills Inventory--Revised" for the 2011 Commission on Rehabilitation Counselor Certification job analysis resulted in…

  12. Policy Analysis of the English Graduation Benchmark in Taiwan

    ERIC Educational Resources Information Center

    Shih, Chih-Min

    2012-01-01

    To nudge students to study English and to improve their English proficiency, many universities in Taiwan have imposed an English graduation benchmark on their students. This article reviews this policy, using the theoretic framework for education policy analysis proposed by Haddad and Demsky (1995). The author presents relevant research findings,…

  13. The impact of a scheduling change on ninth grade high school performance on biology benchmark exams and the California Standards Test

    NASA Astrophysics Data System (ADS)

    Leonardi, Marcelo

The primary purpose of this study was to examine the impact of a scheduling change from a trimester 4x4 block schedule to a modified hybrid schedule on student achievement in ninth grade biology courses. This study examined the impact of the scheduling change on student achievement through teacher created benchmark assessments in Genetics, DNA, and Evolution and on the California Standardized Test in Biology. The secondary purpose of this study examined the ninth grade biology teacher perceptions of ninth grade biology student achievement. Using a mixed methods research approach, data were collected both quantitatively and qualitatively as aligned to research questions. Quantitative methods included gathering data from departmental benchmark exams and the California Standardized Test in Biology and conducting multiple analysis of covariance and analysis of covariance to determine significant differences. Qualitative methods included journal entry questions and focus group interviews. The results revealed a statistically significant increase in scores on both the DNA and Evolution benchmark exams. DNA and Evolution benchmark exams showed significant improvements from a change in scheduling format. The scheduling change was responsible for 1.5% of the increase in DNA benchmark scores and 2% of the increase in Evolution benchmark scores. The results revealed a statistically significant decrease in scores on the Genetics Benchmark exam as a result of the scheduling change. The scheduling change was responsible for 1% of the decrease in Genetics benchmark scores. The results also revealed a statistically significant increase in scores on the CST Biology exam. The scheduling change was responsible for .7% of the increase in CST Biology scores. Results of the focus group discussions indicated that all teachers preferred the modified hybrid schedule over the trimester schedule and that it improved student achievement.
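
    The percentage of score change attributable to the scheduling change is the kind of quantity an analysis of covariance yields through an effect-size measure such as partial eta-squared. A minimal sketch with statsmodels follows; the cohort sizes, covariate, and score model are simulated stand-ins, not the district's benchmark data.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)

        # Simulated benchmark scores for two cohorts (trimester vs. modified hybrid
        # schedule), adjusted for a prior-achievement covariate.
        n = 120
        df = pd.DataFrame({
            "schedule": np.repeat(["trimester", "hybrid"], n),
            "prior": rng.normal(70, 10, 2 * n),
        })
        df["benchmark"] = (0.6 * df["prior"]
                           + np.where(df["schedule"] == "hybrid", 3.0, 0.0)
                           + rng.normal(0, 8, 2 * n))

        model = smf.ols("benchmark ~ C(schedule) + prior", data=df).fit()
        anova = sm.stats.anova_lm(model, typ=2)

        ss_effect = anova.loc["C(schedule)", "sum_sq"]
        ss_resid = anova.loc["Residual", "sum_sq"]
        partial_eta_sq = ss_effect / (ss_effect + ss_resid)
        print(anova)
        print(f"partial eta-squared for schedule: {partial_eta_sq:.3f}")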

  14. Interactive visual optimization and analysis for RFID benchmarking.

    PubMed

    Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C

    2009-01-01

    Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.

  15. Monaco and film dosimetry of 3D CRT, IMRT and VMAT cases in a realistic pelvic prosthetic phantom

    NASA Astrophysics Data System (ADS)

    Ade, Nicholas; du Plessis, F. C. P.

    2018-04-01

    The dosimetry of patients with metallic hip implants during irradiation of pelvic lesions is challenging due to dose distortions caused by implants. This work presents a dosimetric comparison of various multi-field photon-beam dose distributions in the presence of unilateral hip titanium prosthesis (UHTiP) embedded in a unique pelvic phantom made out of water-equivalent nylon slices. The impact of the UHTiP on the accuracy of dose calculations from a Monaco TPS (treatment planning system) using the X-ray voxel Monte Carlo (XVMC) algorithm was benchmarked against measured dose data using Gafchromic EBT3 film. Multi-field beam arrangements including a 4-field box, 5-field 3DCRT (three-dimensional conformal radiation therapy), 6-field IMRT (intensity modulated radiation therapy) and a single-arc VMAT (volumetric modulated arc therapy) plan were set up for 6 MV and 15 MV beams. These plans were generated for the pelvic phantom that contains the prosthesis with film inserted. Compared to Monaco TPS dose calculations, film measurements showed enhanced dose in the prosthesis which was not predicted by Monaco due to its limitation in relative density assignment. The enhanced prosthesis dose increased with increase in beam energy and decreased with the complexity of the treatment plans, with VMAT giving the least escalated dose. The dose increased between 5% and 19% for 6 MV and between 6% and 21% for 15 MV. A gamma index analysis showed that 70-92% of dose points (excluding the prosthesis) were within 3% discrepancy. Increasing the number of treatment fields increases target dose coverage and improves the agreement between film and Monaco. When the relative electron density (RED) in the prosthesis was varied between 3.72 and 15 the dose discrepancy between film and Monaco increased from 30% to 57% for 6 MV and from 30% to 50% for 15 MV. The study indicates that beam weights for fields that pass through the prosthesis should be minimised and its RED must be correct for accurate dose calculation on Monaco.
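
    The gamma index used to compare the film and Monaco dose planes combines a dose-difference tolerance with a distance-to-agreement tolerance. A simplified, brute-force global 2-D implementation is sketched below; the dose planes, grid spacing, and noise level are invented, and the search runs over the full plane rather than a restricted neighbourhood.

        import numpy as np

        def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dose_tol=0.03, dist_tol_mm=2.0):
            """Simplified global 2-D gamma analysis (brute-force search over the plane)."""
            ny, nx = dose_ref.shape
            ys, xs = np.meshgrid(np.arange(ny) * spacing_mm,
                                 np.arange(nx) * spacing_mm, indexing="ij")
            d_norm = dose_tol * dose_ref.max()        # global dose-difference criterion
            gamma = np.empty(dose_ref.shape)
            for i in range(ny):
                for j in range(nx):
                    dist2 = ((ys - ys[i, j]) ** 2 + (xs - xs[i, j]) ** 2) / dist_tol_mm ** 2
                    dd2 = ((dose_eval - dose_ref[i, j]) / d_norm) ** 2
                    gamma[i, j] = np.sqrt(np.min(dist2 + dd2))
            return float(np.mean(gamma <= 1.0))

        # Invented dose planes standing in for the film (reference) and TPS (evaluated) data.
        rng = np.random.default_rng(5)
        ref = np.ones((40, 40))
        ev = ref * (1.0 + rng.normal(0.0, 0.02, ref.shape))
        print(f"gamma pass rate (3%/2 mm): {100 * gamma_pass_rate(ref, ev, spacing_mm=1.0):.1f}%")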

  16. SP2Bench: A SPARQL Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg

    A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.

  17. BMDExpress Data Viewer: A Visualization Tool to Analyze ...

    EPA Pesticide Factsheets

    Regulatory agencies increasingly apply benchmark dose (BMD) modeling to determine points of departure in human risk assessments. BMDExpress applies BMD modeling to transcriptomics datasets and groups genes to biological processes and pathways for rapid assessment of doses at which biological perturbations occur. However, graphing and analytical capabilities within BMDExpress are limited, and the analysis of output files is challenging. We developed a web-based application, BMDExpress Data Viewer, for visualization and graphical analyses of BMDExpress output files. The software application consists of two main components: ‘Summary Visualization Tools’ and ‘Dataset Exploratory Tools’. We demonstrate through two case studies that the ‘Summary Visualization Tools’ can be used to examine and assess the distributions of probe and pathway BMD outputs, as well as derive a potential regulatory BMD through the modes or means of the distributions. The ‘Functional Enrichment Analysis’ tool presents biological processes in a two-dimensional bubble chart view. By applying filters of pathway enrichment p-value and minimum number of significant genes, we showed that the Functional Enrichment Analysis tool can be applied to select pathways that are potentially sensitive to chemical perturbations. The ‘Multiple Dataset Comparison’ tool enables comparison of BMDs across multiple experiments (e.g., across time points, tissues, or organisms, etc.). The ‘BMDL-BM

  18. Translating reference doses into allergen management practice: challenges for stakeholders.

    PubMed

    Crevel, René W R; Baumert, Joseph L; Luccioli, Stefano; Baka, Athanasia; Hattersley, Sue; Hourihane, Jonathan O'B; Ronsmans, Stefan; Timmermans, Frans; Ward, Rachel; Chung, Yong-joo

    2014-05-01

Risk assessment describes the impact of a particular hazard as a function of dose and exposure. It forms the foundation of risk management and contributes to the overall decision-making process, but is not its endpoint. This paper outlines a risk analysis framework to underpin decision-making in the area of allergen cross-contact. Specifically, it identifies challenges relevant to each component of the risk analysis: risk assessment (data gaps and output interpretation); risk management (clear and realistic objectives); and risk communication (clear articulation of risk and benefit). Translation of the outputs from risk assessment models into risk management measures must be informed by a clear understanding of the model outputs and their limitations. This will lead to feasible and achievable risk management objectives, grounded in a level of risk accepted by the different stakeholders, thereby avoiding potential unintended detrimental consequences. Clear, consistent and trustworthy communications actively involving all stakeholders underpin these objectives. The conclusions, integrating the perspectives of different stakeholders, offer a vision where clear, science-based benchmarks form the basis of allergen management and labelling, cutting through the current confusion and uncertainty. Finally, the paper recognises that the proposed framework must be adaptable to new and emerging evidence. Copyright © 2014 ILSI Europe. Published by Elsevier Ltd. All rights reserved.

  19. Transcriptional responses in the rat nasal epithelium following subchronic inhalation of naphthalene vapor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clewell, H.J., E-mail: hclewell@thehamner.org; Efremenko, A.; Campbell, J.L.

Male and female Fischer 344 rats were exposed to naphthalene vapors at 0 (controls), 0.1, 1, 10, and 30 ppm for 6 h/d, 5 d/wk, over a 90-day period. Following exposure, the respiratory epithelium and olfactory epithelium from the nasal cavity were dissected separately, RNA was isolated, and gene expression microarray analysis was conducted. Only a few significant gene expression changes were observed in the olfactory or respiratory epithelium of either gender at the lowest concentration (0.1 ppm). At the 1.0 ppm concentration there was limited evidence of an oxidative stress response in the respiratory epithelium, but not in the olfactory epithelium. In contrast, a large number of significantly enriched cellular pathway responses were observed in both tissues at the two highest concentrations (10 and 30 ppm, which correspond to tumorigenic concentrations in the NTP bioassay). The nature of these responses supports a mode of action involving oxidative stress, inflammation and proliferation. These results are consistent with a dose-dependent transition in the mode of action for naphthalene toxicity/carcinogenicity between 1.0 and 10 ppm in the rat. In the female olfactory epithelium (the gender/site with the highest incidences of neuroblastomas in the NTP bioassay), the lowest concentration at which any signaling pathway was significantly affected, as characterized by the median pathway benchmark dose (BMD) or its 95% lower bound (BMDL) was 6.0 or 3.7 ppm, respectively, while the lowest female olfactory BMD values for pathways related to glutathione homeostasis, inflammation, and proliferation were 16.1, 11.1, and 8.4 ppm, respectively. In the male respiratory epithelium (the gender/site with the highest incidences of adenomas in the NTP bioassay), the lowest pathway BMD and BMDL were 0.4 and 0.3 ppm, respectively, and the lowest male respiratory BMD values for pathways related to glutathione homeostasis, inflammation, and proliferation were 0.5, 0.7, and 0.9 ppm, respectively. Using a published physiologically based pharmacokinetic (PBPK) model to estimate target tissue dose relevant to the proposed mode of action (total naphthalene metabolism per gram nasal tissue), the lowest transcriptional BMDLs from this analysis equate to human continuous naphthalene exposure at approximately 0.3 ppm. It is unlikely that significant effects of naphthalene or its metabolites will occur at exposures below this concentration. - Highlights: • We investigated mode of action for carcinogenicity of inhaled naphthalene in rats. • Gene expression changes were measured in rat nasal tissues after 90 day exposures. • Support a non-linear mode of action (oxidative stress, inflammation, and proliferation) • Suggest a dose-dependent transition in the mode of action between 1.0 and 10 ppm • Transcriptional benchmark doses could inform point of departure for risk assessment.

  20. Notes on numerical reliability of several statistical analysis programs

    USGS Publications Warehouse

    Landwehr, J.M.; Tasker, Gary D.

    1999-01-01

    This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.

  1. Benchmarking to improve the quality of cystic fibrosis care.

    PubMed

    Schechter, Michael S

    2012-11-01

Benchmarking involves the ascertainment of healthcare programs with the most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  2. Developing Student Character through Disciplinary Curricula: An Analysis of UK QAA Subject Benchmark Statements

    ERIC Educational Resources Information Center

    Quinlan, Kathleen M.

    2016-01-01

    What aspects of student character are expected to be developed through disciplinary curricula? This paper examines the UK written curriculum through an analysis of the Quality Assurance Agency's subject benchmark statements for the most popular subjects studied in the UK. It explores the language, principles and intended outcomes that suggest…

  3. A BENCHMARKING ANALYSIS FOR FIVE RADIONUCLIDE VADOSE ZONE MODELS (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, AND CHAIN 2D) IN SOIL SCREENING LEVEL CALCULATIONS

    EPA Science Inventory

    Five radionuclide vadose zone models with different degrees of complexity (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, and CHAIN 2D) were selected for use in soil screening level (SSL) calculations. A benchmarking analysis between the models was conducted for a radionuclide (99Tc) rele...

  4. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
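
    The simplified shield treatment described here amounts to multiplying the unshielded skyshine dose by a transmission factor of the form B(μt)·exp(−μt), with B a photon buildup factor. The sketch below uses a Berger-form buildup factor; the attenuation coefficient, buildup coefficients, slab thickness, and unshielded dose are all illustrative assumptions rather than values from the paper.

        import numpy as np

        mu = 0.045          # assumed linear attenuation coefficient of the shield, 1/cm
        thickness = 30.0    # assumed shield slab thickness, cm
        a, b = 1.0, 0.05    # assumed Berger buildup coefficients: B(x) = 1 + a*x*exp(b*x)

        mfp = mu * thickness                      # shield thickness in mean free paths
        buildup = 1.0 + a * mfp * np.exp(b * mfp)
        transmission = buildup * np.exp(-mfp)

        dose_unshielded = 1.0e-2                  # unshielded skyshine dose, arbitrary units
        dose_shielded = transmission * dose_unshielded
        print(f"mfp = {mfp:.2f}, buildup = {buildup:.2f}, transmission = {transmission:.3f}")
        print(f"shielded skyshine dose ≈ {dose_shielded:.2e} (same units as unshielded)")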

  5. Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria

    PubMed Central

    Hunter, Eric J.

    2015-01-01

Purpose: Schoolteachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method: Vibration dosimetry is reformulated with the inclusion of collision stress. Two methods of estimating amplitude of vocal-fold vibration are compared to capture variations in vocal intensity. Energy loss from collision is added to the energy-dissipation dose. An equal-energy-dissipation criterion is defined and used on the teacher corpus as a potential-damage risk criterion. Results: Comparison of time-, cycle-, distance-, and energy-dose calculations for 57 teachers reveals a progression in information content in the ability to capture variations in duration, speaking pitch, and vocal intensity. The energy-dissipation dose carries the greatest promise in capturing excessive tissue stress and collision but also the greatest liability, due to uncertainty in parameters. Cycle dose is least correlated with the other doses. Conclusion: As a first guide to damage risk in excessive voice use, the equal-energy-dissipation dose criterion can be used to structure trade-off relations between loudness, adduction, and duration of speech. PMID:26172434
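
    Commonly used vibration-dose definitions of the kind compared here accumulate voicing time (time dose), vibration cycles (cycle dose, the time integral of F0 while voiced), and the distance travelled by the vocal folds (distance dose, roughly 4·amplitude·F0 integrated over voiced time). The short sketch below evaluates these from an invented frame-level dosimetry trace; the energy-dissipation dose is omitted because it requires additional tissue and collision parameters.

        import numpy as np

        # Invented dosimetry trace: per-frame voicing flag, fundamental frequency (Hz),
        # and estimated vibration amplitude (m). Not data from the teacher corpus.
        frame_dt = 0.03                                # frame length, s
        voiced = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 1], dtype=float)
        f0     = np.array([210, 215, 0, 220, 230, 225, 0, 0, 205, 210], dtype=float)
        amp    = np.full(10, 0.7e-3)                   # assumed amplitude, ~0.7 mm

        time_dose     = np.sum(voiced) * frame_dt                      # seconds of voicing
        cycle_dose    = np.sum(voiced * f0) * frame_dt                 # vibration cycles
        distance_dose = np.sum(voiced * 4.0 * amp * f0) * frame_dt     # metres travelled

        print(f"time dose: {time_dose:.2f} s, cycle dose: {cycle_dose:.0f} cycles, "
              f"distance dose: {distance_dose * 100:.1f} cm")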

  6. GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Blair Briggs; John D. Bess; Jim Gulliford

    2011-09-01

Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated in the ICSBEP in 2007. Now, there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' [1] have increased from 442 evaluations (38000 pages), containing benchmark specifications for 3955 critical or subcritical configurations, to 516 evaluations (nearly 55000 pages), containing benchmark specifications for 4405 critical or subcritical configurations in the 2010 Edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' [2] have increased from 16 different experimental series that were performed at 12 different reactor facilities to 53 experimental series that were performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project), and verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the ICSBEP and the IRPhEP will be discussed in the full paper, selected benchmarks that have been added to the ICSBEP Handbook will be highlighted, and a preview of the new benchmarks that will appear in the September 2011 edition of the Handbook will be provided. Accomplishments of the IRPhEP will also be highlighted and the future of both projects will be discussed. REFERENCES: [1] International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03/I-IX, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), September 2010 Edition, ISBN 978-92-64-99140-8. [2] International Handbook of Evaluated Reactor Physics Benchmark Experiments, NEA/NSC/DOC(2006)1, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), March 2011 Edition, ISBN 978-92-64-99141-5.

  7. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  8. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    NASA Astrophysics Data System (ADS)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a fair measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today’s benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. The main focus is to measure the adaptability of a database management system according to shifting workloads. We will give details on our design approach that uses sophisticated pattern analysis and data mining techniques.

  9. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    PubMed

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a 30 cm diameter and 20 cm length cylinder. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst ±5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately ±2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.

  10. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.

  11. Copper benchmark experiment for the testing of JEFF-3.2 nuclear data for fusion applications

    NASA Astrophysics Data System (ADS)

    Angelone, M.; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villar, R.; Klix, A.; Fischer, U.; Kodeli, I.; Perel, R. L.; Pohorecky, W.

    2017-09-01

A neutronics benchmark experiment on a pure Copper block (dimensions 60 × 70 × 70 cm³) aimed at testing and validating the recent nuclear data libraries for fusion applications was performed in the frame of the European Fusion Program at the 14 MeV ENEA Frascati Neutron Generator (FNG). Reaction rates, neutron flux spectra and doses were measured using different experimental techniques (e.g. activation foils techniques, NE213 scintillator and thermoluminescent detectors). This paper first summarizes the analyses of the experiment carried out using the MCNP5 Monte Carlo code and the European JEFF-3.2 library. Large discrepancies between calculation (C) and experiment (E) were found for the reaction rates both in the high and low neutron energy range. The analysis was complemented by sensitivity/uncertainty analyses (S/U) using the deterministic and Monte Carlo SUSD3D and MCSEN codes, respectively. The S/U analyses enabled the identification of the cross sections and energy ranges which are mostly affecting the calculated responses. The largest discrepancy among the C/E values was observed for the thermal (capture) reactions, indicating severe deficiencies in the 63,65Cu capture and elastic cross sections at lower rather than at high energy. Deterministic and MC codes produced similar results. The 14 MeV copper experiment and its analysis thus call for a revision of the JEFF-3.2 copper cross section and covariance data evaluation. A new analysis of the experiment was performed with the MCNP5 code using the revised JEFF-3.3-T2 library released by NEA and a new, not yet distributed, revised JEFF-3.2 Cu evaluation produced by KIT. A noticeable improvement of the C/E results was obtained with both new libraries.

  12. SU-F-T-513: Dosimetric Validation of Spatially Fractionated Radiotherapy Using Gel Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papanikolaou, P; Watts, L; Kirby, N

    2016-06-15

Purpose: Spatially fractionated radiation therapy, also known as GRID therapy, is used to treat large solid tumors by irradiating the target to a single dose of 10–20 Gy through spatially distributed beamlets. We have investigated the use of a 3D gel for dosimetric characterization of GRID therapy. Methods: GRID therapy is an external beam analog of volumetric brachytherapy, whereby we produce a distribution of hot and cold dose columns inside the tumor volume. Such a distribution can be produced with a block or by using a checker-like pattern with MLC. We have studied both types of GRID delivery. A cube-shaped acrylic phantom was filled with polymer gel and served as a 3D dosimeter. The phantom was scanned and the CT images were used to produce two plans in Pinnacle, one with the grid block and one with the MLC defined grid. A 6 MV beam was used for the plan with a prescription of 1500 cGy at dmax. The irradiated phantom was scanned in a 3T MRI scanner. Results: 3D dose maps were derived from the MR scans of the gel dosimeter and were found to be in good agreement with the predicted dose distribution from the RTP system. Gamma analysis showed a passing rate of 93% for 5% dose and 2 mm DTA scoring criteria. Both relative and absolute dose profiles are in good agreement, except in the peripheral beamlets where the gel measured slightly higher dose, possibly because of the changing head scatter conditions that the RTP is not fully accounting for. Our results have also been benchmarked against ionization chamber measurements. Conclusion: We have investigated the use of a polymer gel for the 3D dosimetric characterization and evaluation of GRID therapy. Our results demonstrated that the planning system can predict fairly accurately the dose distribution for GRID type therapy.

  13. Formative usability evaluation of a fixed-dose pen-injector platform device

    PubMed Central

    Lange, Jakob; Nemeth, Tobias

    2018-01-01

Background: This article for the first time presents a formative usability study of a fixed-dose pen injector platform device used for the subcutaneous delivery of biopharmaceuticals, primarily for self-administration by the patient. The study was conducted with a user population of both naïve and experienced users across a range of ages. The goals of the study were to evaluate whether users could use the devices safely and effectively relying on the instructions for use (IFU) for guidance, as well as to benchmark the device against another similar injector established in the market. Further objectives were to capture any usability issues and obtain participants’ subjective ratings on the properties and performance of both devices. Methods: A total of 20 participants in three groups studied the IFU and performed simulated injections into an injection pad. Results: All participants were able to use the device successfully. The device was well appreciated by all users, with maximum usability feedback scores reported by 90% or more of participants on handling forces and device feedback, and by 85% or more on fit and grip of the device. The presence of clear audible and visible feedbacks upon successful loading of a dose and completion of injection was seen to be a significant improvement over the benchmark injector. Conclusion: The observation that the platform device can be safely and efficiently used by all user groups provides confidence that the device and IFU in their current form will pass future summative testing in specific applications. PMID:29670411

  14. Inverse treatment planning for spinal robotic radiosurgery: an international multi-institutional benchmark trial.

    PubMed

    Blanck, Oliver; Wang, Lei; Baus, Wolfgang; Grimm, Jimm; Lacornerie, Thomas; Nilsson, Joakim; Luchkovskyi, Sergii; Cano, Isabel Palazon; Shou, Zhenyu; Ayadi, Myriam; Treuer, Harald; Viard, Romain; Siebert, Frank-Andre; Chan, Mark K H; Hildebrandt, Guido; Dunst, Jürgen; Imhoff, Detlef; Wurster, Stefan; Wolff, Robert; Romanelli, Pantaleo; Lartigau, Eric; Semrau, Robert; Soltys, Scott G; Schweikard, Achim

    2016-05-08

    Stereotactic radiosurgery (SRS) is the accurate, conformal delivery of high-dose radiation to well-defined targets while minimizing normal-structure doses via steep dose gradients. While inverse treatment planning (ITP) with computerized optimization algorithms is routine, many aspects of the planning process remain user-dependent. We performed an international, multi-institutional benchmark trial to study planning variability and to analyze preferable ITP practice for spinal robotic radiosurgery. Ten SRS treatment plans were generated for a complex-shaped spinal metastasis with 21 Gy in 3 fractions and tight constraints for the spinal cord (V14Gy < 2 cc, V18Gy < 0.1 cc) and target (coverage > 95%). The resulting plans were rated on a scale from 1 to 4 (excellent to poor) in five categories (constraint compliance, optimization goals, low-dose regions, ITP complexity, and clinical acceptability) by a blinded review panel. Additionally, the plans were mathematically rated based on plan indices (critical structure and target doses, conformity, monitor units, normal tissue complication probability, and treatment time) and compared to the human rankings. The treatment plans and the reviewers' rankings varied substantially among the participating centers. The average mean overall rank was 2.4 (1.2-4.0), and 8/10 plans were rated excellent in at least one category by at least one reviewer. The mathematical rankings agreed with the mean overall human rankings in 9/10 cases, pointing toward the possibility of purely mathematical plan quality comparison. The final rankings revealed that a plan with a well-balanced trade-off among all planning objectives was preferred for treatment by most participants, reviewers, and the mathematical ranking system. Furthermore, this plan was generated with simple planning techniques. Our multi-institutional planning study found wide variability in ITP approaches for spinal robotic radiosurgery. The agreement of participants, reviewers, and the mathematical ranking on preferable treatment plans and ITP techniques indicates that consensus on treatment planning and plan quality can be reached for spinal robotic radiosurgery.

  15. SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson, S; Mehta, V

    Purpose: The "SMART" (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25 Gy over 5 days followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but it is also logistically challenging due to the multidisciplinary involvement. Because of the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics (PQMs™) were created to calculate an objective scoring function for each plan. This allows an objective assessment of plan quality and provides a benchmark for plan improvement for subsequent patients. The priorities of the various components were based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics used in scoring included doses to OAR and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), and grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patient experienced pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high-dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, they will be easy to investigate and incorporate into the metrics. This will improve the safe delivery of large doses for these patients.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Y; Lacroix, F; Lavallee, M

    Purpose: To evaluate the commercially released Collapsed Cone convolution-based (CCC) dose calculation module of the Elekta Oncentra Brachy (OcB) treatment planning system (TPS). Methods: An all-water phantom was used to perform TG-43 benchmarks with a single source and with seventeen sources, separately. Furthermore, four real-patient heterogeneous geometries (chest wall, lung, breast, and prostate) were used; they were selected as clinically representative of classes of anatomy that pose clear challenges. The plans were used as is (no modification). For each case, TG-43 and CCC calculations were performed in the OcB TPS, with TG-186-recommended materials properly assigned to the ROIs. For comparison, a Monte Carlo (MC) simulation was run for each case with the same material scheme and grid mesh as the TPS calculations. Both modes of CCC (standard and high quality) were tested. Results: For the benchmark case, the CCC dose, divided by that of TG-43, yields hot and cold spots in a radial pattern. The pattern for the high-quality mode is denser than that for the standard mode and is characteristic of angular discretization. The total deviation ((hot-cold)/TG-43) is 18% for the standard mode and 11% for the high-quality mode. Seventeen dwell positions help to reduce the "ray effect", reducing the total deviation to 6% (standard) and 5% (high quality), respectively. For the four patient cases, CCC produces, as expected, more realistic dose distributions than TG-43. Close agreement was observed between CCC and MC for all isodose lines from 20% and up; the 10% isodose line of CCC appears shifted compared to that of MC. The DVH plots show dose deviations of CCC from MC in small-volume, high-dose regions (>100% isodose). For the patient cases, the difference between the standard and high-quality modes is almost indiscernible. Conclusion: The Oncentra Brachy CCC algorithm marks a significant dosimetric improvement relative to TG-43 in real-patient cases. Further research is recommended regarding the clinical implications of the above observations. Support provided by a CIHR grant; the CCC system was provided by Elekta-Nucletron.

  17. How to benchmark methods for structure-based virtual screening of large compound libraries.

    PubMed

    Christofferson, Andrew J; Huang, Niu

    2012-01-01

    Structure-based virtual screening is a useful computational technique for ligand discovery. To systematically evaluate different docking approaches, it is important to have a consistent benchmarking protocol that is both relevant and unbiased. Here, we describe the design of a benchmarking data set for docking screen assessment, a standard docking screening process, and the analysis and presentation of the enrichment of annotated ligands relative to a background decoy database.

  18. Do Medicare Advantage Plans Minimize Costs? Investigating the Relationship Between Benchmarks, Costs, and Rebates.

    PubMed

    Zuckerman, Stephen; Skopec, Laura; Guterman, Stuart

    2017-12-01

    Medicare Advantage (MA), the program that allows people to receive their Medicare benefits through private health plans, uses a benchmark-and-bidding system to induce plans to provide benefits at lower costs. However, prior research suggests medical costs, profits, and other plan costs are not as low under this system as they might otherwise be. Our objective was to examine how well the current system encourages MA plans to bid their lowest cost, by analyzing the relationship between costs and bonuses (rebates) and the benchmarks Medicare uses in determining plan payments. We performed regression analysis using 2015 data for HMO and local PPO plans. Costs and rebates are higher for MA plans in areas with higher benchmarks, and plan costs vary less than benchmarks do. A one-dollar increase in benchmarks is associated with 32-cent-higher plan costs and a 52-cent-higher rebate, even when controlling for market and plan factors that can affect costs. This suggests the current benchmark-and-bidding system allows plans to bid higher than local input prices and other market conditions would seem to warrant. To incentivize MA plans to maximize efficiency and minimize costs, Medicare could change the way benchmarks are set or used.
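
    A minimal sketch of the kind of regression reported above, fitting plan costs and rebates against benchmarks by ordinary least squares; the plan-level data, slopes, and noise below are hypothetical placeholders, not the study's 2015 dataset.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical plan-level data: county benchmark, plan cost, rebate (all $ per member per month)
    n = 500
    benchmark = rng.uniform(700, 1000, n)
    cost = 500 + 0.32 * benchmark + rng.normal(0, 40, n)      # assumed 32-cent slope
    rebate = -200 + 0.52 * benchmark + rng.normal(0, 30, n)   # assumed 52-cent slope

    def ols_fit(y, x):
        """Return [intercept, slope] from a simple OLS fit of y on x."""
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    b_cost = ols_fit(cost, benchmark)
    b_rebate = ols_fit(rebate, benchmark)
    print(f"cost per $1 of benchmark:   {b_cost[1]:.2f}")
    print(f"rebate per $1 of benchmark: {b_rebate[1]:.2f}")
    ```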

  19. Benchmarking of a treatment planning system for spot scanning proton therapy: Comparison and analysis of robustness to setup errors of photon IMRT and proton SFUD treatment plans of base of skull meningioma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.

    Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.

  20. Benchmarking health system performance across regions in Uganda: a systematic analysis of levels and trends in key maternal and child health interventions, 1990-2011.

    PubMed

    Roberts, D Allen; Ng, Marie; Ikilezi, Gloria; Gasasira, Anne; Dwyer-Lindgren, Laura; Fullman, Nancy; Nalugwa, Talemwa; Kamya, Moses; Gakidou, Emmanuela

    2015-12-03

    Globally, countries are increasingly prioritizing the reduction of health inequalities and provision of universal health coverage. While national benchmarking has become more common, such work at subnational levels is rare. The timely and rigorous measurement of local levels and trends in key health interventions and outcomes is vital to identifying areas of progress and detecting early signs of stalled or declining health system performance. Previous studies have yet to provide a comprehensive assessment of Uganda's maternal and child health (MCH) landscape at the subnational level. By triangulating a number of different data sources - population censuses, household surveys, and administrative data - we generated regional estimates of 27 key MCH outcomes, interventions, and socioeconomic indicators from 1990 to 2011. After calculating source-specific estimates of intervention coverage, we used a two-step statistical model involving a mixed-effects linear model as an input to Gaussian process regression to produce regional-level trends. We also generated national-level estimates and constructed an indicator of overall intervention coverage based on the average of 11 high-priority interventions. National estimates often veiled large differences in coverage levels and trends across Uganda's regions. Under-5 mortality declined dramatically, from 163 deaths per 1,000 live births in 1990 to 85 deaths per 1,000 live births in 2011, but a large gap between Kampala and the rest of the country persisted. Uganda rapidly scaled up a subset of interventions across regions, including household ownership of insecticide-treated nets, receipt of artemisinin-based combination therapies among children under 5, and pentavalent immunization. Conversely, most regions saw minimal increases, if not actual declines, in the coverage of indicators that required multiple contacts with the health system, such as four or more antenatal care visits, three doses of oral polio vaccine, and two doses of intermittent preventive therapy during pregnancy. Some of the regions with the lowest levels of overall intervention coverage in 1990, such as North and West Nile, saw marked progress by 2011; nonetheless, sizeable disparities remained between Kampala and the rest of the country. Countrywide, overall coverage increased from 40% in 1990 to 64% in 2011, but coverage in 2011 ranged from 57% to 70% across regions. The MCH landscape in Uganda has, for the most part, improved between 1990 and 2011. Subnational benchmarking quantified the persistence of geographic health inequalities and identified regions in need of additional health systems strengthening. The tracking and analysis of subnational health trends should be conducted regularly to better guide policy decisions and strengthen responsiveness to local health needs.
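
    The two-step smoothing described above can be illustrated with a much-simplified stand-in: a plain linear trend in place of the mixed-effects first stage, followed by Gaussian process regression on the residuals. The coverage values, kernel, and length scale below are assumptions for illustration only, not the study's model.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical source-specific coverage estimates (proportion) for one region and indicator
    years = np.array([1991, 1995, 1995, 2000, 2001, 2006, 2006, 2011], dtype=float)
    coverage = np.array([0.35, 0.38, 0.42, 0.47, 0.45, 0.58, 0.62, 0.66])

    # Step 1: simple linear trend (a stand-in for the mixed-effects first stage)
    slope, intercept = np.polyfit(years, coverage, 1)
    residuals = coverage - (intercept + slope * years)

    # Step 2: Gaussian process regression on the residuals to capture non-linear deviations
    kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-3)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(years.reshape(-1, 1), residuals)

    grid = np.arange(1990, 2012, dtype=float).reshape(-1, 1)
    smooth = intercept + slope * grid.ravel() + gpr.predict(grid)
    print(np.round(smooth[[0, 10, 21]], 3))  # smoothed estimates for 1990, 2000, 2011
    ```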

  1. Modelling anaerobic co-digestion in Benchmark Simulation Model No. 2: Parameter estimation, substrate characterisation and plant-wide integration.

    PubMed

    Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf

    2016-07-01

    Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dose strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate co-substrate COD for the Benchmark Simulation Model No. 2 (BSM2) was developed. This procedure is also applicable to the Anaerobic Digestion Model No. 1 (ADM1). Long chain fatty acid inhibition was included in the ADM1 model to allow for realistic modelling of lipid-rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, the protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested against bio-methane potential (BMP) tests of three substrates, each rich in carbohydrates, proteins or lipids, with good predictive capability in all three cases. The model was then applied to a plant-wide simulation study which confirmed the positive effects of co-digestion on methane production and total operational cost. Simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid-rich substrates without prolonged disturbances. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis, with a specific focus on transient events, through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives, and the international participation in terms of organization, country, and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)

  3. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  4. Hospital benchmarking: are U.S. eye hospitals ready?

    PubMed

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added "public" value of benchmarking in health care is questionable.

  5. Initial characterization, dosimetric benchmark and performance validation of Dynamic Wave Arc.

    PubMed

    Burghelea, Manuela; Verellen, Dirk; Poels, Kenneth; Hung, Cecilia; Nakamura, Mitsuhiro; Dhont, Jennifer; Gevaert, Thierry; Van den Begin, Robbe; Collen, Christine; Matsuo, Yukinori; Kishi, Takahiro; Simon, Viorica; Hiraoka, Masahiro; de Ridder, Mark

    2016-04-29

    Dynamic Wave Arc (DWA) is a clinical approach designed to maximize the versatility of the Vero SBRT system by synchronizing the gantry-ring noncoplanar movement with D-MLC optimization. The purpose of this study was to verify the delivery accuracy of the DWA approach and to evaluate its potential dosimetric benefits. DWA is an extended form of VMAT with a continuously varying ring position. The main difference between the optimization modules of VMAT and DWA lies in the angular spacing: the DWA algorithm does not consider the gantry spacing alone, but only the Euclidean norm of the ring and gantry angles. A preclinical version of RayStation v4.6 (RaySearch Laboratories, Sweden) was used to create patient-specific wave arc trajectories for 31 patients with various anatomical tumor regions (prostate, oligometastatic cases, centrally located non-small cell lung cancer (NSCLC), and locally advanced pancreatic cancer, LAPC). DWA was benchmarked against the current clinical approaches and coplanar VMAT. Each plan was evaluated with regard to dose distribution, modulation complexity (MCS), monitor units, and treatment time efficiency. The delivery accuracy was evaluated using a 2D diode array that takes into consideration the multi-dimensionality of DWA during dose reconstruction. In centrally located NSCLC cases, DWA improved the low-dose spillage by 20%, while target coverage was increased by 17% compared to 3D CRT. The structures that benefited most from DWA were the proximal bronchus and esophagus, with the maximal dose being reduced by 17% and 24%, respectively. For prostate and LAPC, neither technique was clearly superior; however, DWA reduced the delivery time by more than 65% relative to IMRT. A steeper dose gradient outside the target was observed for all treatment sites (p < 0.01) with DWA. Except for the oligometastatic cases, where the DWA MCS values indicate higher modulation, both DWA and VMAT provide plans of similar complexity. The average γ (3%/3 mm) passing rate for DWA plans was 99.2 ± 1% (range 96.8 to 100%). DWA proved to be a fully functional treatment technique, allowing additional flexibility in dose shaping while preserving dosimetrically robust delivery and treatment times comparable with coplanar VMAT.

  6. Potential Deep Seated Landslide Mapping from Various Temporal Data - Benchmark, Aerial Photo, and SAR

    NASA Astrophysics Data System (ADS)

    Wang, Kuo-Lung; Lin, Jun-Tin; Lee, Yi-Hsuan; Lin, Meei-Ling; Chen, Chao-Wei; Liao, Ray-Tang; Chi, Chung-Chi; Lin, Hsi-Hung

    2016-04-01

    Landslides become hazards only when development takes place in areas of high landslide potential. This study attempts to map deep-seated landslides before landslide initiation. A study area in central Taiwan was selected; its geological setting is distinctive, consisting of slate. The major bedding direction in this area is northeast, and the dip ranges from 30 to 75 degrees to the southeast. Several deep-seated landslides were discovered on slopes dipping in the same direction as the bedding after rainfall events. Benchmarks from 2002 to 2009 are used in this study; they were measured along Highway No. 14B, a road constructed along the mountain crest. Taiwan is located at the boundary between oceanic and continental plates, and the elevation of its mountains is rising according to most GPS stations and benchmarks on the island. The same trend is observed for benchmarks in this area, but some benchmarks are located within landslide areas, so their elevation change is below average and even negative. Aerial photos from 1979 to 2007 were used for orthophoto generation. Land use changed markedly over those 30 years, and enlargement of the river channel is also observed in this area. Both the benchmarks and the aerial photos indicate landslide potential in this area, but the extent of the potential landslides is not easy to define from these data alone. SAR data were therefore adopted. DInSAR and SBAS analyses were used, based on ALOS/PALSAR data from 2006 to 2010. The DInSAR analysis shows that landslides can be mapped, but the error is not easy to reduce; it possibly arises from several conditions such as vegetation, clouds, and water vapor. To overcome this problem, time-series (SBAS) analysis was adopted. The SBAS results show that large deep-seated landslides in this area are readily mapped and that the accuracy of the vertical displacement is reasonable.

  7. CT Dose Optimization in Pediatric Radiology: A Multiyear Effort to Preserve the Benefits of Imaging While Reducing the Risks.

    PubMed

    Greenwood, Taylor J; Lopez-Costa, Rodrigo I; Rhoades, Patrick D; Ramírez-Giraldo, Juan C; Starr, Matthew; Street, Mandie; Duncan, James; McKinstry, Robert C

    2015-01-01

    The marked increase in radiation exposure from medical imaging, especially in children, has caused considerable alarm and spurred efforts to preserve the benefits but reduce the risks of imaging. Applying the principles of the Image Gently campaign, data-driven process and quality improvement techniques such as process mapping and flowcharting, cause-and-effect diagrams, Pareto analysis, statistical process control (control charts), failure mode and effects analysis, "lean" or Six Sigma methodology, and closed feedback loops led to a multiyear program that has reduced overall computed tomographic (CT) examination volume by more than fourfold and concurrently decreased radiation exposure per CT study without compromising diagnostic utility. This systematic approach involving education, streamlining access to magnetic resonance imaging and ultrasonography, auditing with comparison with benchmarks, applying modern CT technology, and revising CT protocols has led to a more than twofold reduction in CT radiation exposure between 2005 and 2012 for patients at the authors' institution while maintaining diagnostic utility. (©)RSNA, 2015.

  8. Benchmarking routine psychological services: a discussion of challenges and methods.

    PubMed

    Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick

    2014-01-01

    Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. To present a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Some limitations concerned significant heterogeneity among data sources, and wide variations in ES and data completeness.
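
    The benchmarking comparison above hinges on uncontrolled pre-post effect sizes. The sketch below shows that computation with invented PHQ-9 scores; only the high/average/poor benchmark values are taken from the abstract, and the effect-size formula (mean change divided by the pre-treatment SD) is one common convention, not necessarily the exact one used in the report.

    ```python
    import numpy as np

    def prepost_effect_size(pre, post):
        """Uncontrolled pre-post effect size: mean change divided by the SD of pre-treatment scores."""
        pre, post = np.asarray(pre, float), np.asarray(post, float)
        return (pre.mean() - post.mean()) / pre.std(ddof=1)

    # Hypothetical PHQ-9 scores for one service (higher = more severe depression)
    pre_scores = np.array([18, 15, 20, 12, 16, 19, 14, 17, 13, 21])
    post_scores = np.array([10, 9, 14, 8, 11, 12, 7, 13, 9, 15])

    es = prepost_effect_size(pre_scores, post_scores)
    benchmarks = {"high": 0.91, "average": 0.73, "poor": 0.46}  # depression benchmarks from the abstract
    band = ("high" if es >= benchmarks["high"]
            else "average" if es >= benchmarks["average"] else "below average")
    print(f"service effect size = {es:.2f} ({band} relative to the IAPT depression benchmarks)")
    ```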

  9. Electric load shape benchmarking for small- and medium-sized commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Xuan; Hong, Tianzhen; Chen, Yixing

    Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as the energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, lacking consideration of the fluctuation of energy use behavior over time, which reveals time-of-use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data from over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques are generic and flexible for future datasets of other building types and in other utility territories.
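
    As an illustration of the load-shape idea, the sketch below normalizes hypothetical 24-hour smart-meter profiles to unit daily energy and clusters them with k-means so a building can be compared against its peer group; the synthetic profiles, cluster count, and peer-distance metric are assumptions, not the methods of the study.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)

    def synthetic_profiles(n, peak_hour):
        """Generate hypothetical hourly load profiles (kW) with a peak at peak_hour."""
        hours = np.arange(24)
        base = 5 + 20 * np.exp(-0.5 * ((hours - peak_hour) / 3.0) ** 2)
        return base + rng.normal(0, 1.5, size=(n, 24))

    profiles = np.vstack([synthetic_profiles(50, 12),   # office-like, noon peak
                          synthetic_profiles(50, 19)])  # retail-like, evening peak

    # Normalize each profile to its daily total so clustering compares shape, not size
    shapes = profiles / profiles.sum(axis=1, keepdims=True)

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(shapes)

    # Benchmark one building against its peer group: distance from its cluster centroid
    target = shapes[0]
    centroid = km.cluster_centers_[km.labels_[0]]
    peer_dist = np.linalg.norm(shapes[km.labels_ == km.labels_[0]] - centroid, axis=1)
    closer = (np.linalg.norm(target - centroid) <= peer_dist).mean()
    print(f"building 0 is at least as close to its peer-group centroid as {100 * closer:.0f}% of peers")
    ```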

  10. Electric load shape benchmarking for small- and medium-sized commercial buildings

    DOE PAGES

    Luo, Xuan; Hong, Tianzhen; Chen, Yixing; ...

    2017-07-28

    Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as the energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, lacking consideration of the fluctuation of energy use behavior over time, which reveals time-of-use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data from over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques are generic and flexible for future datasets of other building types and in other utility territories.

  11. Application of Benchmark Examples to Assess the Single and Mixed-Mode Static Delamination Propagation Capabilities in ANSYS

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The application of benchmark examples for the assessment of quasi-static delamination propagation capabilities is demonstrated for ANSYS. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation in commercial finite element codes based on the virtual crack closure technique (VCCT). The examples selected are based on two-dimensional finite element models of Double Cantilever Beam (DCB), End-Notched Flexure (ENF), Mixed-Mode Bending (MMB) and Single Leg Bending (SLB) specimens. First, the quasi-static benchmark examples were recreated for each specimen using the current implementation of VCCT in ANSYS. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in the finite element software. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for three-dimensional solid models is required.

  12. Impact of dose engine algorithm in pencil beam scanning proton therapy for breast cancer.

    PubMed

    Tommasino, Francesco; Fellin, Francesco; Lorentini, Stefano; Farace, Paolo

    2018-06-01

    Proton therapy for the treatment of breast cancer is attracting increasing interest, owing to the potential reduction of radiation-induced side effects such as cardiac and pulmonary toxicity. While several in silico studies have demonstrated the gain in plan quality offered by pencil beam scanning (PBS) compared to passive scattering techniques, the related dosimetric uncertainties have been poorly investigated so far. Five breast cancer patients were planned with the Raystation 6 analytical pencil beam (APB) and Monte Carlo (MC) dose calculation algorithms. Plans were optimized with APB, and MC was then used to recalculate the dose distribution. Movable snout and beam splitting techniques (i.e. using two sub-fields for the same beam entrance, one with and the other without a range shifter) were considered. PTV dose statistics were recorded. The same planning configurations were adopted for the experimental benchmark. Dose distributions were measured with a 2D array of ionization chambers and compared to the APB- and MC-calculated ones by means of a γ analysis (agreement criteria 3%, 3 mm). Our results indicate that, when using proton PBS for breast cancer treatment, the Raystation 6 APB algorithm does not provide sufficient accuracy, especially with large air gaps. In contrast, the MC algorithm resulted in much higher accuracy in all beam configurations tested and is to be recommended. Centers where an MC algorithm is not yet available should consider a careful use of APB, possibly combined with a movable snout system or in any case with strategies aimed at minimizing air gaps. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  13. A comparative study of space radiation organ doses and associated cancer risks using PHITS and HZETRN.

    PubMed

    Bahadori, Amir A; Sato, Tatsuhiko; Slaba, Tony C; Shavers, Mark R; Semones, Edward J; Van Baalen, Mary; Bolch, Wesley E

    2013-10-21

    NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.
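
    Effective dose is the tissue-weighted sum of organ dose equivalents, E = Σ_T w_T H_T. The sketch below illustrates only that bookkeeping, using an abbreviated, illustrative set of tissue weighting factors and invented organ dose equivalents rather than values from the study.

    ```python
    # Illustrative tissue weighting factors (subset; remaining tissues omitted for brevity)
    TISSUE_WEIGHTS = {
        "lung": 0.12, "stomach": 0.12, "colon": 0.12, "red_marrow": 0.12,
        "breast": 0.12, "gonads": 0.08, "bladder": 0.04, "liver": 0.04,
        "thyroid": 0.04, "skin": 0.01,
    }

    def effective_dose(organ_dose_equivalents_mSv):
        """E = sum over tissues of w_T * H_T, restricted to tissues with both a weight and a dose."""
        return sum(TISSUE_WEIGHTS[t] * h
                   for t, h in organ_dose_equivalents_mSv.items()
                   if t in TISSUE_WEIGHTS)

    # Hypothetical organ dose equivalents (mSv) from a transport calculation
    doses = {"lung": 120.0, "stomach": 95.0, "colon": 100.0, "red_marrow": 110.0,
             "breast": 90.0, "gonads": 85.0, "bladder": 80.0, "liver": 105.0,
             "thyroid": 70.0, "skin": 60.0}

    print(f"effective dose ≈ {effective_dose(doses):.1f} mSv (partial tissue set)")
    ```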

  14. A comparative study of space radiation organ doses and associated cancer risks using PHITS and HZETRN

    NASA Astrophysics Data System (ADS)

    Bahadori, Amir A.; Sato, Tatsuhiko; Slaba, Tony C.; Shavers, Mark R.; Semones, Edward J.; Van Baalen, Mary; Bolch, Wesley E.

    2013-10-01

    NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.

  15. Additional adjoint Monte Carlo studies of the shielding of concrete structures against initial gamma radiation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.; Cohen, M.O.

    1975-02-01

    The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on- and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basement detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions by additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained. (auth)

  16. Benchmarking and the laboratory

    PubMed Central

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  17. Performance Analysis of the ARL Linux Networx Cluster

    DTIC Science & Technology

    2004-06-01

    Benchmarked codes included OVERFLOW, GAMESS, COBALT, LS-DYNA, and FLUENT. Benchmark runs used processors selected by the SGE scheduler, and all benchmarks on the Origin 3800 were executed using IRIX cpusets. One benchmark case defines a missile with grid fins consisting of seventeen million cells [3].

  18. Analytical dose evaluation of neutron and secondary gamma-ray skyshine from nuclear facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, K.; Nakamura, T.

    1985-11-01

    The skyshine dose distributions of neutrons and secondary gamma rays were calculated systematically using the Monte Carlo method for distances up to 2 km from the source. The energy of the source neutrons ranged from thermal to 400 MeV; their emission angle, from 0 to 90 degrees from the vertical, was treated with a direction-cosine distribution divided into five equal intervals. The calculated dose distributions D(r) were fitted to the formula D(r) = Q exp(-r/lambda)/r, where Q and lambda are slowly varying functions of energy. This formula was applied to benchmark problems of neutron skyshine from fission, fusion, and accelerator facilities, and good agreement was achieved. The formula should be quite useful for shielding design of various nuclear facilities.
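
    The fitted formula D(r) = Q exp(-r/λ)/r is simple to evaluate once Q and λ are known; the sketch below uses invented parameter values purely to show the fall-off with distance, since the actual fitted values depend on source energy and emission angle.

    ```python
    import numpy as np

    def skyshine_dose(r_m, Q, lam_m):
        """Fitted skyshine dose formula D(r) = Q * exp(-r/lambda) / r."""
        r_m = np.asarray(r_m, dtype=float)
        return Q * np.exp(-r_m / lam_m) / r_m

    # Hypothetical parameters: Q in (dose x m), lambda (attenuation length) in m
    Q, lam = 1.0e3, 600.0
    for r in (100, 500, 1000, 2000):
        print(f"r = {r:5d} m   D = {skyshine_dose(r, Q, lam):.3e} (arbitrary units)")
    ```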

  19. [Using fractional polynomials to estimate the safety threshold of fluoride in drinking water].

    PubMed

    Pan, Shenling; An, Wei; Li, Hongyan; Yang, Min

    2014-01-01

    To study the dose-response relationship between fluoride content in drinking water and the prevalence of dental fluorosis on the national scale, and to determine the safety threshold of fluoride in drinking water. Meta-regression analysis was applied to the data from the 2001-2002 national endemic fluorosis survey of key wards. First, fractional polynomials (FP) were used to establish a fixed-effect model and determine the best FP structure; restricted maximum likelihood (REML) was then used to estimate the between-study variance, and the best random-effect model was established. The best FP structure was a first-order logarithmic transformation. Based on the best random-effect model, the benchmark dose (BMD) of fluoride in drinking water and its lower limit (BMDL) were calculated as 0.98 mg/L and 0.78 mg/L, respectively. Fluoride in drinking water explained only 35.8% of the variability in prevalence; among the other influencing factors, ward type was significant, while temperature and altitude were not. The fractional-polynomial-based meta-regression method is simple and practical and provides a good fit; on this basis, the national safety threshold of fluoride in drinking water is determined to be 0.8 mg/L.
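
    A benchmark dose can be read off a fitted first-order log-dose model by inverting the dose-response curve at the chosen benchmark response. The sketch below is illustrative only: the logistic link, the coefficients, the low reference dose, and the 10% extra-risk definition are assumptions, not the meta-regression actually fitted in the study.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import expit

    # Assumed fitted model: prevalence p(d) = expit(b0 + b1 * ln(d)), d in mg/L
    b0, b1 = -2.0, 1.8          # illustrative coefficients, not the study's estimates

    def prevalence(dose):
        return expit(b0 + b1 * np.log(dose))

    def extra_risk(dose, ref_dose=0.1):
        """Extra risk relative to a low reference dose: (p(d) - p0) / (1 - p0)."""
        p0 = prevalence(ref_dose)
        return (prevalence(dose) - p0) / (1.0 - p0)

    bmr = 0.10                   # 10% extra-risk benchmark response
    bmd = brentq(lambda d: extra_risk(d) - bmr, 0.1, 5.0)
    print(f"BMD10 ≈ {bmd:.2f} mg/L under the assumed model")
    ```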

  20. Modeling Respiratory Toxicity of Authentic Lunar Dust

    NASA Technical Reports Server (NTRS)

    Santana, Patricia A.; James, John T.; Lam, Chiu-Wing

    2010-01-01

    The lunar expeditions of the Apollo program in the 1960s and early 1970s generated awareness of lunar dust exposures and their implications for future lunar exploration. Critical analyses of the reports from the Apollo crew members suggest that lunar dust is a mild respiratory and ocular irritant. Currently, NASA's space toxicology group is working with the Lunar Airborne Dust Toxicity Assessment Group (LADTAG) and the National Institute for Occupational Safety and Health (NIOSH) to investigate and examine toxic effects on the respiratory system of rats in order to establish permissible exposure levels (PELs) for human exposure to lunar dust. In collaboration with the space toxicology group, LADTAG, and NIOSH, the goal of the present research is to analyze dose-response curves from rat exposures seven and twenty-eight days after intrapharyngeal instillation, and to model the responses using the Benchmark Dose Software (BMDS) from the Environmental Protection Agency (EPA). Via this analysis, the relative toxicities of three types of Apollo 14 lunar dust samples and two control dusts, titanium dioxide (TiO2) and quartz, will be determined. This will be carried out for several toxicity endpoints, such as cell counts and biochemical markers in bronchoalveolar lavage fluid (BALF) harvested from the rats.
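
    For continuous endpoints such as BALF cell counts, BMD analyses commonly define the benchmark dose as the dose at which the fitted mean shifts by a fixed amount, for example one control-group standard deviation. The sketch below fits a Hill-type curve to invented data to show that workflow; it is not BMDS output and not the lunar dust data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    # Hypothetical dose-response data: dose (mg/kg) vs mean inflammatory cell score in BALF
    dose = np.array([0.0, 0.5, 1.0, 2.5, 5.0, 10.0])
    response = np.array([1.0, 1.4, 2.1, 3.8, 5.2, 5.9])  # invented group means
    control_sd = 0.6                                      # invented control-group SD

    def hill(d, background, vmax, k, n):
        """Hill-type continuous dose-response model."""
        return background + vmax * d ** n / (k ** n + d ** n)

    params, _ = curve_fit(hill, dose, response, p0=[1.0, 5.0, 2.0, 1.0],
                          bounds=([0, 0, 1e-3, 0.5], [10, 20, 20, 5]))

    # BMD for a continuous endpoint: dose at which the fitted mean rises one
    # control standard deviation above the fitted background response
    target = params[0] + control_sd
    bmd = brentq(lambda d: hill(d, *params) - target, 1e-6, 10.0)
    print(f"1-SD benchmark dose ≈ {bmd:.2f} mg/kg under the assumed Hill fit")
    ```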

  1. Coupling a continuous watershed-scale microbial fate and transport model with a stochastic dose-response model to estimate risk of illness in an urban watershed.

    PubMed

    Liao, Hehuan; Krometis, Leigh-Anne H; Kline, Karen

    2016-05-01

    Within the United States, elevated levels of fecal indicator bacteria (FIB) remain the leading cause of surface water-quality impairments requiring formal remediation plans under the federal Clean Water Act's Total Maximum Daily Load (TMDL) program. The sufficiency of compliance with numerical FIB criteria as the targeted endpoint of TMDL remediation plans may be questionable given poor correlations between FIB and pathogenic microorganisms and varying degrees of risk associated with exposure to different fecal pollution sources (e.g. human vs animal). The present study linked a watershed-scale FIB fate and transport model with a dose-response model to continuously predict human health risks via quantitative microbial risk assessment (QMRA), for comparison to regulatory benchmarks. This process permitted comparison of risks associated with different fecal pollution sources in an impaired urban watershed in order to identify remediation priorities. Results indicate that total human illness risks were consistently higher than the regulatory benchmark of 36 illnesses/1000 people for the study watershed, even when the predicted FIB levels were in compliance with the Escherichia coli geometric mean standard of 126CFU/100mL. Sanitary sewer overflows were associated with the greatest risk of illness. This is of particular concern, given increasing indications that sewer leakage is ubiquitous in urban areas, yet not typically fully accounted for during TMDL development. Uncertainty analysis suggested the accuracy of risk estimates would be improved by more detailed knowledge of site-specific pathogen presence and densities. While previous applications of the QMRA process to impaired waterways have mostly focused on single storm events or hypothetical situations, the continuous modeling framework presented in this study could be integrated into long-term water quality management planning, especially the United States' TMDL program, providing greater clarity to watershed stakeholders and decision-makers. Copyright © 2016 Elsevier B.V. All rights reserved.
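
    A stripped-down sketch of the QMRA step described above: Monte Carlo samples of ingested dose pushed through an exponential dose-response model and compared with the 36-illnesses-per-1000 benchmark. The exposure distribution, ingestion volume, dose-response parameter, and illness-given-infection ratio are placeholders, not the study's calibrated inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical exposure model: pathogens ingested per recreation event
    # (lognormal concentration per 100 mL times an assumed 30 mL ingestion volume)
    n_events = 100_000
    conc_per_100mL = rng.lognormal(mean=0.0, sigma=1.5, size=n_events)  # placeholder
    ingested = conc_per_100mL * (30.0 / 100.0)

    # Exponential dose-response model: P(infection) = 1 - exp(-r * dose)
    r = 0.05                       # assumed pathogen-specific parameter
    p_inf = 1.0 - np.exp(-r * ingested)
    p_ill = 0.5 * p_inf            # assumed probability of illness given infection

    illnesses_per_1000 = 1000.0 * p_ill.mean()
    benchmark = 36.0               # regulatory benchmark cited in the abstract
    verdict = "exceeds" if illnesses_per_1000 > benchmark else "meets"
    print(f"predicted: {illnesses_per_1000:.1f} illnesses per 1000 exposures "
          f"({verdict} the {benchmark:.0f}/1000 benchmark)")
    ```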

  2. Using benchmarking to identify inter-centre differences in persistent ductus arteriosus treatment: can we improve outcome?

    PubMed

    Jansen, Esther J S; Dijkman, Koen P; van Lingen, Richard A; de Vries, Willem B; Vijlbrief, Daniel C; de Boode, Willem P; Andriessen, Peter

    2017-10-01

    The aim of this study was to identify inter-centre differences in persistent ductus arteriosus treatment and their related outcomes. Materials and methods We carried out a retrospective, multicentre study including infants between 24+0 and 27+6 weeks of gestation in the period between 2010 and 2011. In all centres, echocardiography was used as the standard procedure to diagnose a patent ductus arteriosus and to document ductal closure. In total, 367 preterm infants were included. All four participating neonatal ICUs had a comparable number of preterm infants; however, differences were observed in the incidence of treatment (33-63%), choice and dosing of medication (ibuprofen or indomethacin), number of pharmacological courses (1-4), and the need for surgical ligation after failure of pharmacological treatment (8-52%). Despite the differences in treatment, we found no difference in short-term morbidity between the centres. Adjusted mortality showed an independent risk contribution of gestational age, birth weight, ductal ligation, and perinatal centre. Using benchmarking as a tool identified inter-centre differences. In these four perinatal centres, the factors that explain the differences in patent ductus arteriosus treatment are quite complex. Timing, choice of medication, and dosing are probably important determinants of successful patent ductus arteriosus closure.

  3. Key Performance Indicators in the Evaluation of the Quality of Radiation Safety Programs.

    PubMed

    Schultz, Cheryl Culver; Shaffer, Sheila; Fink-Bennett, Darlene; Winokur, Kay

    2016-08-01

    Beaumont is a multi-hospital health care system with a centralized radiation safety department. The health system operates under a broad-scope Nuclear Regulatory Commission license but also maintains several other limited-use NRC licenses in off-site facilities and clinics. The hospital-based program is expansive, including diagnostic radiology and nuclear medicine (molecular imaging), interventional radiology, a comprehensive cardiovascular program, multiple forms of radiation therapy (low dose rate brachytherapy, high dose rate brachytherapy, external beam radiotherapy, and gamma knife), and the Research Institute (including basic bench-top, human, and animal research). Each year, in the annual report, data are analyzed and then tracked and trended. While any summary report will, by nature, include items such as the number of pieces of equipment, inspections performed, staff monitored and educated, and other similar parameters, not all include an objective review of the quality and effectiveness of the program. Based on objective numerical data, Beaumont adopted seven key performance indicators. The assertion made is that key performance indicators can be used to establish benchmarks for evaluation and comparison of the effectiveness and quality of radiation safety programs. Based on over a decade of data collection and the adoption of key performance indicators, this paper demonstrates one way to establish objective benchmarking for radiation safety programs in the health care environment.

  4. Multiple exposures to indoor contaminants: Derivation of benchmark doses and relative potency factors based on male reprotoxic effects.

    PubMed

    Fournier, K; Tebby, C; Zeman, F; Glorennec, P; Zmirou-Navier, D; Bonvallot, N

    2016-02-01

    Semi-Volatile Organic Compounds (SVOCs) are commonly present in dwellings, and several are suspected of having effects on male reproductive function mediated by an endocrine-disruption mode of action. To improve knowledge of the health impact of these compounds, cumulative toxicity indicators are needed. This work derives Benchmark Doses (BMD) and Relative Potency Factors (RPF) for SVOCs acting on the male reproductive system through the same mode of action. We included SVOCs fulfilling the following conditions: detection frequency (>10%) in French dwellings, availability of data on the mechanism/mode of action for male reproductive toxicity, and availability of comparable dose-response relationships. Of 58 SVOCs selected, 18 induce a decrease in serum testosterone levels. Six have sufficient and comparable data to derive BMDs based on 10 or 50% of the response. The SVOCs inducing the largest decrease in serum testosterone concentration are, for 10%, bisphenol A (BMD10 = 7.72E-07 mg/kg bw/d; RPF10 = 7,033,679) and, for 50%, benzo[a]pyrene (BMD50 = 0.030 mg/kg bw/d; RPF50 = 1630), while benzyl butyl phthalate induces the smallest decrease (RPF10 and RPF50 = 0.095). This approach encompasses contaminants from diverse chemical families acting through similar modes of action, and makes a cumulative risk assessment in indoor environments possible. The main limitation remains the lack of comparable toxicological data. Copyright © 2015 Elsevier Inc. All rights reserved.
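
    The RPF approach rescales each compound's exposure by its potency relative to an index chemical and sums the results as an index-chemical-equivalent exposure. The sketch below shows only that arithmetic; the exposure levels are invented, and only the RPF values are taken from the abstract.

    ```python
    # Relative potency factors (10% benchmark response) quoted in the abstract
    RPF10 = {"bisphenol_A": 7_033_679, "benzyl_butyl_phthalate": 0.095}

    # Hypothetical indoor exposure estimates (mg/kg bw/day) -- placeholders only
    exposure = {"bisphenol_A": 5e-8, "benzyl_butyl_phthalate": 2e-4}

    # Cumulative exposure expressed in index-chemical equivalents:
    # sum over compounds of exposure_i * RPF_i
    cumulative = sum(exposure[c] * RPF10[c] for c in RPF10)
    print(f"cumulative index-chemical-equivalent exposure ≈ {cumulative:.3g} mg/kg bw/day")
    ```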

  5. Alcohol calibration of tests measuring skills related to car driving.

    PubMed

    Jongen, Stefan; Vuurman, Eric; Ramaekers, Jan; Vermeeren, Annemiek

    2014-06-01

    Medication and illicit drugs can have detrimental side effects that impair driving performance. A drug's impairing potential should be determined by well-validated, reliable, and sensitive tests and ideally be calibrated by benchmark drugs and doses. To date, no consensus has been reached on the issue of which psychometric tests are best suited for initial screening of a drug's driving impairment potential. The aim of this alcohol calibration study is to determine which performance tests are useful for measuring drug-induced impairment. The effects of alcohol are used to compare the psychometric quality between tests and as a benchmark to quantify performance changes in each test associated with potentially impairing drug effects. Twenty-four healthy volunteers participated in a double-blind, four-way crossover study. Treatments were placebo and three different doses of alcohol leading to blood alcohol concentrations (BACs) of 0.2, 0.5, and 0.8 g/L. Main effects of alcohol were found in most tests. Compared with placebo, performance in the Divided Attention Test (DAT) was significantly impaired after all alcohol doses, and performance in the Psychomotor Vigilance Test (PVT) and the Balance Test was impaired at BACs of 0.5 and 0.8 g/L. The largest effect sizes were found on postural balance with eyes open and mean reaction time in the divided attention and the psychomotor vigilance test. The preferred tests for initial screening are the DAT and the PVT, as these tests were the most sensitive to the impairing effects of alcohol and show considerable validity in assessing potential driving impairment.

  6. Benchmarking the MCNP Monte Carlo code with a photon skyshine experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsher, R.H.; Hsu, Hsiao Hua; Harvey, W.F.

    1993-07-01

    The MCNP Monte Carlo transport code is used by the Los Alamos National Laboratory Health and Safety Division for a broad spectrum of radiation shielding calculations. One such application involves the determination of skyshine dose for a variety of photon sources. To verify the accuracy of the code, it was benchmarked with the Kansas State Univ. (KSU) photon skyshine experiment of 1977. The KSU experiment for the unshielded source geometry was simulated in great detail to include the contribution of groundshine, in-silo photon scatter, and the effect of spectral degradation in the source capsule. The standard deviation of the KSU experimental data was stated to be 7%, while the statistical uncertainty of the simulation was kept at or under 1%. The results of the simulation agreed closely with the experimental data, generally to within 6%. At distances of under 100 m from the silo, the modeling of the in-silo scatter was crucial to achieving close agreement with the experiment. Specifically, scatter off the top layer of the source cask accounted for approximately 12% of the dose at 50 m. At distances >300 m, using the ⁶⁰Co line spectrum led to a dose overresponse as great as 19% at 700 m. It was necessary to use the actual source spectrum, which includes a Compton tail from photon collisions in the source capsule, to achieve close agreement with experimental data. These results highlight the importance of using Monte Carlo transport techniques to account for the nonideal features of even simple experiments.

  7. A deterministic partial differential equation model for dose calculation in electron radiotherapy.

    PubMed

    Duclous, R; Dubroca, B; Frank, M

    2010-07-07

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of delta electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  8. A deterministic partial differential equation model for dose calculation in electron radiotherapy

    NASA Astrophysics Data System (ADS)

    Duclous, R.; Dubroca, B.; Frank, M.

    2010-07-01

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of δ electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  9. A Mode-of-Action Approach for the Identification of Genotoxic Carcinogens

    PubMed Central

    Hernández, Lya G.; van Benthem, Jan; Johnson, George E.

    2013-01-01

    Distinguishing between clastogens and aneugens is vital in cancer risk assessment because the default assumption is that clastogens and aneugens have linear and non-linear dose-response curves, respectively. Any observed non-linearity must be supported by mode of action (MOA) analyses where biological mechanisms are linked with dose-response evaluations. For aneugens, the MOA has been well characterised as disruption of the mitotic machinery, where chromosome loss via micronuclei (MN) formation is an accepted endpoint used in risk assessment. In this study we performed the cytokinesis-block micronucleus assay and immunofluorescence visualisation of the mitotic machinery in human lymphoblastoid (AHH-1) and Chinese hamster fibroblast (V79) cell lines after treatment with the aneugen 17-β-oestradiol (E2). Results were compared to previously published data on bisphenol A (BPA) and rotenone. Two concentration-response approaches (the threshold [Td] and benchmark dose [BMD] approaches) were applied to derive a point of departure (POD) for in vitro MN induction. BMDs were also derived from the most sensitive carcinogenic endpoint. Ranking comparisons of the PODs from the in vitro MN and the carcinogenicity studies demonstrated a link between these two endpoints for BPA, E2, and rotenone. This analysis was extended to include 5 additional aneugens, 5 clastogens, and 3 mutagens, and further concentration- and dose-response correlations were observed between PODs from the in vitro MN and carcinogenicity studies. This approach is promising and may be further extended to other genotoxic carcinogens, where MOA and quantitative information from the in vitro MN studies could be used to further inform cancer risk assessment. PMID:23675539

  10. Population modelling to compare chronic external radiotoxicity between individual and population endpoints in four taxonomic groups.

    PubMed

    Alonzo, Frédéric; Hertel-Aas, Turid; Real, Almudena; Lance, Emilie; Garcia-Sanchez, Laurent; Bradshaw, Clare; Vives I Batlle, Jordi; Oughton, Deborah H; Garnier-Laplace, Jacqueline

    2016-02-01

    In this study, we modelled population responses to chronic external gamma radiation in 12 laboratory species (including aquatic and soil invertebrates, fish and terrestrial mammals). Our aim was to compare radiosensitivity between individual and population endpoints and to examine how internationally proposed benchmarks for environmental radioprotection protected species against various risks at the population level. To do so, we used population matrix models, combining life history and chronic radiotoxicity data (derived from laboratory experiments and described in the literature and the FREDERICA database) to simulate changes in population endpoints (net reproductive rate R0, asymptotic population growth rate λ, equilibrium population size Neq) for a range of dose rates. Elasticity analyses of models showed that population responses differed depending on the affected individual endpoint (juvenile or adult survival, delay in maturity or reduction in fecundity), the considered population endpoint (R0, λ or Neq) and the life history of the studied species. Among population endpoints, net reproductive rate R0 showed the lowest EDR10 (effective dose rate inducing 10% effect) in all species, with values ranging from 26 μGy h(-1) in the mouse Mus musculus to 38,000 μGy h(-1) in the fish Oryzias latipes. For several species, EDR10 values for population endpoints were lower than the lowest EDR10 for individual endpoints. Various population-level risks, differing in severity for the population, were investigated. Population extinction (reached when radiation effects cause population growth rate λ to decrease below 1, indicating no population growth in the long term) was predicted for dose rates ranging from 2700 μGy h(-1) in fish to 12,000 μGy h(-1) in soil invertebrates. A milder risk, that population growth rate λ will be reduced by 10% of the reduction causing extinction, was predicted for dose rates ranging from 24 μGy h(-1) in mammals to 1800 μGy h(-1) in soil invertebrates. These predictions suggested that proposed reference benchmarks from the literature for different taxonomic groups protected all simulated species against population extinction. A generic reference benchmark of 10 μGy h(-1) protected all simulated species against 10% of the effect causing population extinction. Finally, a risk of pseudo-extinction was predicted from 2.0 μGy h(-1) in mammals to 970 μGy h(-1) in soil invertebrates, representing a slight but statistically significant population decline, the importance of which remains to be evaluated in natural settings. Copyright © 2015 Elsevier Ltd. All rights reserved.
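
    The following minimal sketch illustrates the population-matrix reasoning used above: an age-classified (Leslie) projection matrix is built from survival and fecundity, the asymptotic growth rate λ is its dominant eigenvalue, and a dose-rate-dependent reduction in fecundity is applied. All vital rates and the effect model are invented for illustration; they are not the study's data or the FREDERICA values.

```python
import numpy as np

# Minimal sketch of a population matrix model, assuming a 3-class life cycle
# and a hypothetical proportional effect of dose rate on fecundity.

def leslie(fecundity, survival):
    """Build a Leslie matrix from per-class fecundities and survivals."""
    n = len(fecundity)
    L = np.zeros((n, n))
    L[0, :] = fecundity                                   # top row: reproduction
    L[np.arange(1, n), np.arange(n - 1)] = survival       # sub-diagonal: survival
    return L

def growth_rate(L):
    """Asymptotic population growth rate lambda = dominant eigenvalue modulus."""
    return max(abs(np.linalg.eigvals(L)))

fec = np.array([0.0, 2.0, 4.0])     # hypothetical control fecundities
surv = np.array([0.5, 0.7])         # hypothetical control survivals

for dose_rate in [0.0, 100.0, 1000.0]:                    # uGy/h, illustrative
    reduction = 1.0 / (1.0 + dose_rate / 500.0)           # assumed fecundity effect
    lam = growth_rate(leslie(fec * reduction, surv))
    print(f"dose rate {dose_rate:7.1f} uGy/h -> lambda = {lam:.3f}")
```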

  11. 75 FR 35289 - International Services Surveys: BE-180, Benchmark Survey of Financial Services Transactions...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-22

    ...-AA73 International Services Surveys: BE-180, Benchmark Survey of Financial Services Transactions Between U.S. Financial Services Providers and Foreign Persons AGENCY: Bureau of Economic Analysis... Survey of Financial Services Transactions between U.S. Financial Services Providers and Foreign Persons...

  12. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio with concrete examples in nuclear engineering with the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g. critical experiment, flow loops, etc.) and there is a lack of relevant multiphysics benchmark measurements that are necessary to validate high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  13. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    PubMed

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limits; that is, they show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data, compared to the historical average, and, in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
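
    A minimal sketch of the stability check in Steps 2-3 above is shown below: before a performance indicator is compared against any benchmark, the monthly values are tested against the 3-sigma limits of an individuals control chart. The monthly HAI rates are made-up values, not the hospital's data, and the chart type is an assumption for illustration.

```python
import numpy as np

# Minimal sketch of a control-chart stability check before benchmarking,
# using an individuals/moving-range chart. Data are hypothetical monthly rates.

rates = np.array([2.1, 1.8, 2.4, 2.0, 1.9, 2.3, 2.2, 1.7, 2.5, 2.0, 1.8, 2.1])

center = rates.mean()
avg_moving_range = np.abs(np.diff(rates)).mean()
sigma = avg_moving_range / 1.128           # d2 constant for subgroups of size 2
ucl, lcl = center + 3 * sigma, center - 3 * sigma

stable = np.all((rates >= lcl) & (rates <= ucl))
print(f"mean={center:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, stable={stable}")
print("ready to benchmark" if stable else "control variation first")
```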

  14. Global-local methodologies and their application to nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1989-01-01

    An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.

  15. Alternative industrial carbon emissions benchmark based on input-output analysis

    NASA Astrophysics Data System (ADS)

    Han, Mengyao; Ji, Xi

    2016-12-01

    Some problems exist in the current carbon emissions benchmark setting systems. The primary consideration in current industrial carbon emissions standards relates to direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method attempts to link direct carbon emissions with inter-industrial economic exchanges and systematically quantifies carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, which is among the first carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method aims to relate emissions directly to the responsible parties in a practical way through the measurement of complex production and supply chains and to reduce carbon emissions from their original sources. This method is expected to be further developed for uncertain internal and external contexts and to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
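
    The embodied-intensity idea above can be sketched with the standard input-output (Leontief) calculation: direct emission intensities are propagated through inter-industry linkages so that each sector's benchmark reflects both direct and indirect emissions. The 3-sector technical coefficients and direct intensities below are illustrative placeholders, not Beijing's actual data.

```python
import numpy as np

# Minimal sketch of embodied emission intensities via the Leontief inverse:
# embodied = direct_intensities @ (I - A)^-1, with hypothetical coefficients.

A = np.array([[0.10, 0.20, 0.05],      # technical coefficient matrix (assumed)
              [0.15, 0.10, 0.10],
              [0.05, 0.10, 0.20]])
direct = np.array([1.2, 0.4, 0.8])     # direct CO2 per unit output (assumed)

leontief_inverse = np.linalg.inv(np.eye(3) - A)
embodied = direct @ leontief_inverse   # total (direct + indirect) intensities

for sector, (d, e) in enumerate(zip(direct, embodied), start=1):
    print(f"sector {sector}: direct intensity {d:.2f} -> embodied intensity {e:.2f}")
```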

  16. Energy benchmarking of commercial buildings: a low-cost pathway toward urban sustainability

    NASA Astrophysics Data System (ADS)

    Cox, Matt; Brown, Marilyn A.; Sun, Xiaojing

    2013-09-01

    US cities are beginning to experiment with a regulatory approach to address information failures in the real estate market by mandating the energy benchmarking of commercial buildings. Understanding how a commercial building uses energy has many benefits; for example, it helps building owners and tenants identify poor-performing buildings and subsystems and it enables high-performing buildings to achieve greater occupancy rates, rents, and property values. This paper estimates the possible impacts of a national energy benchmarking mandate through analysis chiefly utilizing the Georgia Tech version of the National Energy Modeling System (GT-NEMS). Correcting input discount rates results in a 4.0% reduction in projected energy consumption for seven major classes of equipment relative to the reference case forecast in 2020, rising to 8.7% in 2035. Thus, the official US energy forecasts appear to overestimate future energy consumption by underestimating investments in energy-efficient equipment. Further discount rate reductions spurred by benchmarking policies yield another 1.3-1.4% in energy savings in 2020, increasing to 2.2-2.4% in 2035. Benchmarking would increase the purchase of energy-efficient equipment, reducing energy bills, CO2 emissions, and conventional air pollution. Achieving comparable CO2 savings would require more than tripling existing US solar capacity. Our analysis suggests that nearly 90% of the energy saved by a national benchmarking policy would benefit metropolitan areas, and the policy’s benefits would outweigh its costs, both to the private sector and society broadly.

  17. Principal Angle Enrichment Analysis (PAEA): Dimensionally Reduced Multivariate Gene Set Enrichment Analysis Tool

    PubMed Central

    Clark, Neil R.; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D.; Jones, Matthew R.; Ma’ayan, Avi

    2016-01-01

    Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method has not previously been assessed, nor has it been implemented as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community. PMID:26848405
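
    The principal-angle idea that PAEA builds on can be sketched as below: the angles between the subspace spanned by a gene set's expression signatures and the subspace spanned by observed differential-expression directions are obtained from the SVD of the product of their orthonormal bases. This is the textbook principal-angle computation, not the authors' implementation, and the matrices are random placeholders for expression data.

```python
import numpy as np

# Minimal sketch of principal angles between two subspaces (the quantity PAEA
# is named after), computed via QR orthonormalisation and an SVD.

rng = np.random.default_rng(0)
genes = 500
gene_set_signatures = rng.normal(size=(genes, 5))   # columns span subspace 1
observed_directions = rng.normal(size=(genes, 3))   # columns span subspace 2

Q1, _ = np.linalg.qr(gene_set_signatures)
Q2, _ = np.linalg.qr(observed_directions)
singular_values = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
principal_angles = np.arccos(np.clip(singular_values, -1.0, 1.0))

print("principal angles (radians):", np.round(principal_angles, 3))
```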

  18. Principal Angle Enrichment Analysis (PAEA): Dimensionally Reduced Multivariate Gene Set Enrichment Analysis Tool.

    PubMed

    Clark, Neil R; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D; Jones, Matthew R; Ma'ayan, Avi

    2015-11-01

    Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method has not previously been assessed, nor has it been implemented as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community.

  19. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    PubMed

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a new method using biologically relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.
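
    The regularized ("shrinkage") t idea behind the best-performing methods above can be sketched as follows: a small constant s0 is added to each gene's standard error so that genes with accidentally under-estimated variance do not dominate the ranking when replicates are few. Data here are simulated, and s0 is a crude placeholder rather than the data-driven estimate used by Limma or the Shrinkage t test.

```python
import numpy as np

# Minimal sketch of a regularized t-statistic versus an ordinary t-statistic,
# assuming two groups of three replicates and 50 truly changed genes.

rng = np.random.default_rng(3)
genes, reps = 1000, 3
group_a = rng.normal(0.0, 1.0, (genes, reps))
group_b = rng.normal(0.0, 1.0, (genes, reps))
group_b[:50] += 1.5                              # 50 truly changed genes

diff = group_b.mean(axis=1) - group_a.mean(axis=1)
pooled_var = (group_a.var(axis=1, ddof=1) + group_b.var(axis=1, ddof=1)) / 2
se = np.sqrt(pooled_var * 2 / reps)

s0 = np.median(se)                               # simple shrinkage constant
t_ordinary = diff / se
t_shrunk = diff / (se + s0)                      # regularized ("shrinkage") t

print("top 5 genes by ordinary t :", np.argsort(-np.abs(t_ordinary))[:5])
print("top 5 genes by shrinkage t:", np.argsort(-np.abs(t_shrunk))[:5])
```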

  20. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebe, A.; Leveling, A.; Lu, T.

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed a good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  1. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    NASA Astrophysics Data System (ADS)

    Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.

    2018-01-01

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  2. TU-FG-201-06: Remote Dosimetric Auditing for Clinical Trials Using EPID Dosimetry: A Pilot Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miri, N; Legge, K; Greer, P

    2016-06-15

    Purpose: To perform a pilot study for remote dosimetric credentialing of intensity modulated radiation therapy (IMRT) based clinical trials. The study introduces a novel, time-efficient, and inexpensive dosimetry audit method for multi-center credentialing. The method employs an electronic portal imaging device (EPID) to reconstruct delivered dose inside a virtual flat/cylindrical water phantom. Methods: Five centers, including different accelerator types and treatment planning systems (TPS), were asked to download two CT data sets, of a head-and-neck (H&N) patient and a post-prostatectomy (P-P) patient, to produce benchmark plans. These were then transferred to virtual flat and cylindrical phantom data sets that were also provided. In-air EPID images of the plans were then acquired, and the data sent to the central site for analysis. At the central site, these were converted to DICOM format, and all images were used to reconstruct 2D and 3D dose distributions inside the flat and cylindrical phantoms, respectively, using in-house EPID-to-dose conversion software. 2D dose was calculated for individual fields and 3D dose for the combined fields. The results were compared to corresponding TPS doses. Three gamma criteria (3%/3 mm, 3%/2 mm, and 2%/2 mm) with a 10% dose threshold were used to compare the calculated and prescribed dose. Results: All centers had a high pass rate for the criteria of 3%/3 mm. For 2D dose, the average of the centers' mean pass rates was 99.6% (SD: 0.3%) and 99.8% (SD: 0.3%) for the H&N and P-P patients, respectively. For 3D dose, 3D gamma was used to compare the model dose with the TPS combined dose. The mean pass rate was 97.7% (SD: 2.8%) and 98.3% (SD: 1.6%). Conclusion: Successful performance of the method for the pilot centers establishes the method for dosimetric multi-center credentialing. The results are promising, showing a high level of gamma agreement, and the procedure is efficient, consistent, and inexpensive. Funding has been provided by the Department of Radiation Oncology, TROG Cancer Research and the University of Newcastle. Narges Miri is a recipient of a University of Newcastle postgraduate scholarship.
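
    A simplified 1D sketch of the gamma comparison underlying the pass-rate criteria above (e.g. 3%/3 mm with a 10% dose threshold) is given below. Real audits evaluate 2D or 3D dose grids; the dose profiles here are synthetic and the global normalisation choice is an assumption.

```python
import numpy as np

# Minimal 1D gamma-index sketch: for each evaluated point above the dose
# threshold, find the minimum combined dose-difference/distance-to-agreement
# metric against the reference profile and count points with gamma <= 1.

def gamma_pass_rate(ref, ev, spacing_mm, dd_frac=0.03, dta_mm=3.0, thresh=0.10):
    x = np.arange(len(ref)) * spacing_mm
    dmax = ref.max()                          # global normalisation (assumed)
    passes = []
    for i, d_ev in enumerate(ev):
        if ref[i] < thresh * dmax:            # skip low-dose region
            continue
        dose_term = (d_ev - ref) / (dd_frac * dmax)
        dist_term = (x[i] - x) / dta_mm
        gamma = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
        passes.append(gamma <= 1.0)
    return 100.0 * np.mean(passes)

x = np.linspace(-50, 50, 201)                 # mm, 0.5 mm spacing
reference = np.exp(-(x / 30.0) ** 4)          # synthetic "TPS" profile
evaluated = reference * 1.01 + 0.002          # synthetic "EPID-derived" profile
print(f"gamma pass rate: {gamma_pass_rate(reference, evaluated, 0.5):.1f}%")
```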

  3. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  4. An Analysis of Academic Research Libraries Assessment Data: A Look at Professional Models and Benchmarking Data

    ERIC Educational Resources Information Center

    Lewin, Heather S.; Passonneau, Sarah M.

    2012-01-01

    This research provides the first review of publicly available assessment information found on Association of Research Libraries (ARL) members' websites. After providing an overarching review of benchmarking assessment data, and of professionally recommended assessment models, this paper examines if libraries contextualized their assessment…

  5. Benchmarking 2011: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  6. Benchmarking Universities' Efficiency Indicators in the Presence of Internal Heterogeneity

    ERIC Educational Resources Information Center

    Agasisti, Tommaso; Bonomi, Francesca

    2014-01-01

    When benchmarking its performance, a university is usually considered as a single strategic unit. According to the evidence, however, lower levels within an organisation (such as faculties, departments and schools) play a significant role in institutional governance, affecting the overall performance. In this article, an empirical analysis was…

  7. 78 FR 27957 - Fisheries of the South Atlantic, Southeast Data, Assessment, and Review (SEDAR); Public Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-13

    ..., describes the fisheries, evaluates the status of the stock, estimates biological benchmarks, projects future.... Participants will evaluate and recommend datasets appropriate for assessment analysis, employ assessment models to evaluate stock status, estimate population benchmarks and management criteria, and project future...

  8. The Role of Institutional Research in Conducting Comparative Analysis of Peers

    ERIC Educational Resources Information Center

    Trainer, James F.

    2008-01-01

    In this age of accountability, transparency, and accreditation, colleges and universities increasingly conduct comparative analyses and engage in benchmarking activities. Meant to inform institutional planning and decision making, comparative analyses and benchmarking are employed to let stakeholders know how an institution stacks up against its…

  9. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data have shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in the preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  10. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2] evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 has been performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  11. A benchmark for vehicle detection on wide area motion imagery

    NASA Astrophysics Data System (ADS)

    Catrambone, Joseph; Amzovski, Ismail; Liang, Pengpeng; Blasch, Erik; Sheaff, Carolyn; Wang, Zhonghai; Chen, Genshe; Ling, Haibin

    2015-05-01

    Wide area motion imagery (WAMI) has been attracting an increased amount of research attention due to its large spatial and temporal coverage. One important application is moving target analysis, where vehicle detection is often one of the first steps before advanced activity analysis. While there exist many vehicle detection algorithms, a thorough evaluation of them on WAMI data still remains a challenge mainly due to the lack of an appropriate benchmark data set. In this paper, we address this research need by presenting a new benchmark for wide area motion imagery vehicle detection data. The WAMI benchmark is based on the recently available Wright-Patterson Air Force Base (WPAFB09) dataset and the Temple Resolved Uncertainty Target History (TRUTH) associated target annotation. Trajectory annotations were provided in the original release of the WPAFB09 dataset, but detailed vehicle annotations were not available with the dataset. In addition, static vehicles, e.g., in parking lots, are also not annotated in the original release. Addressing these issues, we re-annotated the whole dataset with detailed information for each vehicle, including not only a target's location, but also its pose and size. The annotated WAMI data set should be useful to the community as a common benchmark to compare WAMI detection, tracking, and identification methods.

  12. Clinical decision-making tools for exam selection, reporting and dose tracking.

    PubMed

    Brink, James A

    2014-10-01

    Although many efforts have been made to reduce the radiation dose associated with individual medical imaging examinations to "as low as reasonably achievable," efforts to ensure such examinations are performed only when medically indicated and appropriate are equally important, if not more so. Variations in the use of ionizing radiation for medical imaging are concerning, regardless of whether they occur on a local, regional or national basis. Such variations among practices can be reduced with the use of decision support tools at the time of order entry. These tools help reduce radiation exposure among practices through the appropriate use of medical imaging. Similarly, adoption of best practices among imaging facilities can be promoted through tracking the radiation exposure among imaging patients. Practices can benchmark their aggregate radiation exposures for medical imaging through the use of dose index registries. However, several variables must be considered when contemplating individual patient dose tracking. The specific dose measures and the variation among them introduced by variations in body habitus must be understood. Moreover, the uncertainties in risk estimation from dose metrics related to age, gender and life expectancy must also be taken into account.

  13. Quantitative assessment of the dose-response of alkylating agents in DNA repair proficient and deficient ames tester strains.

    PubMed

    Tang, Leilei; Guérard, Melanie; Zeller, Andreas

    2014-01-01

    Mutagenic and clastogenic effects of some DNA-damaging agents such as methyl methanesulfonate (MMS) and ethyl methanesulfonate (EMS) have been demonstrated to exhibit a nonlinear or even "thresholded" dose-response in vitro and in vivo. DNA repair seems to be mainly responsible for these thresholds. To this end, we assessed several mutagenic alkylators in the Ames test with four different strains of Salmonella typhimurium: the alkyl transferase-proficient strain TA1535 (Ogt+/Ada+), as well as the alkyl transferase-deficient strains YG7100 (Ogt+/Ada-), YG7104 (Ogt-/Ada+) and YG7108 (Ogt-/Ada-). The known genotoxins EMS, MMS, temozolomide (TMZ), ethylnitrosourea (ENU) and methylnitrosourea (MNU) were tested at as many as 22 concentration levels. Dose-response curves were statistically fitted by the PROAST benchmark dose model and the Lutz-Lutz "hockeystick" model. These dose-response curves suggest efficient DNA repair of lesions inflicted by all agents in strain TA1535. In the absence of Ogt, Ada predominantly repairs methylations but not ethylations. It is concluded that the capacity of alkyl transferases to successfully repair DNA lesions up to certain dose levels contributes to genotoxicity thresholds. Copyright © 2013 Wiley Periodicals, Inc.
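
    The "hockeystick" (broken-stick) idea referred to above can be sketched as follows: the mean revertant count is flat up to a threshold dose and rises linearly beyond it, and the threshold is estimated by least squares. This is a toy illustration on simulated counts, not the PROAST or Lutz-Lutz implementation, and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of fitting a broken-stick ("hockeystick") dose-response model
# to simulated revertant counts and reading off the estimated threshold dose.

def hockey_stick(dose, background, threshold, slope):
    return background + slope * np.clip(dose - threshold, 0.0, None)

rng = np.random.default_rng(1)
dose = np.linspace(0, 10, 22)                        # 22 concentration levels
true_mean = hockey_stick(dose, 30.0, 4.0, 12.0)      # assumed "true" parameters
counts = rng.poisson(true_mean)                      # simulated revertant counts

popt, _ = curve_fit(hockey_stick, dose, counts, p0=[25.0, 3.0, 5.0])
background, threshold, slope = popt
print(f"estimated background={background:.1f}, threshold dose={threshold:.2f}, "
      f"slope={slope:.2f}")
```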

  14. Low-dose CT image reconstruction using gain intervention-based dictionary learning

    NASA Astrophysics Data System (ADS)

    Pathak, Yadunath; Arya, K. V.; Tiwari, Shailendra

    2018-05-01

    Computed tomography (CT) is extensively utilized in clinical diagnosis. However, the X-ray dose absorbed by the body may induce somatic damage such as cancer. Owing to this radiation risk, research has focused on the radiation exposure delivered to patients through CT investigations, and low-dose CT has become a significant research area. Many researchers have proposed different low-dose CT reconstruction techniques, but these techniques suffer from various issues such as over-smoothing, artifacts, and noise. Therefore, in this paper, we propose a novel integrated low-dose CT reconstruction technique. The proposed technique utilizes global dictionary-based statistical iterative reconstruction (GDSIR) and adaptive dictionary-based statistical iterative reconstruction (ADSIR). If the dictionary (D) is predetermined, GDSIR can be used; if D is defined adaptively, ADSIR is the appropriate choice. A gain intervention-based filter is also used as a post-processing step for removing artifacts from the reconstructed low-dose CT images. Experiments were performed with the proposed and other low-dose CT reconstruction techniques on well-known benchmark CT images. These extensive experiments show that the proposed technique outperforms the available approaches.

  15. Simulation-based comprehensive benchmarking of RNA-seq aligners

    PubMed Central

    Baruzzo, Giacomo; Hayer, Katharina E; Kim, Eun Ji; Di Camillo, Barbara; FitzGerald, Garret A; Grant, Gregory R

    2018-01-01

    Alignment is the first step in most RNA-seq analysis pipelines, and the accuracy of downstream analyses depends heavily on it. Unlike most steps in the pipeline, alignment is particularly amenable to benchmarking with simulated data. We performed a comprehensive benchmarking of 14 common splice-aware aligners for base, read, and exon junction-level accuracy and compared default with optimized parameters. We found that performance varied by genome complexity, and accuracy and popularity were poorly correlated. The most widely cited tool underperforms for most metrics, particularly when using default settings. PMID:27941783

  16. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    PubMed

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had been previously analyzed in great detail, albeit using a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed much more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby the variables with F-ratio values below the threshold can be ignored as not class-distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology while consistently excluding all but one of the nineteen benchmarked false-positive metabolites previously identified. Copyright © 2016 Elsevier B.V. All rights reserved.
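
    The F-ratio screening idea above can be sketched as follows: for each variable (tile), the between-class variance is compared with the within-class variance, and a null distribution built by permuting the class labels supplies the threshold below which variables are treated as not class-distinguishing. The data are simulated, not the yeast benchmark set, and the two-class layout is an assumption for illustration.

```python
import numpy as np

# Minimal sketch of per-variable F-ratio screening with a permutation-based
# null-distribution threshold, for a two-class (repressed vs derepressed) design.

def f_ratio(x, labels):
    classes = np.unique(labels)
    grand = x.mean()
    between = sum(np.sum(labels == c) * (x[labels == c].mean() - grand) ** 2
                  for c in classes) / (len(classes) - 1)
    within = sum(np.sum((x[labels == c] - x[labels == c].mean()) ** 2)
                 for c in classes) / (len(x) - len(classes))
    return between / within

rng = np.random.default_rng(2)
labels = np.array([0] * 6 + [1] * 6)            # two classes, six replicates each
signal = rng.normal(0, 1, 12) + labels * 2.0    # class-distinguishing variable
noise = rng.normal(0, 1, 12)                    # non-distinguishing variable

null = [f_ratio(signal, rng.permutation(labels)) for _ in range(2000)]
threshold = np.quantile(null, 0.99)             # permutation-based F threshold
print(f"threshold={threshold:.2f}, F(signal)={f_ratio(signal, labels):.2f}, "
      f"F(noise)={f_ratio(noise, labels):.2f}")
```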

  17. Global-local methodologies and their application to nonlinear analysis. [for structural postbuckling study

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1986-01-01

    An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.

  18. Evaluating the Quantitative Capabilities of Metagenomic Analysis Software.

    PubMed

    Kerepesi, Csaba; Grolmusz, Vince

    2016-05-01

    DNA sequencing technologies are applied widely and frequently today to describe metagenomes, i.e., microbial communities in environmental or clinical samples, without the need for culturing them. These technologies usually return short (100-300 base pairs long) DNA reads, and these reads are processed by metagenomic analysis software that assigns phylogenetic composition information to the dataset. Here we evaluate three metagenomic analysis software packages (AmphoraNet, a webserver implementation of AMPHORA2; MG-RAST; and MEGAN5) for their capability of assigning quantitative phylogenetic information to the data, describing the frequency of appearance of microorganisms of the same taxa in the sample. The difficulties of the task arise from the fact that longer genomes produce more reads from the same organism than shorter genomes, and some software packages assign higher frequencies to species with longer genomes than to those with shorter ones. This phenomenon is called the "genome length bias." Dozens of complex artificial metagenome benchmarks can be found in the literature. Because of the complexity of those benchmarks, it is usually difficult to judge the resistance of metagenomic software to this "genome length bias." Therefore, we have made a simple benchmark for the evaluation of taxon counting in a metagenomic sample: we took the same number of copies of three full bacterial genomes of different lengths, broke them up randomly into short reads with an average length of 150 bp, and mixed the reads, creating our simple benchmark. Because of its simplicity, the benchmark is not supposed to serve as a mock metagenome, but if a software package fails on this simple task, it will surely fail on most real metagenomes. We applied the three software packages to the benchmark. The ideal quantitative solution would assign the same proportion to the three bacterial taxa. We found that AMPHORA2/AmphoraNet gave the most accurate results and the other two were under-performers: they assigned each short read quite reliably to its respective taxon, but produced the typical genome length bias. The benchmark dataset is available at http://pitgroup.org/static/3RandomGenome-100kavg150bps.fna.
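
    The genome length bias can be illustrated with a very small calculation: if equal copy numbers of three genomes of different lengths are fragmented into 150 bp reads, the read counts are proportional to genome length, not to organism abundance. The genome lengths below and the idealized shotgun model are assumptions for illustration, not the benchmark's actual genomes.

```python
# Minimal sketch of genome length bias: equal copy numbers of three genomes of
# different (hypothetical) lengths yield read proportions that track genome
# length rather than the equal organism proportions.

genome_lengths = {"short": 1_500_000, "medium": 3_000_000, "long": 6_000_000}
copies = 10
read_length = 150

read_counts = {name: (copies * length) // read_length   # idealized coverage model
               for name, length in genome_lengths.items()}

total_reads = sum(read_counts.values())
for name, n in read_counts.items():
    print(f"{name:6s}: {n:7d} reads ({100 * n / total_reads:.0f}% of reads, "
          f"{100 / len(genome_lengths):.0f}% of organisms)")
```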

  19. Benchmark datasets for phylogenomic pipeline validation, applications for foodborne pathogen surveillance

    PubMed Central

    Rand, Hugh; Shumway, Martin; Trees, Eija K.; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E.; Defibaugh-Chavez, Stephanie; Carleton, Heather A.; Klimke, William A.; Katz, Lee S.

    2017-01-01

    Background As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. Methods We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and “known” phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Results Our “outbreak” benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the “known tree” can be accurately called the “true tree”. The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. Discussion These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools—we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these on our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines. PMID:29372115

  20. Benchmark datasets for phylogenomic pipeline validation, applications for foodborne pathogen surveillance.

    PubMed

    Timme, Ruth E; Rand, Hugh; Shumway, Martin; Trees, Eija K; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E; Defibaugh-Chavez, Stephanie; Carleton, Heather A; Klimke, William A; Katz, Lee S

    2017-01-01

    As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and "known" phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Our "outbreak" benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the "known tree" can be accurately called the "true tree". The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools; we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these on our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines.

  1. High resolution propagation-based imaging system for in vivo dynamic computed tomography of lungs in small animals

    NASA Astrophysics Data System (ADS)

    Preissner, M.; Murrie, R. P.; Pinar, I.; Werdiger, F.; Carnibella, R. P.; Zosky, G. R.; Fouras, A.; Dubsky, S.

    2018-04-01

    We have developed an x-ray imaging system for in vivo four-dimensional computed tomography (4DCT) of small animals for pre-clinical lung investigations. Our customized laboratory facility is capable of high-resolution in vivo imaging at high frame rates. Characterization using phantoms demonstrates a spatial resolution of slightly below 50 μm at imaging rates of 30 Hz, and the ability to quantify material density differences of at least 3%. We benchmark our system against existing small animal pre-clinical CT scanners using a quality factor that combines spatial resolution, image noise, dose and scan time. In vivo 4DCT images obtained on our system demonstrate resolution of important features such as blood vessels and small airways, of which the smallest discernible were measured as 55–60 μm in cross section. Quantitative analysis of the images demonstrates regional differences in ventilation between injured and healthy lungs.

  2. Dosimetric evaluation of a Monte Carlo IMRT treatment planning system incorporating the MIMiC

    NASA Astrophysics Data System (ADS)

    Rassiah-Szegedi, P.; Fuss, M.; Sheikh-Bagheri, D.; Szegedi, M.; Stathakis, S.; Lancaster, J.; Papanikolaou, N.; Salter, B.

    2007-12-01

    The high dose per fraction delivered to lung lesions in stereotactic body radiation therapy (SBRT) demands high dose calculation and delivery accuracy. The inhomogeneous density in the thoracic region along with the small fields used typically in intensity-modulated radiation therapy (IMRT) treatments poses a challenge in the accuracy of dose calculation. In this study we dosimetrically evaluated a pre-release version of a Monte Carlo planning system (PEREGRINE 1.6b, NOMOS Corp., Cranberry Township, PA), which incorporates the modeling of serial tomotherapy IMRT treatments with the binary multileaf intensity modulating collimator (MIMiC). The aim of this study is to show the validation process of PEREGRINE 1.6b since it was used as a benchmark to investigate the accuracy of doses calculated by a finite size pencil beam (FSPB) algorithm for lung lesions treated on the SBRT dose regime via serial tomotherapy in our previous study. Doses calculated by PEREGRINE were compared against measurements in homogeneous and inhomogeneous materials carried out on a Varian 600C with a 6 MV photon beam. Phantom studies simulating various sized lesions were also carried out to explain some of the large dose discrepancies seen in the dose calculations with small lesions. Doses calculated by PEREGRINE agreed to within 2% in water and up to 3% for measurements in an inhomogeneous phantom containing lung, bone and unit density tissue.

  3. Exposure-response relationship and risk assessment for cognitive deficits in early welding-induced manganism.

    PubMed

    Park, Robert M; Bowler, Rosemarie M; Roels, Harry A

    2009-10-01

    The exposure-response relationship for manganese (Mn)-induced adverse nervous system effects is not well described. Symptoms and neuropsychological deficits associated with early manganism were previously reported for welders constructing bridge piers during 2003 to 2004. A reanalysis using improved exposure, work history information, and diverse exposure metrics is presented here. Ten neuropsychological performance measures were examined, including working memory index (WMI), verbal intelligence quotient, design fluency, Stroop color word test, Rey-Osterrieth Complex Figure, and Auditory Consonant Trigram tests. Mn blood levels and air sampling data in the form of both personal and area samples were available. The exposure metrics used were cumulative exposure to Mn, body burden assuming simple first-order kinetics for Mn elimination, and cumulative burden (effective dose). Benchmark doses were calculated. Burden with a half-life of about 150 days was the best predictor of blood Mn. WMI performance declined by 3.6 (normal = 100, SD = 15) for each 1.0 mg/m3 x mo exposure (P = 0.02, one tailed). At the group mean exposure metric (burden; half-life = 275 days), WMI performance was at the lowest 17th percentile of normal, and at the maximum observed metric, performance was at the lowest 2.5 percentiles. Four other outcomes also exhibited statistically significant associations (verbal intelligence quotient, verbal comprehension index, design fluency, Stroop color word test); no dose-rate effect was observed for three of the five outcomes. A risk assessment performed for the five stronger effects, choosing various percentiles of normal performance to represent impairment, identified benchmark doses for a 2-year exposure leading to 5% excess impairment prevalence in the range of 0.03 to 0.15 mg/m3, or 30 to 150 microg/m3, total Mn in air, levels that are far below those permitted by current occupational standards. More than one-third of workers would be impaired after working 2 years at 0.2 mg/m3 Mn (the current threshold limit value).
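
    The exposure metrics described above (cumulative exposure and body burden under first-order elimination kinetics) can be illustrated with a short sketch. The half-life, time step, and monthly exposure series below are hypothetical placeholders; only the first-order burden recursion reflects the type of metric described in the abstract.

```python
# Sketch of a body-burden exposure metric under simple first-order elimination,
# updated month by month. Half-life, time step, and exposures are illustrative.
import math

def body_burden(monthly_exposures_mg_m3, half_life_days=150.0):
    """Each month the burden decays by exp(-lambda * 30 d), then that month's
    exposure (mg/m^3, treated as a monthly increment) is added."""
    decay = math.exp(-math.log(2) / half_life_days * 30.0)
    burden = 0.0
    history = []
    for exposure in monthly_exposures_mg_m3:
        burden = burden * decay + exposure
        history.append(burden)
    return history

# Hypothetical 24-month series at 0.2 mg/m^3 (the TLV mentioned in the abstract).
exposures = [0.2] * 24
burdens = body_burden(exposures)
cumulative = sum(exposures)  # simple cumulative exposure metric, mg/m^3 x months
print(f"final burden metric: {burdens[-1]:.2f}; cumulative exposure: {cumulative:.1f} mg/m^3-months")
```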

  4. SU-F-T-231: Improving the Efficiency of a Radiotherapy Peer-Review System for Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, S; Basavatia, A; Garg, M

    Purpose: To improve the efficiency of a radiotherapy peer-review system using a commercially available software application for plan quality evaluation and documentation. Methods: A commercial application, FullAccess (Radialogica LLC, Version 1.4.4), was implemented in a Citrix platform for the peer-review process and patient documentation. This application can display images, isodose lines, and dose-volume histograms and create plan reports for the peer-review process. Dose metrics in the report can also be benchmarked for plan quality evaluation. Site-specific templates were generated based on departmental treatment planning policies and procedures for each disease site, which generally follow RTOG protocols as well as published prospective clinical trial data, including both conventional fractionation and hypo-fractionation schema. Once a plan is ready for review, the planner exports the plan to FullAccess, applies the site-specific template, and presents the report for plan review. The plan is still reviewed in the treatment planning system, as that is the legal record. Upon the physician's approval of a plan, the plan is packaged for peer review with the plan report, and dose metrics are saved to the database. Results: The reports show dose metrics of PTVs and critical organs for the plans and indicate whether or not the metrics are within tolerance. Graphical results with green, yellow, and red lights are displayed to show whether planning objectives have been met. In addition, benchmarking statistics are collected to see where the current plan falls compared with all historical plans on each metric. All physicians in peer review can easily verify constraints from these reports. Conclusion: We have demonstrated the improvement of a radiotherapy peer-review system, which allows physicians to easily verify planning constraints for different disease sites and fractionation schema, allows for standardization in the clinic to ensure that departmental policies are maintained, and builds a comprehensive database for potential clinical outcome evaluation.
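
    The report-style check described above, comparing each plan metric against a site-specific tolerance and flagging it green, yellow, or red, can be sketched as follows. The metric names, tolerance values, and the yellow-band rule are invented for illustration; they are not the department's actual FullAccess templates.

```python
# Sketch of a traffic-light check of plan dose metrics against site-specific
# tolerances, in the spirit of the report described above. Metric names,
# tolerances, and the 5% yellow band are invented for illustration.
def traffic_light(value, limit, lower_is_better=True, yellow_band=0.05):
    ratio = value / limit
    if not lower_is_better:
        ratio = limit / max(value, 1e-9)
    if ratio <= 1.0:
        return "green"
    return "yellow" if ratio <= 1.0 + yellow_band else "red"

plan_metrics = {  # hypothetical plan values: (value, limit, lower_is_better)
    "PTV coverage V100% (%)": (96.5, 95.0, False),
    "Spinal cord Dmax (Gy)":  (17.2, 18.0, True),
    "Lung V20 (%)":           (11.4, 10.0, True),
}
for name, (value, limit, lower_better) in plan_metrics.items():
    print(f"{name}: {value} vs limit {limit} -> {traffic_light(value, limit, lower_better)}")
```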

  5. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Paganetti, H

    2015-06-15

    Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving the BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-the-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on a structured grid that is maximally parallelizable, with discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes that are taken into account are Compton scattering, the photoelectric effect and pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization synergizes the finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaked scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in the energy variable of the delta scattering kernel for elastic scattering, with reduced truncation error compared with the numerical integration of the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell Feder and Mahmoud Z. Yousef

    Neutronics analyses to find nuclear heating rates and personnel dose rates were conducted in support of the integration of diagnostics into the ITER Upper Port Plugs. Simplified shielding models of the Visible-Infrared diagnostic and of the ECH heating system were incorporated into the ITER global CAD model. Results for these systems are representative of typical designs with maximum shielding and a small aperture (Vis-IR) and minimal shielding with a large aperture (ECH). The neutronics discrete-ordinates codes ATTILA® and SEVERIAN® (the ATTILA parallel processing version) were used. Material properties and the 500 MW D-T volume source were taken from the ITER “Brand Model” MCNP benchmark model. A biased quadrature set equivalent to Sn=32 and a scattering degree of Pn=3 were used along with a 46-neutron and 21-gamma FENDL energy subgrouping. Total nuclear heating (neutron plus gamma heating) in the upper port plugs ranged between 380 and 350 kW for the Vis-IR and ECH cases. The ECH or Large Aperture model exhibited lower total heating but much higher peak volumetric heating on the upper port plug structure. Personnel dose rates were calculated in a three-step process involving a neutron-only transport calculation, the generation of activation volume sources at pre-defined time steps, and finally gamma transport analyses for selected time steps. ANSI-ANS 6.1.1 1977 flux-to-dose conversion factors were used. Dose rates were evaluated for 1 full year of 500 MW DT operation, which comprises 3000 1800-second pulses. After one year the machine is shut down for maintenance and personnel are permitted to access the diagnostic interspace after 2 weeks if dose rates are below 100 μSv/hr. Dose rates in the Visible-IR diagnostic model after one day of shutdown were 130 μSv/hr but fell below the limit to 90 μSv/hr 2 weeks later. The Large Aperture or ECH style shielding model exhibited higher and more persistent dose rates. After 1 day the dose rate was 230 μSv/hr and was still at 120 μSv/hr 4 weeks later.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell E. Feder and Mahmoud Z. Youssef

    Neutronics analyses to find nuclear heating rates and personnel dose rates were conducted in support of the integration of diagnostics into the ITER Upper Port Plugs. Simplified shielding models of the Visible-Infrared diagnostic and of a large aperture diagnostic were incorporated into the ITER global CAD model. Results for these systems are representative of typical designs with maximum shielding and a small aperture (Vis-IR) and minimal shielding with a large aperture. The neutronics discrete-ordinates codes ATTILA® and SEVERIAN® (the ATTILA parallel processing version) were used. Material properties and the 500 MW D-T volume source were taken from the ITER “Brand Model” MCNP benchmark model. A biased quadrature set equivalent to Sn=32 and a scattering degree of Pn=3 were used along with a 46-neutron and 21-gamma FENDL energy subgrouping. Total nuclear heating (neutron plus gamma heating) in the upper port plugs ranged between 380 and 350 kW for the Vis-IR and Large Aperture cases. The Large Aperture model exhibited lower total heating but much higher peak volumetric heating on the upper port plug structure. Personnel dose rates were calculated in a three-step process involving a neutron-only transport calculation, the generation of activation volume sources at pre-defined time steps, and finally gamma transport analyses for selected time steps. ANSI-ANS 6.1.1 1977 flux-to-dose conversion factors were used. Dose rates were evaluated for 1 full year of 500 MW DT operation, which comprises 3000 1800-second pulses. After one year the machine is shut down for maintenance and personnel are permitted to access the diagnostic interspace after 2 weeks if dose rates are below 100 μSv/hr. Dose rates in the Visible-IR diagnostic model after one day of shutdown were 130 μSv/hr but fell below the limit to 90 μSv/hr 2 weeks later. The Large Aperture style shielding model exhibited higher and more persistent dose rates. After 1 day the dose rate was 230 μSv/hr and was still at 120 μSv/hr 4 weeks later.

  8. Cumulative organophosphate pesticide exposure and risk assessment among pregnant women living in an agricultural community: a case study from the CHAMACOS cohort.

    PubMed Central

    Castorina, Rosemary; Bradman, Asa; McKone, Thomas E; Barr, Dana B; Harnly, Martha E; Eskenazi, Brenda

    2003-01-01

    Approximately 230,000 kg of organophosphate (OP) pesticides are applied annually in California's Salinas Valley. These activities have raised concerns about exposures to area residents. We collected three spot urine samples from pregnant women (between 1999 and 2001) enrolled in CHAMACOS (Center for the Health Assessment of Mothers and Children of Salinas), a longitudinal birth cohort study, and analyzed them for six dialkyl phosphate metabolites. We used urine from 446 pregnant women to estimate OP pesticide doses with two deterministic steady-state modeling methods: method 1, which assumed the metabolites were attributable entirely to a single diethyl or dimethyl OP pesticide; and method 2, which adapted U.S. Environmental Protection Agency (U.S. EPA) draft guidelines for cumulative risk assessment to estimate dose from a mixture of OP pesticides that share a common mechanism of toxicity. We used pesticide use reporting data for the Salinas Valley to approximate the mixture to which the women were exposed. Based on average OP pesticide dose estimates that assumed exposure to a single OP pesticide (method 1), between 0% and 36.1% of study participants' doses failed to attain a margin of exposure (MOE) of 100 relative to the U.S. EPA oral benchmark dose (BMD10), depending on the assumption made about the parent compound. These BMD10 values are doses expected to produce a 10% reduction in brain cholinesterase activity compared with background response in rats. Given the participants' average cumulative OP pesticide dose estimates (method 2) and regardless of the index chemical selected, we found that 14.8% of the doses failed to attain an MOE of 100 relative to the BMD10 of the selected index. An uncertainty analysis of the pesticide mixture parameter, which is extrapolated from pesticide application data for the study area and not directly quantified for each individual, suggests that this point estimate could range from 1 to 34%. In future analyses, we will use pesticide-specific urinary metabolites, when available, to evaluate cumulative OP pesticide exposures. PMID:14527844
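
    The margin-of-exposure screen used in methods 1 and 2 reduces to a simple ratio: MOE = BMD10 / estimated dose, compared against a target of 100. The values below are placeholders, not CHAMACOS estimates or EPA BMD10 values for any specific pesticide.

```python
# Margin-of-exposure screen relative to an oral BMD10, as described above.
# The BMD10 and dose values are hypothetical placeholders.
def margin_of_exposure(bmd10_mg_kg_day: float, dose_mg_kg_day: float) -> float:
    """MOE relative to an oral BMD10; values >= 100 attain the target margin."""
    return bmd10_mg_kg_day / dose_mg_kg_day

moe = margin_of_exposure(bmd10_mg_kg_day=0.5, dose_mg_kg_day=0.004)
print(f"MOE = {moe:.0f}; attains the target of 100: {moe >= 100}")
```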

  9. Benchmarks for effective primary care-based nursing services for adults with depression: a Delphi study.

    PubMed

    McIlrath, Carole; Keeney, Sinead; McKenna, Hugh; McLaughlin, Derek

    2010-02-01

    This paper is a report of a study conducted to identify and gain consensus on appropriate benchmarks for effective primary care-based nursing services for adults with depression. Worldwide evidence suggests that between 5% and 16% of the population have a diagnosis of depression. Most of their care and treatment takes place in primary care. In recent years, primary care nurses, including community mental health nurses, have become more involved in the identification and management of patients with depression; however, there are no appropriate benchmarks to guide, develop and support their practice. In 2006, a three-round electronic Delphi survey was completed by a United Kingdom multi-professional expert panel (n = 67). Round 1 generated 1216 statements relating to structures (such as training and protocols), processes (such as access and screening) and outcomes (such as patient satisfaction and treatments). Content analysis was used to collapse statements into 140 benchmarks. Seventy-three benchmarks achieved consensus during subsequent rounds. Of these, 45 (61%) were related to structures, 18 (25%) to processes and 10 (14%) to outcomes. Multi-professional primary care staff have similar views about the appropriate benchmarks for care of adults with depression. These benchmarks could serve as a foundation for depression improvement initiatives in primary care and ongoing research into depression management by nurses.

  10. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was examined in the context of our methodology for CPU performance characterization, which is based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments, as well as smaller efforts supported by this grant, are summarized more specifically in this report.
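
    The central idea described above, estimating execution time for an arbitrary machine/program pair by combining a per-operation machine characterization with a per-operation program profile, amounts to a dot product over abstract operation types. The operation categories, timings, and counts below are invented for illustration and are not taken from the cited work.

```python
# Sketch of the abstract-machine idea: execution time is estimated by combining
# a machine characterization (seconds per abstract operation) with a program
# characterization (counts of each abstract operation). Values are illustrative.
machine_profile = {       # seconds per operation on a hypothetical machine
    "flop": 5e-9,
    "memory_ref": 2e-8,
    "branch": 1e-9,
}
program_profile = {       # operation counts for a hypothetical benchmark program
    "flop": 4_000_000,
    "memory_ref": 1_500_000,
    "branch": 800_000,
}

estimated_time = sum(machine_profile[op] * count for op, count in program_profile.items())
print(f"estimated execution time: {estimated_time * 1e3:.2f} ms")
```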

  11. Inverse treatment planning for spinal robotic radiosurgery: an international multi‐institutional benchmark trial

    PubMed Central

    Wang, Lei; Baus, Wolfgang; Grimm, Jimm; Lacornerie, Thomas; Nilsson, Joakim; Luchkovskyi, Sergii; Cano, Isabel Palazon; Shou, Zhenyu; Ayadi, Myriam; Treuer, Harald; Viard, Romain; Siebert, Frank‐Andre; Chan, Mark K.H.; Hildebrandt, Guido; Dunst, Jürgen; Imhoff, Detlef; Wurster, Stefan; Wolff, Robert; Romanelli, Pantaleo; Lartigau, Eric; Semrau, Robert; Soltys, Scott G.; Schweikard, Achim

    2016-01-01

    Stereotactic radiosurgery (SRS) is the accurate, conformal delivery of high‐dose radiation to well‐defined targets while minimizing normal structure doses via steep dose gradients. While inverse treatment planning (ITP) with computerized optimization algorithms is routine, many aspects of the planning process remain user‐dependent. We performed an international, multi‐institutional benchmark trial to study planning variability and to analyze preferable ITP practice for spinal robotic radiosurgery. Ten SRS treatment plans were generated for a complex‐shaped spinal metastasis with 21 Gy in 3 fractions and tight constraints for spinal cord (V14Gy < 2 cc, V18Gy < 0.1 cc) and target (coverage > 95%). The resulting plans were rated on a scale from 1 to 4 (excellent to poor) in five categories (constraint compliance, optimization goals, low‐dose regions, ITP complexity, and clinical acceptability) by a blinded review panel. Additionally, the plans were rated mathematically based on plan indices (critical structure and target doses, conformity, monitor units, normal tissue complication probability, and treatment time) and compared to the human rankings. The treatment plans and the reviewers' rankings varied substantially among the participating centers. The average mean overall rank was 2.4 (1.2‐4.0) and 8/10 plans were rated excellent in at least one category by at least one reviewer. The mathematical rankings agreed with the mean overall human rankings in 9/10 cases, pointing toward the possibility of plan quality comparison by mathematical ranking alone. The final rankings revealed that a plan with a well‐balanced trade‐off among all planning objectives was preferred for treatment by most participants, reviewers, and the mathematical ranking system. Furthermore, this plan was generated with simple planning techniques. Our multi‐institutional planning study found wide variability in ITP approaches for spinal robotic radiosurgery. The agreement among participants, reviewers, and the mathematical ranking on preferable treatment plans and ITP techniques indicates that agreement on treatment planning and plan quality can be reached for spinal robotic radiosurgery. PACS number(s): 87.55.de PMID:27167291

  12. Benchmarking Equity in Transfer Policies for Career and Technical Associate's Degrees

    ERIC Educational Resources Information Center

    Chase, Megan M.

    2011-01-01

    Using critical policy analysis, this study considers state policies that impede technical credit transfer from public 2-year colleges to 4-year institutions of higher education. The states of Ohio, Texas, Washington, and Wisconsin are considered, and seven policy benchmarks for facilitating the transfer of technical credits are proposed. (Contains…

  13. Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.

    ERIC Educational Resources Information Center

    Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.

    This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…

  14. Building Bridges Between Geoscience and Data Science through Benchmark Data Sets

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Ebert-Uphoff, I.; Demir, I.; Gel, Y.; Hill, M. C.; Karpatne, A.; Güereque, M.; Kumar, V.; Cabral, E.; Smyth, P.

    2017-12-01

    The changing nature of observational field data demands richer and more meaningful collaboration between data scientists and geoscientists. Thus, among other efforts, the Working Group on Case Studies of the NSF-funded RCN on Intelligent Systems Research To Support Geosciences (IS-GEO) is developing a framework to strengthen such collaborations through the creation of benchmark datasets. Benchmark datasets provide an interface between disciplines without requiring extensive background knowledge. The goals are to create (1) a means for two-way communication between geoscience and data science researchers; (2) new collaborations, which may lead to new approaches for data analysis in the geosciences; and (3) a public, permanent repository of complex data sets, representative of geoscience problems, useful to coordinate efforts in research and education. The group identified 10 key elements and characteristics for ideal benchmarks. High impact: A problem with high potential impact. Active research area: A group of geoscientists should be eager to continue working on the topic. Challenge: The problem should be challenging for data scientists. Data science generality and versatility: It should stimulate development of new general and versatile data science methods. Rich information content: Ideally the data set provides stimulus for analysis at many different levels. Hierarchical problem statement: A hierarchy of suggested analysis tasks, from relatively straightforward to open-ended tasks. Means for evaluating success: Data scientists and geoscientists need means to evaluate whether the algorithms are successful and achieve intended purpose. Quick start guide: Introduction for data scientists on how to easily read the data to enable rapid initial data exploration. Geoscience context: Summary for data scientists of the specific data collection process, instruments used, any pre-processing and the science questions to be answered. Citability: A suitable identifier to facilitate tracking the use of the benchmark later on, e.g. allowing search engines to find all research papers using it. A first sample benchmark developed in collaboration with the Jet Propulsion Laboratory (JPL) deals with the automatic analysis of imaging spectrometer data to detect significant methane sources in the atmosphere.

  15. Validation of Shielding Analysis Capability of SuperMC with SINBAD

    NASA Astrophysics Data System (ADS)

    Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing

    2017-09-01

    The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD was compiled by RSICC and NEA and includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results from the SuperMC simulation were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable for shielding calculations.
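
    Validation comparisons of this kind typically reduce to relative deviations between calculated (C) and experimental (E) values. The short sketch below shows the arithmetic on made-up value pairs; it does not use SINBAD data.

```python
# Relative calculated-vs-experimental (C/E) deviation, as used in shielding
# benchmark comparisons. The value pairs below are invented for illustration.
def relative_deviation(calculated: float, experimental: float) -> float:
    """Relative deviation of C from E, in percent."""
    return (calculated - experimental) / experimental * 100.0

pairs = [(1.005, 1.000), (0.992, 0.998), (2.51, 2.50)]  # hypothetical (C, E) pairs
for calc, expt in pairs:
    print(f"C = {calc}, E = {expt}: deviation = {relative_deviation(calc, expt):+.2f}%")
```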

  16. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  17. Delayed persistence of giant-nucleated cells induced by X-ray and proton irradiation in the progeny of replicating normal human fibroblast cells

    NASA Astrophysics Data System (ADS)

    Almahwasi, A. A.; Jeynes, J. C.; Merchant, M. J.; Bradley, D. A.; Regan, P. H.

    2017-08-01

    Ionising radiation can induce giant-nucleated cells (GCs) in the progeny of irradiated populations, as demonstrated in various cellular systems. Most in vitro studies have utilised quiescent cancerous or normal cell lines, but it is not clear whether radiation-induced GCs persist in the progeny of normal replicating cells. In the current work we show persistent induction of GCs in the progeny of normal human-diploid skin fibroblasts (AG1522). These cells were originally irradiated with a single equivalent clinical dose of 0.2, 1 or 2 Gy of either X-ray or proton irradiation and maintained in an active state for various post-irradiation incubation intervals before they were replated for GC analysis. The results demonstrate that the formation of GCs in the progeny of X-ray or proton irradiated cells increased in a dose-dependent manner when measured 7 days after irradiation, and this finding is in agreement with that reported for AG1522 cells using other radiation qualities. For the 1 Gy X-ray dose, the GC yield was found to increase continually with time up to 21 days post-irradiation. These results can act as benchmark data for such work and may have important implications for studies aimed at evaluating the efficacy of radiation therapy and in determining the risk of delayed effects, particularly when applying protons.

  18. Correlation of In Vivo Versus In Vitro Benchmark Doses (BMDs) Derived From Micronucleus Test Data: A Proof of Concept Study.

    PubMed

    Soeteman-Hernández, Lya G; Fellows, Mick D; Johnson, George E; Slob, Wout

    2015-12-01

    In this study, we explored the applicability of using in vitro micronucleus (MN) data from human lymphoblastoid TK6 cells to derive in vivo genotoxicity potency information. Nineteen chemicals covering a broad spectrum of genotoxic modes of action were tested in an in vitro MN test using TK6 cells using the same study protocol. Several of these chemicals were considered to need metabolic activation, and these were administered in the presence of S9. The Benchmark dose (BMD) approach was applied using the dose-response modeling program PROAST to estimate the genotoxic potency from the in vitro data. The resulting in vitro BMDs were compared with previously derived BMDs from in vivo MN and carcinogenicity studies. A proportional correlation was observed between the BMDs from the in vitro MN and the BMDs from the in vivo MN assays. Further, a clear correlation was found between the BMDs from in vitro MN and the associated BMDs for malignant tumors. Although these results are based on only 19 compounds, they show that genotoxicity potencies estimated from in vitro tests may result in useful information regarding in vivo genotoxic potency, as well as expected cancer potency. Extension of the number of compounds and further investigation of metabolic activation (S9) and of other toxicokinetic factors would be needed to validate our initial conclusions. However, this initial work suggests that this approach could be used for in vitro to in vivo extrapolations which would support the reduction of animals used in research (3Rs: replacement, reduction, and refinement). © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology.
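
    The BMD approach outlined above fits a dose-response model to the data and reads off the dose that produces a pre-specified benchmark response. As a generic sketch (this is not PROAST, and the data, model form, and 10% relative benchmark response are assumptions for illustration), the code below fits a Hill-type continuous model with SciPy and solves for the dose giving a 10% change from background.

```python
# Generic sketch of a benchmark dose (BMD) calculation for continuous data:
# fit a dose-response model, then solve for the dose giving a benchmark
# response (BMR) of a 10% change relative to background. Not PROAST; the
# model form and data are illustrative only.
import numpy as np
from scipy.optimize import curve_fit, brentq

def hill(dose, background, vmax, d50, n):
    return background + vmax * dose**n / (d50**n + dose**n)

doses = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0])   # hypothetical doses
response = np.array([1.0, 1.05, 1.2, 1.6, 2.1, 2.4])   # hypothetical mean responses

params, _ = curve_fit(hill, doses, response, p0=[1.0, 1.5, 10.0, 1.0], maxfev=10000)
background = params[0]
bmr_level = background * 1.10   # benchmark response: 10% increase over background

# Solve hill(dose) = bmr_level on a bracketing interval within the tested range.
bmd = brentq(lambda d: hill(d, *params) - bmr_level, 1e-6, doses.max())
print(f"estimated BMD (10% relative increase): {bmd:.2f}")
```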

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, S; Ho, M; Chen, C

    Purpose: The use of log files to perform patient-specific quality assurance for both protons and IMRT has been established. Here, we extend that approach to a proprietary log file format and compare our results to measurements in phantom. Our goal was to generate a system that would permit gross errors to be found within 3 fractions, before direct measurements are performed. This approach could eventually replace direct measurements. Methods: Spot scanning protons pass through multi-wire ionization chambers which provide information about the charge, location, and size of each delivered spot. We have generated a program that calculates the dose in phantom from these log files and compares the results with the plan. The program has 3 different spot shape models: single Gaussian, double Gaussian and the ASTROID model. The program was benchmarked across different treatment sites for 23 patients and 74 fields. Results: The doses calculated from the log files were compared to those generated by the treatment planning system (RayStation). While the dual Gaussian model often gave better agreement, overall, the ASTROID model gave the most consistent results. Using a 5%–3 mm gamma with a 90% passing criterion and excluding doses below 20% of prescription, all patient samples passed. However, the degree of agreement of the log file approach was slightly worse than that of the chamber array measurement approach. Operationally, this implies that if the beam passes the log file model, it should pass direct measurement. Conclusion: We have established and benchmarked a model for log file QA on an IBA Proteus Plus system. The choice of optimal spot model for a given class of patients may be affected by factors such as site, field size, and range shifter, and will be investigated further.
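
    The 5%/3 mm gamma comparison with a 90% pass-rate criterion mentioned above can be sketched as follows. This is a naive global-gamma implementation on a 1-D dose profile with made-up data, not the clinical software used in the study; real tools interpolate and operate on 2-D or 3-D dose grids.

```python
# Naive 1-D global gamma analysis (5%/3 mm), excluding points below 20% of the
# prescription dose, with a 90% pass-rate criterion. Illustrative only.
import numpy as np

def gamma_pass_rate(ref, eval_, positions_mm, prescription,
                    dose_crit=0.05, dta_mm=3.0, threshold=0.20):
    dose_tol = dose_crit * prescription
    pass_flags = []
    for x_ref, d_ref in zip(positions_mm, ref):
        if d_ref < threshold * prescription:
            continue                              # exclude the low-dose region
        dist2 = ((positions_mm - x_ref) / dta_mm) ** 2
        dose2 = ((eval_ - d_ref) / dose_tol) ** 2
        gamma = np.sqrt(np.min(dist2 + dose2))    # global gamma for this point
        pass_flags.append(gamma <= 1.0)
    return 100.0 * np.mean(pass_flags)

x = np.linspace(0.0, 100.0, 101)                      # positions in mm
reference = 2.0 * np.exp(-((x - 50.0) / 20.0) ** 2)   # hypothetical planned dose (Gy)
measured = reference * 1.02                           # hypothetical log-file dose, 2% hot
rate = gamma_pass_rate(reference, measured, x, prescription=2.0)
print(f"gamma pass rate: {rate:.1f}% (criterion: >= 90%)")
```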

  20. Use of benchmarking techniques to justify the evolution of antibiotic management programs in healthcare systems.

    PubMed

    Schentag, J J; Paladino, J A; Birmingham, M C; Zimmer, G; Carr, J R; Hanson, S C

    1995-01-01

    To apply basic benchmarking techniques to hospital antibiotic expenditures and clinical pharmacy personnel and their duties, to identify cost savings strategies for clinical pharmacy services. Prospective survey of 18 hospitals ranging in size from 201 to 942 beds. Each was asked to provide antibiotic expenditures, an overview of their clinical pharmacy services, and to describe the duties of clinical pharmacists involved in antibiotic management activities. Specific information was sought on the use of pharmacokinetic dosing services, antibiotic streamlining, and oral switch in each of the hospitals. Most smaller hospitals (< 300 beds) did not employ clinical pharmacists with the specific duties of antibiotic management or streamlining. At these institutions, antibiotic management services consisted of formulary enforcement and aminoglycoside and/or vancomycin dosing services. The larger hospitals we surveyed employed clinical pharmacists designated as antibiotic management specialists, but their usual activities were aminoglycoside and/or vancomycin dosing services and formulary enforcement. In virtually all hospitals, the yearly expenses for antibiotics exceeded those of Millard Fillmore Hospitals by $2,000-3,000 per occupied bed. In a 500-bed hospital, this difference in expenditures would exceed $1.5 million yearly. Millard Fillmore Health System has similar types of patients, but employs clinical pharmacists to perform streamlining and/or switch functions at days 2-4, when cultures come back from the laboratory. The antibiotic streamlining and oral switch duties of clinical pharmacy specialists are associated with the majority of cost savings in hospital antibiotic management programs. The savings are considerable to the extent that most hospitals with 200-300 beds could readily cost-justify a full-time clinical pharmacist to perform these activities on a daily basis. Expenses of the program would be offset entirely by the reduction in the actual pharmacy expenditures on antibiotics.

  1. GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods.

    PubMed

    Schaffter, Thomas; Marbach, Daniel; Floreano, Dario

    2011-08-15

    Over the last decade, numerous methods have been developed for inference of regulatory networks from gene expression data. However, accurate and systematic evaluation of these methods is hampered by the difficulty of constructing adequate benchmarks and the lack of tools for a differentiated analysis of network predictions on such benchmarks. Here, we describe a novel and comprehensive method for in silico benchmark generation and performance profiling of network inference methods, available to the community as open-source software called GeneNetWeaver (GNW). In addition to the generation of detailed dynamical models of gene regulatory networks to be used as benchmarks, GNW provides a network motif analysis that reveals systematic prediction errors, thereby indicating potential ways of improving inference methods. The accuracy of network inference methods is evaluated using standard metrics such as precision-recall and receiver operating characteristic curves. We show how GNW can be used to assess the performance and identify the strengths and weaknesses of six inference methods. Furthermore, we used GNW to provide the international Dialogue for Reverse Engineering Assessments and Methods (DREAM) competition with three network inference challenges (DREAM3, DREAM4 and DREAM5). GNW is available at http://gnw.sourceforge.net along with its Java source code, user manual and supporting data. Supplementary data are available at Bioinformatics online. Contact: dario.floreano@epfl.ch.
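
    Evaluation of the kind GNW performs reduces to comparing a ranked list of predicted regulatory edges against the gold-standard network using precision-recall and ROC curves. The sketch below computes area-under-curve summaries with scikit-learn on toy data; it is not part of GNW, and both the edge labels and scores are invented.

```python
# Toy evaluation of a predicted gene regulatory network against a gold standard,
# using the precision-recall and ROC metrics mentioned above. Data are invented
# and scikit-learn is assumed to be available.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# 1 = true regulatory edge, 0 = no edge (flattened adjacency without self-loops).
gold_standard = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
# Confidence scores assigned to the same candidate edges by an inference method.
predicted_scores = np.array([0.9, 0.2, 0.4, 0.7, 0.1, 0.6, 0.3, 0.2, 0.8, 0.5])

print(f"AUPR:  {average_precision_score(gold_standard, predicted_scores):.3f}")
print(f"AUROC: {roc_auc_score(gold_standard, predicted_scores):.3f}")
```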

  2. Simplified Numerical Analysis of ECT Probe - Eddy Current Benchmark Problem 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sikora, R.; Chady, T.; Gratkowski, S.

    2005-04-09

    In this paper, the third eddy current benchmark problem is considered. The objective of the benchmark is to determine the optimal operating frequency and size of a pancake coil designed for testing tubes made of Inconel. This is achieved by maximizing the change in the coil impedance due to a flaw. Approximation functions of the probe (coil) characteristic were developed and used in order to reduce the number of required calculations, which results in a significant speed-up of the optimization process. The optimal testing frequency and probe size were obtained as the final result of the calculation.

  3. Benchmark experiments at ASTRA facility on definition of space distribution of 235U fission reaction rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobrov, A. A.; Boyarinov, V. F.; Glushkov, A. E.

    2012-07-01

    Results of critical experiments performed at five ASTRA facility configurations modeling high-temperature helium-cooled graphite-moderated reactors are presented. Results of experiments on the determination of the spatial distribution of the 235U fission reaction rate, performed at four of these five configurations, are presented in more detail. Analysis of the available information showed that all criticality experiments at these five configurations are acceptable for use as critical benchmark experiments. All experiments on the determination of the spatial distribution of the 235U fission reaction rate are acceptable for use as physical benchmark experiments. (authors)

  4. Decreasing unnecessary utilization in acute bronchiolitis care: results from the value in inpatient pediatrics network.

    PubMed

    Ralston, Shawn; Garber, Matthew; Narang, Steve; Shen, Mark; Pate, Brian; Pope, John; Lossius, Michele; Croland, Trina; Bennett, Jeff; Jewell, Jennifer; Krugman, Scott; Robbins, Elizabeth; Nazif, Joanne; Liewehr, Sheila; Miller, Ansley; Marks, Michelle; Pappas, Rita; Pardue, Jeanann; Quinonez, Ricardo; Fine, Bryan R; Ryan, Michael

    2013-01-01

    Acute viral bronchiolitis is the most common diagnosis resulting in hospital admission in pediatrics. Utilization of non-evidence-based therapies and testing remains common despite a large volume of evidence to guide quality improvement efforts. Our objective was to reduce utilization of unnecessary therapies in the inpatient care of bronchiolitis across a diverse network of clinical sites. We formed a voluntary quality improvement collaborative of pediatric hospitalists for the purpose of benchmarking the use of bronchodilators, steroids, chest radiography, chest physiotherapy, and viral testing in bronchiolitis using hospital administrative data. We shared resources within the network, including protocols, scores, order sets, and key bibliographies, and established group norms for decreasing utilization. Aggregate data on 11,568 hospitalizations for bronchiolitis from 17 centers were analyzed for this report. The network was organized in 2008. By 2010, we saw a 46% reduction in the overall volume of bronchodilators used, an absolute decrease of 3.4 doses per patient (95% confidence interval [CI] 1.4-5.8). Overall exposure to any dose of bronchodilator decreased by 12 percentage points as well (95% CI 5%-25%). There was also a statistically significant decline in chest physiotherapy usage, but not for steroids, chest radiography, or viral testing. Benchmarking within a voluntary pediatric hospitalist collaborative facilitated decreased utilization of bronchodilators and chest physiotherapy in bronchiolitis. Copyright © 2012 Society of Hospital Medicine.

  5. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, T.; Laville, C.; Dyrda, J.

    2012-07-01

    The sensitivities of the k-eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)

  6. Benchmarking Non-Hardware Balance-of-System (Soft) Costs for U.S. Photovoltaic Systems, Using a Bottom-Up Approach and Installer Survey - Second Edition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, B.; Ardani, K.; Feldman, D.

    2013-10-01

    This report presents results from the second U.S. Department of Energy (DOE) sponsored, bottom-up data collection and analysis of non-hardware balance-of-system costs -- often referred to as 'business process' or 'soft' costs -- for U.S. residential and commercial photovoltaic (PV) systems. In service to DOE's SunShot Initiative, annual expenditure and labor-hour-productivity data are analyzed to benchmark 2012 soft costs related to (1) customer acquisition and system design and (2) permitting, inspection, and interconnection (PII). We also include an in-depth analysis of costs related to financing, overhead, and profit. Soft costs are both a major challenge and a major opportunity for reducing PV system prices and stimulating SunShot-level PV deployment in the United States. The data and analysis in this series of benchmarking reports are a step toward the more detailed understanding of PV soft costs required to track and accelerate these price reductions.

  7. Analysis of Students' Assessments in Middle School Curriculum Materials: Aiming Precisely at Benchmarks and Standards.

    ERIC Educational Resources Information Center

    Stern, Luli; Ahlgren, Andrew

    2002-01-01

    Project 2061 of the American Association for the Advancement of Science (AAAS) developed and field-tested a procedure for analyzing curriculum materials, including assessments, in terms of contribution to the attainment of benchmarks and standards. Using this procedure, Project 2061 produced a database of reports on nine science middle school…

  8. Benchmark Analysis of Career and Technical Education in Lenawee County. Final Report.

    ERIC Educational Resources Information Center

    Hollenbeck, Kevin

    The career and technical education (CTE) provided in grades K-12 in the county's vocational-technical center and 12 local public school districts of Lenawee County, Michigan, was benchmarked with respect to its attention to career development. Data were collected from the following sources: structured interviews with a number of key respondents…

  9. Pesticides and public health: an analysis of the regulatory approach to assessing the carcinogenicity of glyphosate in the European Union.

    PubMed

    Clausing, Peter; Robinson, Claire; Burtscher-Schaden, Helmut

    2018-03-13

    The present paper scrutinises the European authorities' assessment of the carcinogenic hazard posed by glyphosate based on Regulation (EC) 1272/2008. We use the authorities' own criteria as a benchmark to analyse their weight of evidence (WoE) approach. Therefore, our analysis goes beyond the comparison of the assessments made by the European Food Safety Authority and the International Agency for Research on Cancer published by others. We show that not classifying glyphosate as a carcinogen by the European authorities, including the European Chemicals Agency, appears to be not consistent with, and in some instances, a direct violation of the applicable guidance and guideline documents. In particular, we criticise an arbitrary attenuation by the authorities of the power of statistical analyses; their disregard of existing dose-response relationships; their unjustified claim that the doses used in the mouse carcinogenicity studies were too high and their contention that the carcinogenic effects were not reproducible by focusing on quantitative and neglecting qualitative reproducibility. Further aspects incorrectly used were historical control data, multisite responses and progression of lesions to malignancy. Contrary to the authorities' evaluations, proper application of statistical methods and WoE criteria inevitably leads to the conclusion that glyphosate is 'probably carcinogenic' (corresponding to category 1B in the European Union). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  10. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    PubMed

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

    Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, its interpretation and potential actions. The availability of on-line measurement data at many WWTPs should theoretically enable a decrease in management response time through daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performance. In EOS, the energy KPIs take the pollutant load into consideration in order to enable comparison between different plants. For example, EOS does not analyse the absolute energy consumption but the energy consumption per unit of pollutant load. This approach enables the comparison of performance for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval-based benchmark approach, the authors propose an effective, fast and reproducible way to manage infrequent inlet measurements. Its use enables benchmarking on a daily basis and prepares the ground for further investigation. Copyright © 2016 Elsevier Inc. All rights reserved.
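
    The daily KPI described above is energy consumption normalized by pollutant load, where the load must be estimated between infrequent laboratory measurements. A minimal sketch is shown below, assuming linear interpolation between lab samples and a crude fixed-width uncertainty band; the numbers and the interval rule are illustrative and are not the EOS methodology.

```python
# Sketch of a daily energy KPI (kWh per kg of pollutant load) when the load is
# measured only every ~14 days. Intermediate days are linearly interpolated and
# bracketed by a crude +/-20% uncertainty band. Numbers are illustrative only.
import numpy as np

lab_days = np.array([0, 14, 28])                 # days with laboratory load measurements
lab_load_kg = np.array([900.0, 1100.0, 1000.0])  # hypothetical daily pollutant load (kg)

days = np.arange(0, 29)
energy_kwh = 400.0 + 30.0 * np.random.default_rng(0).standard_normal(len(days)) + 0.3 * days

load_est = np.interp(days, lab_days, lab_load_kg)      # point estimate of daily load
load_low, load_high = 0.8 * load_est, 1.2 * load_est   # crude uncertainty band

kpi = energy_kwh / load_est
kpi_low, kpi_high = energy_kwh / load_high, energy_kwh / load_low
for d in (7, 21):
    print(f"day {d}: KPI = {kpi[d]:.2f} kWh/kg  [{kpi_low[d]:.2f}, {kpi_high[d]:.2f}]")
```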

  11. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    PubMed

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

    To develop benchmark scores of competency for use within a competency based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on-training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic skill and two advanced skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging scores for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency based curriculum. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
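
    The competency rule described above, setting the benchmark at 75% of the mean expert score for each simulator metric, is simple to express in code. The metric names and scores below are invented for illustration and assume that higher scores are better for every metric.

```python
# Benchmark scores set at 75% of the mean expert score per simulator metric,
# as described above. Metric names and values are invented; higher is assumed
# to be better for all metrics.
expert_scores = {
    "economy_of_motion": [82.0, 90.0, 88.0],   # hypothetical expert results
    "suturing_accuracy": [75.0, 80.0, 85.0],
}
benchmarks = {m: 0.75 * sum(v) / len(v) for m, v in expert_scores.items()}

trainee = {"economy_of_motion": 70.0, "suturing_accuracy": 55.0}  # hypothetical trainee
for metric, score in trainee.items():
    status = "meets benchmark" if score >= benchmarks[metric] else "below benchmark"
    print(f"{metric}: trainee {score:.1f} vs benchmark {benchmarks[metric]:.1f} -> {status}")
```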

  12. Effectiveness of Social Marketing Interventions to Promote Physical Activity Among Adults: A Systematic Review.

    PubMed

    Xia, Yuan; Deshpande, Sameer; Bonates, Tiberius

    2016-11-01

    Social marketing managers promote desired behaviors to an audience by making them tangible in the form of environmental opportunities to enhance benefits and reduce barriers. This study proposed "benchmarks," modified from those found in the past literature, that would match important concepts of the social marketing framework and the inclusion of which would ensure behavior change effectiveness. In addition, we analyzed behavior change interventions on a "social marketing continuum" to assess whether the number of benchmarks and the role of specific benchmarks influence the effectiveness of physical activity promotion efforts. A systematic review of social marketing interventions available in academic studies published between 1997 and 2013 revealed 173 conditions in 92 interventions. Findings based on χ2, Mallows' Cp, and Logical Analysis of Data tests revealed that the presence of more benchmarks in interventions increased the likelihood of success in promoting physical activity. The presence of more than 3 benchmarks improved the success of the interventions; specifically, all interventions were successful when more than 7.5 benchmarks were present. Further, primary formative research, core product, actual product, augmented product, promotion, and behavioral competition all had a significant influence on the effectiveness of interventions. Social marketing is an effective approach in promoting physical activity among adults when a substantial number of benchmarks are used and when managers understand the audience, make the desired behavior tangible, and promote the desired behavior persuasively.

  13. Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.

    PubMed

    Al-Qahtani, Ali S

    2017-05-01

    The aim of this study was to benchmark our guidelines for prevention of venous thromboembolism (VTE) in the ENT surgical population against the ENT.UK guidelines, and also to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting of this study is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmark our practice guidelines for the prevention of VTE in the ENT surgical population against the ENT.UK guidelines in order to identify and mitigate any gaps. The ENT.UK guidelines (2010) were downloaded from the ENT.UK website. Our guidelines were compared against them to determine whether our performance meets or falls short of the ENT.UK guidelines, with immediate corrective action taken if a quality gap exists between the two. The ENT.UK guidelines are evidence-based and up to date, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required for providing a quality service to ENT surgical patients. While not always given appropriate attention, benchmarking is a useful tool for improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended that benchmarking be included in the list of quality improvement methods for healthcare services.

  14. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

    The paired ionization chamber (IC) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends strongly on the accuracy of the accompanying high-energy photon dose. During the dose derivation, it is important to evaluate the photon and electron response functions of the two commercially available ionization chambers, denoted TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among themselves and against carefully measured values, for a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Energy-dependent response functions of the two chambers were also calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons, using both simple spherical and detailed IC models. The measurements were performed in well-defined fields: (a) the four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) a primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINAC beams in hospital, and (e) a BNCT clinical trial neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed a lower response than the other codes in the photon energy region below 0.1 MeV and a similar response above 0.2 MeV (agreement within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams, but for the Mg(Ar) chamber the deviations reached 7.8-16.5% for X-ray beams below 120 kVp. In this study we were especially interested in BNCT doses, where the low-energy photon contribution is less significant; the MCNP model is recognized as the most suitable for simulating the widely distributed photon, electron and neutron energy responses of the paired ICs. MCNP also provides the best prediction for BNCT source adjustment based on the detector's neutron and photon responses.

  15. Use of computer code for dose distribution studies in A 60CO industrial irradiator

    NASA Astrophysics Data System (ADS)

    Piña-Villalpando, G.; Sloan, D. P.

    1995-09-01

    This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes, with an apparent density of 0.13 g/cm3; this product was chosen because of its uniform size, large quantity and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique; build-up factor fitting is done by geometric progression, and combinatorial geometry is used for the system description. The main modifications to the code were related to the source simulation: point sources were used instead of pencils, and an energy spectrum and an anisotropic emission spectrum were included. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was about 3% lower than the experimental average value (14.3 kGy).
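
    The point-kernel approach used by QAD-CGGP amounts to summing, over discretized source points, the term S · B(μr) · exp(-μr) / (4πr²) at the dose point, where B is a build-up factor. A simplified single-medium sketch is given below; the attenuation coefficient, the linear build-up form, and the geometry are invented and do not represent the QAD-CGGP model of the ININ irradiator.

```python
# Simplified point-kernel dose-rate sketch: sum over point sources of
# S * B(mu*r) * exp(-mu*r) / (4*pi*r^2). Real codes use fitted
# geometric-progression build-up factors; all constants here are invented.
import math

MU = 0.0632          # hypothetical linear attenuation coefficient, 1/cm
KERMA_FACTOR = 1.0   # hypothetical flux-to-dose conversion, arbitrary units

def buildup(mu_r: float) -> float:
    """Crude linear build-up factor stand-in for a fitted GP form."""
    return 1.0 + 0.8 * mu_r

def dose_rate(source_points, dose_point):
    total = 0.0
    for (x, y, z, strength) in source_points:
        r = math.dist((x, y, z), dose_point)
        mu_r = MU * r
        total += strength * buildup(mu_r) * math.exp(-mu_r) / (4.0 * math.pi * r**2)
    return KERMA_FACTOR * total

# A source rack approximated by a few point sources (x, y, z in cm, relative strength).
sources = [(0.0, 0.0, z, 1.0) for z in range(-20, 21, 10)]
print(f"relative dose rate at (30, 0, 0): {dose_rate(sources, (30.0, 0.0, 0.0)):.3e}")
```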

  16. Gadolinia depletion analysis by CASMO-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Y.; Saji, E.; Toba, A.

    1993-01-01

    CASMO-4 is the most recent version of the lattice physics code CASMO introduced by Studsvik. The principal aspects of the CASMO-4 model that differ from the models in previous CASMO versions are as follows: (1) heterogeneous model for two-dimensional transport theory calculations; and (2) microregion depletion model for burnable absorbers, such as gadolinia. Of these aspects, the first has previously been benchmarked against measured data of critical experiments and Monte Carlo calculations, verifying the high degree of accuracy. To proceed with CASMO-4 benchmarking, it is desirable to benchmark the microregion depletion model, which enables CASMO-4 to calculate gadolinium depletion directly without the need for precalculated MICBURN cross-section data. This paper presents the benchmarking results for the microregion depletion model in CASMO-4 using the measured data of depleted gadolinium rods.

  17. Access to a simulator is not enough: the benefits of virtual reality training based on peer-group-derived benchmarks--a randomized controlled trial.

    PubMed

    von Websky, Martin W; Raptis, Dimitri A; Vitz, Martina; Rosenthal, Rachel; Clavien, P A; Hahnloser, Dieter

    2013-11-01

    Virtual reality (VR) simulators are widely used to familiarize surgical novices with laparoscopy, but VR training methods differ in efficacy. In the present trial, self-controlled basic VR training (SC-training) was tested against training based on peer-group-derived benchmarks (PGD-training). First, novice laparoscopic residents were randomized into an SC group (n = 34) and a group using PGD benchmarks (n = 34) for basic laparoscopic training. After completing basic training, both groups performed 60 VR laparoscopic cholecystectomies for performance analysis. Primary endpoints were simulator metrics; secondary endpoints were program adherence, trainee motivation, and training efficacy. Altogether, 66 residents completed basic training, and 3,837 of 3,960 (96.8%) cholecystectomies were available for analysis. Course adherence was good, with only two dropouts, both in the SC group. The PGD group spent more time and repetitions in basic training until the benchmarks were reached and subsequently showed better performance in the readout cholecystectomies: median time (gallbladder extraction) showed significant differences of 520 s (IQR 354-738 s) in the SC group versus 390 s (IQR 278-536 s) in the PGD group (p < 0.001) and 215 s (IQR 175-276 s) in experts, respectively. Path length of the right instrument also showed significant differences, again with the PGD group being more efficient. Basic VR laparoscopic training based on PGD benchmarks with external assessment is superior to SC training, resulting in higher trainee motivation and better performance in simulated laparoscopic cholecystectomies. We recommend such a basic course based on PGD benchmarks before advancing to more elaborate VR training.

  18. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    PubMed

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or with how many implants, an implant should be statistically compared with a benchmark to assess whether it is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. The study is a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that when benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
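
    To make the sample-size message concrete, the sketch below simulates a one-sample non-inferiority assessment against an external benchmark using a 1-KM failure estimate with a Greenwood standard error. The failure rate, margin, censoring pattern and decision rule are illustrative assumptions, not the registry parameters or exact methods used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2024)

def km_failure(times, events, horizon):
    """1 - Kaplan-Meier survival at `horizon`, with its Greenwood standard error."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    surv, var_sum, n_at_risk = 1.0, 0.0, len(t)
    for ti, di in zip(t, d):
        if ti > horizon:
            break
        if di:                                # failure observed at ti
            surv *= 1.0 - 1.0 / n_at_risk
            var_sum += 1.0 / (n_at_risk * (n_at_risk - 1.0))
        n_at_risk -= 1
    se = surv * np.sqrt(var_sum)
    return 1.0 - surv, se

def ni_power(n, true_failure, benchmark=0.05, margin=0.02,
             horizon=10.0, max_followup=12.0, n_sim=200):
    """Power of a one-sample non-inferiority test: declare non-inferiority when
    the one-sided 97.5% upper bound of 1-KM failure lies below benchmark+margin."""
    hazard = -np.log(1.0 - true_failure) / horizon       # exponential failure model
    successes = 0
    for _ in range(n_sim):
        fail = rng.exponential(1.0 / hazard, size=n)
        cens = rng.uniform(0.0, max_followup, size=n)     # staggered follow-up (assumed)
        times = np.minimum(fail, cens)
        events = fail <= cens
        f_hat, se = km_failure(times, events, horizon)
        if f_hat + 1.96 * se < benchmark + margin:
            successes += 1
    return successes / n_sim

for n in (800, 1600, 3200):
    print(f"n = {n:5d}  power ~ {ni_power(n, true_failure=0.05):.2f}")
```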

  19. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres.

    PubMed

    van Lent, Wineke A M; de Beer, Relinde D; van Harten, Wim H

    2010-08-31

    Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple case study a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. We adapted and evaluated existing benchmarking processes through formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were identified, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. The improved benchmarking process and the success factors can produce relevant input to improve the operations management of specialty hospitals.

  20. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    PubMed Central

    2010-01-01

    Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple case study a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. Results We adapted and evaluated existing benchmarking processes through formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were identified, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. Conclusions The improved benchmarking process and the success factors can produce relevant input to improve the operations management of specialty hospitals. PMID:20807408

  1. Oral toxicity of 3-nitro-1,2,4-triazol-5-one in rats.

    PubMed

    Crouse, Lee C B; Lent, Emily May; Leach, Glenn J

    2015-01-01

    3-Nitro-1,2,4-triazol-5-one (NTO), an insensitive explosive, was evaluated to assess potential environmental and human health effects. A 14-day oral toxicity study in Sprague-Dawley rats was conducted with NTO in polyethylene glycol-200 by gavage at doses of 0, 250, 500, 1000, 1500, or 2000 mg/kg-d. Body mass and food consumption decreased in males (2000 mg/kg-d), and testes mass was reduced at doses of 500 mg/kg-d and greater. Based on the findings of the 14-day study, a 90-day study was conducted at doses of 0, 30, 100, 315, or 1000 mg/kg-d NTO. There was no effect on food consumption, body mass, or neurobehavioral parameters. Males in the 315 and 1000 mg/kg-d groups had reduced testes mass with associated tubular degeneration and atrophy. The testicular effects were the most sensitive adverse effect and were used to derive a benchmark dose (BMD) of 70 mg/kg-d for a 10% effect level, with a corresponding lower confidence limit (BMDL10) of 40 mg/kg-d. © The Author(s) 2015.
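
    As an illustration of how a benchmark dose and its lower bound are obtained from continuous dose-response data, the sketch below fits an exponential decline model to simulated, entirely hypothetical relative testes-mass data and bootstraps a BMDL10. It is a toy analogue of what BMDS/PROAST-style software does, not the analysis performed in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# HYPOTHETICAL individual-animal data (not the study's data): relative testes
# mass declining exponentially with dose, 10 animals per dose group.
doses = np.repeat([0.0, 30.0, 100.0, 315.0, 1000.0], 10)
true_a, true_b = 1.0, 1.5e-3
response = true_a * np.exp(-true_b * doses) + rng.normal(0.0, 0.05, doses.size)

def expo(d, a, b):
    """Exponential decline model m(d) = a*exp(-b*d)."""
    return a * np.exp(-b * d)

def fit_bmd(d, y, bmr=0.10):
    """BMD for a `bmr` (e.g. 10%) relative decrease from the fitted control mean."""
    (a, b), _ = curve_fit(expo, d, y, p0=(1.0, 1e-4))
    return -np.log(1.0 - bmr) / b

bmd = fit_bmd(doses, response)

# Nonparametric bootstrap for a one-sided lower 95% confidence limit (BMDL).
boot = []
for _ in range(1000):
    idx = rng.integers(0, doses.size, doses.size)
    try:
        boot.append(fit_bmd(doses[idx], response[idx]))
    except RuntimeError:          # skip bootstrap samples where the fit fails
        continue
bmdl = np.percentile(boot, 5)

print(f"BMD10  ~ {bmd:.0f} mg/kg-d")
print(f"BMDL10 ~ {bmdl:.0f} mg/kg-d")
```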

  2. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    A dose calculation tool that combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, reported previously, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans submitted to the clinical trial community from any institution, providing a more meaningful outcome analysis. The analytical source model consists of a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLDs). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the DPM Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a single methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.
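
    Gamma analysis of the kind quoted here (a combined dose-difference and distance-to-agreement metric) can be illustrated in one dimension. The depth-dose curves below are synthetic and the 2%/2 mm criteria are simply illustrative, so the sketch shows the metric itself rather than any part of the validation above.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=2.0):
    """Global 1D gamma index (Low-style sketch): for every reference point, take
    the minimum combined dose/distance metric over the evaluated distribution.
    dd is the dose criterion as a fraction of the reference maximum; dta is in mm."""
    dd_abs = dd * d_ref.max()
    gammas = np.empty(x_ref.size)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta) ** 2
        dose2 = ((d_eval - dr) / dd_abs) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# HYPOTHETICAL depth-dose curves on a 1 mm grid: a "measured" reference and a
# "calculated" curve with a 1% output difference and a 0.5 mm shift.
depth = np.arange(0.0, 200.0, 1.0)                                   # mm
measured = 100.0 * np.exp(-0.005 * depth) * (1.0 - np.exp(-0.3 * depth))
calculated = 1.01 * np.interp(depth - 0.5, depth, measured)

g = gamma_1d(depth, measured, depth, calculated, dd=0.02, dta=2.0)   # 2%/2 mm
print(f"gamma pass rate (2%/2 mm): {100.0 * np.mean(g <= 1.0):.1f}%")
```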

  3. Benchmarks of Global Clean Energy Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandor, Debra; Chung, Donald; Keyser, David

    The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?

  4. Determination of MLC model parameters for Monaco using commercial diode arrays.

    PubMed

    Kinsella, Paul; Shields, Laura; McCavana, Patrick; McClean, Brendan; Langan, Brian

    2016-07-08

    Multileaf collimators (MLCs) need to be characterized accurately in treatment planning systems to facilitate accurate intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). The aim of this study was to examine the use of MapCHECK 2 and ArcCHECK diode arrays for optimizing MLC parameters in the Monaco X-ray voxel Monte Carlo (XVMC) dose calculation algorithm. A series of radiation test beams designed to evaluate MLC model parameters were delivered to MapCHECK 2, ArcCHECK, and EBT3 Gafchromic film for comparison. Initial comparison of the calculated and ArcCHECK-measured dose distributions revealed it was unclear how to change the MLC parameters to gain agreement. This ambiguity arose due to an insufficient sampling of the test field dose distributions and unexpected discrepancies in the open parts of some test fields. Consequently, the XVMC MLC parameters were optimized based on MapCHECK 2 measurements. Gafchromic EBT3 film was used to verify the accuracy of MapCHECK 2 measured dose distributions. It was found that adjustment of the MLC parameters from their default values resulted in improved global gamma analysis pass rates for MapCHECK 2 measurements versus calculated dose. The lowest pass rate of any MLC-modulated test beam improved from 68.5% to 93.5% with 3% and 2 mm gamma criteria. Given the close agreement of the optimized model to both MapCHECK 2 and film, the optimized model was used as a benchmark to highlight the relatively large discrepancies in some of the test field dose distributions found with ArcCHECK. Comparison between the optimized model-calculated dose and ArcCHECK-measured dose resulted in global gamma pass rates which ranged from 70.0% to 97.9% for gamma criteria of 3% and 2 mm. The simple square fields yielded high pass rates. The lower gamma pass rates were attributed to the ArcCHECK overestimating the dose in-field for the rectangular test fields whose long axis was parallel to the long axis of the ArcCHECK. Considering ArcCHECK measurement issues and the lower gamma pass rates for the MLC-modulated test beams, it was concluded that MapCHECK 2 was a more suitable detector than ArcCHECK for the optimization process. © 2016 The Authors

  5. Benchmark On Sensitivity Calculation (Phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
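
    A sensitivity coefficient of the kind computed in this benchmark is the fractional change in k per fractional change in a cross section. The toy sketch below illustrates the definition by direct perturbation for a one-group infinite-medium model; it is not one of the benchmark's actual test cases, and the cross sections are invented.

```python
import numpy as np

# Toy k-eigenvalue sensitivity coefficient by direct perturbation:
# one-group, infinite homogeneous medium, so k_inf = nu*Sigma_f / (Sigma_c + Sigma_f).
# Macroscopic cross sections (1/cm) are HYPOTHETICAL.

def k_inf(sig_f, sig_c, nu=2.43):
    return nu * sig_f / (sig_c + sig_f)

sig_f, sig_c = 0.060, 0.040
k0 = k_inf(sig_f, sig_c)

# Sensitivity of k to the capture cross section, S = (dk/k)/(d sigma/sigma),
# estimated with a central finite difference.
eps = 1.0e-3
k_plus = k_inf(sig_f, sig_c * (1 + eps))
k_minus = k_inf(sig_f, sig_c * (1 - eps))
S_capture = (k_plus - k_minus) / (2 * eps * k0)

# Analytic check for this model: S_c = -sigma_c / (sigma_c + sigma_f).
print(f"k_inf              = {k0:.4f}")
print(f"S(capture) numeric = {S_capture:+.4f}")
print(f"S(capture) exact   = {-sig_c / (sig_c + sig_f):+.4f}")
```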

  6. The use of National Weather Service Data to Compute the Dose to the MEOI.

    PubMed

    Vickers, Linda

    2018-05-01

    The Turner method is the "benchmark method" for computing the atmospheric stability class that is used to compute the X/Q (s/m³). The Turner method should be used to ascertain the validity of X/Q results determined by other methods. This paper used site-specific meteorological data obtained from the National Weather Service. The Turner method described herein is simple, quick, accurate, and transparent because all of the data, calculations, and results are visible for verification and validation against published literature.
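
    The X/Q itself is typically obtained from a Gaussian-plume expression once the stability class has fixed the dispersion coefficients. A minimal sketch of the ground-level centerline form follows; the sigma_y, sigma_z, wind speed and release height are hypothetical numbers, and in practice the dispersion coefficients would be read from Pasquill-Gifford curves for the Turner-derived class.

```python
import numpy as np

# Gaussian-plume chi/Q at the plume centerline (sketch).
# sigma_y/sigma_z would normally come from Pasquill-Gifford curves for the
# Turner stability class at the receptor distance; the values below are
# HYPOTHETICAL, roughly class-D-like numbers for a ~1 km receptor.

def chi_over_q(sigma_y, sigma_z, u, release_height=0.0, z=0.0):
    """Centerline chi/Q (s/m^3) with ground reflection, receptor height z."""
    H = release_height
    return (np.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2)) +
            np.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2))) / (2.0 * np.pi * sigma_y * sigma_z * u)

print(f"chi/Q = {chi_over_q(sigma_y=70.0, sigma_z=30.0, u=3.0, release_height=10.0):.2e} s/m^3")
```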

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua Chiaho, E-mail: Chia-Ho.Hua@stjude.org; Merchant, Thomas E.; Gajjar, Amar

    Purpose: To characterize therapy-induced changes in normal-appearing brainstems of childhood brain tumor patients by serial diffusion tensor imaging (DTI). Methods and Materials: We analyzed 109 DTI studies from 20 brain tumor patients, aged 4 to 23 years, with normal-appearing brainstems included in the treatment fields. Those with medulloblastomas, supratentorial primitive neuroectodermal tumors, and atypical teratoid rhabdoid tumors (n = 10) received postoperative craniospinal irradiation (23.4-39.6 Gy) and a cumulative dose of 55.8 Gy to the primary site, followed by four cycles of high-dose chemotherapy. Patients with high-grade gliomas (n = 10) received erlotinib during and after irradiation (54-59.4 Gy). Parametric maps of fractional anisotropy (FA) and apparent diffusion coefficient (ADC) were computed and spatially registered to three-dimensional radiation dose data. Volumes of interest included corticospinal tracts, medial lemnisci, and the pons. Serving as an age-related benchmark for comparison, 37 DTI studies from 20 healthy volunteers, aged 6 to 25 years, were included in the analysis. Results: The median DTI follow-up time was 3.5 years (range, 1.6-5.0 years). The median mean dose to the pons was 56 Gy (range, 7-59 Gy). Three patterns were seen in longitudinal FA and ADC changes: (1) a stable or normally developing time trend, (2) initial deviation from normal with subsequent recovery, and (3) progressive deviation without evidence of complete recovery. The maximal decline in FA often occurred 1.5 to 3.5 years after the start of radiation therapy. A full recovery time trend could be observed within 4 years. Patients with incomplete recovery often had a larger decline in FA within the first year. Radiation dose alone did not predict long-term recovery patterns. Conclusions: Variations existed among individual patients after therapy in the longitudinal evolution of brainstem white matter injury and recovery. Early response in brainstem anisotropy may serve as an indicator of the recovery time trend over 5 years after radiation therapy.
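
    For reference, FA and ADC are simple functions of the diffusion-tensor eigenvalues. The sketch below uses hypothetical white-matter-like eigenvalues, not patient data from this study.

```python
import numpy as np

# Fractional anisotropy (FA) and mean diffusivity (ADC) from the three
# eigenvalues of a diffusion tensor. Eigenvalues below are HYPOTHETICAL
# values (in 10^-3 mm^2/s) typical of white matter.

def fa_and_adc(eigenvalues):
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                                       # ADC / mean diffusivity
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return fa, md

fa, adc = fa_and_adc([1.6, 0.4, 0.3])
print(f"FA = {fa:.2f}, ADC = {adc:.2f} x10^-3 mm^2/s")
```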

  8. Comparison of methods for individualized astronaut organ dosimetry: Morphometry-based phantom library versus body contour autoscaling of a reference phantom

    NASA Astrophysics Data System (ADS)

    Sands, Michelle M.; Borrego, David; Maynard, Matthew R.; Bahadori, Amir A.; Bolch, Wesley E.

    2017-11-01

    One of the hazards faced by space crew members in low-Earth orbit or in deep space is exposure to ionizing radiation. It has been shown previously that while differences in organ-specific and whole-body risk estimates due to body size variations are small for highly-penetrating galactic cosmic rays, large differences in these quantities can result from exposure to shorter-range trapped proton or solar particle event radiations. For this reason, it is desirable to use morphometrically accurate computational phantoms representing each astronaut for a risk analysis, especially in the case of a solar particle event. An algorithm was developed to automatically sculpt and scale the UF adult male and adult female hybrid reference phantom to the individual outer body contour of a given astronaut. This process begins with the creation of a laser-measured polygon mesh model of the astronaut's body contour. Using the auto-scaling program and selecting several anatomical landmarks, the UF adult male or female phantom is adjusted to match the laser-measured outer body contour of the astronaut. A dosimetry comparison study was conducted to compare the organ dose accuracy of both the autoscaled phantom and that based upon a height-weight matched phantom from the UF/NCI Computational Phantom Library. Monte Carlo methods were used to simulate the environment of the August 1972 and February 1956 solar particle events. Using a series of individual-specific voxel phantoms as a local benchmark standard, autoscaled phantom organ dose estimates were shown to provide a 1% and 10% improvement in organ dose accuracy for a population of females and males, respectively, as compared to organ doses derived from height-weight matched phantoms from the UF/NCI Computational Phantom Library. In addition, this slight improvement in organ dose accuracy from the autoscaled phantoms is accompanied by reduced computer storage requirements and a more rapid method for individualized phantom generation when compared to the UF/NCI Computational Phantom Library.

  9. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    deWit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  10. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  11. BACT Simulation User Guide (Version 7.0)

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1997-01-01

    This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.

  12. Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, Luiz C; Ivanov, E.

    2015-01-01

    The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved-resonance energy range of 0–2 MeV using transmission data and capture, elastic, inelastic, and double-differential elastic cross sections. SAMMY fits R-matrix resonance parameters using the generalized least-squares technique (Bayes' theory). The evaluation yielded a set of resonance parameters that reproduced the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the evaluation performance in benchmark calculations.
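
    The generalized least-squares (Bayes) update applied to resonance parameters has a compact matrix form. The toy below applies the same update equations to a hypothetical linear model with invented numbers, purely to illustrate the algebra; it is not SAMMY's implementation or data.

```python
import numpy as np

# Toy generalized-least-squares ("Bayes") parameter update: prior parameters P0
# with covariance M0, data D with covariance V, sensitivity matrix G = dT/dP.
# All numbers are HYPOTHETICAL.

rng = np.random.default_rng(3)

# Linear toy model T(P) = G @ P: two parameters, five data points.
G = rng.normal(size=(5, 2))
P_true = np.array([1.5, -0.7])
V = 0.02 ** 2 * np.eye(5)                          # data covariance
D = G @ P_true + rng.multivariate_normal(np.zeros(5), V)

P0 = np.array([1.0, 0.0])                          # prior parameter values
M0 = np.diag([0.5 ** 2, 0.5 ** 2])                 # prior parameter covariance

# Bayes/GLS update:
#   P1 = P0 + M0 G^T (G M0 G^T + V)^-1 (D - T(P0))
#   M1 = M0 - M0 G^T (G M0 G^T + V)^-1 G M0
K = M0 @ G.T @ np.linalg.inv(G @ M0 @ G.T + V)
P1 = P0 + K @ (D - G @ P0)
M1 = M0 - K @ G @ M0

print("updated parameters :", P1)
print("updated uncertainty:", np.sqrt(np.diag(M1)))
```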

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4-based "localized Monte Carlo" (LMC) method that isolates MC dose calculations to only those volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations "downstream" of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared with a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.

  14. Operating Room Efficiency before and after Entrance in a Benchmarking Program for Surgical Process Data.

    PubMed

    Pedron, Sara; Winter, Vera; Oppel, Eva-Maria; Bialas, Enno

    2017-08-23

    Operating room (OR) efficiency continues to be a high priority for hospitals. In this context the concept of benchmarking has gained increasing importance as a means to improve OR performance. The aim of this study was to investigate whether and how participation in a benchmarking and reporting program for surgical process data was associated with a change in OR efficiency, measured through raw utilization, turnover times, and first-case tardiness. The main analysis is based on panel data from 202 surgical departments in German hospitals, derived from the largest database for surgical process data in Germany. Panel regression modelling was applied. The results revealed no clear, unequivocal trend associated with participation in the benchmarking and reporting program. The largest effect was observed for first-case tardiness. Contrary to expectations, turnover times showed a generally increasing trend during participation. For raw utilization no clear and statistically significant trend could be evidenced. Subgroup analyses revealed differences in effects across hospital types and department specialties. Participation in a benchmarking and reporting program, and thus the availability of reliable, timely and detailed analysis tools to support OR management, seemed to be correlated especially with an increase in the timeliness of staff members regarding first-case starts. The increasing trend in turnover time reveals the absence of effective strategies to improve this aspect of OR efficiency in German hospitals and could have meaningful consequences for medium- and long-run capacity planning in the OR.
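
    A two-way fixed-effects panel regression of the kind described can be sketched on simulated data as follows; the variable names, effect sizes and clustering choice are assumptions for illustration, not the study's model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Minimal two-way fixed-effects panel regression sketch on SIMULATED data:
# OR turnover time regressed on a benchmarking-participation indicator with
# department and quarter fixed effects and department-clustered errors.
# Variable names and effect sizes are HYPOTHETICAL.

rng = np.random.default_rng(7)
departments, quarters = 50, 12
rows = []
for d in range(departments):
    dept_effect = rng.normal(0.0, 5.0)
    start = rng.integers(3, 9)                # quarter in which participation starts
    for q in range(quarters):
        participating = int(q >= start)
        turnover = 40.0 + dept_effect + 0.3 * q + 1.5 * participating + rng.normal(0.0, 3.0)
        rows.append(dict(dept=d, quarter=q, participating=participating,
                         turnover_min=turnover))
panel = pd.DataFrame(rows)

model = smf.ols("turnover_min ~ participating + C(dept) + C(quarter)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["dept"]})
print(f"participation effect on turnover: {model.params['participating']:.2f} min "
      f"(SE {model.bse['participating']:.2f})")
```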

  15. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.

  16. Quality assessment in head and neck oncologic surgery in a Brazilian cancer center compared with MD Anderson Cancer Center benchmarks.

    PubMed

    Lira, Renan Bezerra; de Carvalho, André Ywata; de Carvalho, Genival Barbosa; Lewis, Carol M; Weber, Randal S; Kowalski, Luiz Paulo

    2016-07-01

    Quality assessment is a major tool for evaluation of health care delivery. In head and neck surgery, the University of Texas MD Anderson Cancer Center (MD Anderson) has defined quality standards by publishing benchmarks. We conducted an analysis of 360 head and neck surgeries performed at the AC Camargo Cancer Center (AC Camargo). The procedures were stratified into low-acuity procedures (LAPs) or high-acuity procedures (HAPs), and outcome indicators were compared with MD Anderson benchmarks. In the 360 cases, there were 332 LAPs (92.2%) and 28 HAPs (7.8%). Patients with any comorbid condition had a higher incidence of negative outcome indicators (p = .005). In the LAPs, we achieved the MD Anderson benchmarks in all outcome indicators. In HAPs, the rate of surgical site infection and length of hospital stay were higher than what is established by the benchmarks. Quality assessment of head and neck surgery is possible and should be disseminated, improving effectiveness in health care delivery. © 2015 Wiley Periodicals, Inc. Head Neck 38: 1002-1007, 2016.

  17. An automated protocol for performance benchmarking a widefield fluorescence microscope.

    PubMed

    Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T

    2014-11-01

    Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day-to-day intensity calibration. Published 2014 Wiley Periodicals Inc. This article is a US government work and, as such, is in the public domain in the United States of America.

  18. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.

  19. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE PAGES

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.; ...

    2017-06-13

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.

  20. Update on the Code Intercomparison and Benchmark for Muon Fluence and Absorbed Dose Induced by an 18 GeV Electron Beam After Massive Iron Shielding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasso, A.; Ferrari, A.; Ferrari, A.

    In 1974, Nelson, Kase and Svensson published an experimental investigation on muon shielding around SLAC high-energy electron accelerators [1]. They measured muon fluence and absorbed dose induced by 14 and 18 GeV electron beams hitting a copper/water beam dump and attenuated in a thick steel shielding. In their paper, they compared the results with the theoretical models available at that time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results are then compared between the codes, and with the SLAC data.

  1. Integrated Sensing Processor, Phase 2

    DTIC Science & Technology

    2005-12-01

    performance analysis for several baseline classifiers including neural nets, linear classifiers, and kNN classifiers. Use of CCDR as a preprocessing step...below the level of the benchmark non-linear classifier for this problem (kNN). Furthermore, the CCDR preconditioned kNN achieved a 10% improvement over...the benchmark kNN without CCDR. Finally, we found an important connection between intrinsic dimension estimation via entropic graphs and the optimal

  2. Cost-effectiveness of pneumococcal conjugate vaccination in the prevention of child mortality: an international economic analysis.

    PubMed

    Sinha, Anushua; Levine, Orin; Knoll, Maria D; Muhib, Farzana; Lieu, Tracy A

    2007-02-03

    Routine vaccination of infants against Streptococcus pneumoniae (pneumococcus) needs substantial investment by governments and charitable organisations. Policymakers need information about the projected health benefits, costs, and cost-effectiveness of vaccination when considering these investments. Our aim was to incorporate these data into an economic analysis of pneumococcal vaccination of infants in countries eligible for financial support from the Global Alliance for Vaccines & Immunization (GAVI). We constructed a decision analysis model to compare pneumococcal vaccination of infants aged 6, 10, and 14 weeks with no vaccination in the 72 countries that were eligible as of 2005. We used published and unpublished data to estimate child mortality, effectiveness of pneumococcal conjugate vaccine, and immunisation rates. Pneumococcal vaccination at the rate of diphtheria-tetanus-pertussis vaccine coverage was projected to prevent 262,000 deaths per year (7%) in children aged 3-29 months in the 72 developing countries studied, thus averting 8.34 million disability-adjusted life years (DALYs) yearly. If every child could be reached, up to 407,000 deaths per year would be prevented. At a vaccine cost of 5 international dollars per dose, vaccination would have a net cost of 838 million dollars, a cost of 100 dollars per DALY averted. Vaccination at this price was projected to be highly cost-effective in 68 of 72 countries when each country's per-head gross domestic product per DALY averted was used as a benchmark. At a vaccine cost of between 1 and 5 dollars per dose, purchase and accelerated uptake of pneumococcal vaccine in the world's poorest countries is projected to substantially reduce childhood mortality and to be highly cost-effective.
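
    The headline cost-effectiveness ratio can be reproduced directly from the figures quoted above; only the arithmetic is added here, and the cost-per-death figure is a derived quantity rather than one reported in the abstract.

```python
# Back-of-the-envelope check of the quoted cost-effectiveness figures.
deaths_averted_per_year = 262_000
dalys_averted_per_year = 8.34e6
net_cost_usd = 838e6

cost_per_daly = net_cost_usd / dalys_averted_per_year
cost_per_death_averted = net_cost_usd / deaths_averted_per_year

print(f"cost per DALY averted : ${cost_per_daly:.0f}")         # ~ $100, as reported
print(f"cost per death averted: ${cost_per_death_averted:,.0f}")
```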

  3. Computation of Cosmic Ray Ionization and Dose at Mars: a Comparison of HZETRN and Planetocosmics for Proton and Alpha Particles

    NASA Technical Reports Server (NTRS)

    Gronoff, Guillaume; Norman, Ryan B.; Mertens, Christopher J.

    2014-01-01

    The ability to evaluate the cosmic ray environment at Mars is of interest for future manned exploration. To support exploration, tools must be developed to accurately assess the radiation environment in both free space and on planetary surfaces. The primary tool NASA uses to quantify radiation exposure behind shielding materials is the space radiation transport code HZETRN. In order to build confidence in HZETRN, code benchmarking against Monte Carlo radiation transport codes is often used. This work compares dose calculations at Mars by HZETRN and the Geant4 application Planetocosmics. The dose at ground level and the energy deposited in the atmosphere by galactic cosmic ray protons and alpha particles have been calculated for the Curiosity landing conditions. In addition, this work considers solar energetic particle events, allowing for the comparison of varying input radiation environments. The results for protons and alpha particles show very good agreement between HZETRN and Planetocosmics.

  4. Comparative evaluation of 1D and quasi-2D hydraulic models based on benchmark and real-world applications for uncertainty assessment in flood mapping

    NASA Astrophysics Data System (ADS)

    Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas

    2016-03-01

    One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are first tested on a benchmark case with a mixed rectangular-triangular channel cross section. Using a Monte Carlo approach, we employ extended sensitivity analysis by simultaneously varying the input discharge, longitudinal and lateral gradients and roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. water depths at the inflow and outflow locations and total flood volume, we investigate the uncertainty enclosed in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated with each input variable and compare it to the overall uncertainty. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.
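
    The Monte Carlo style of sensitivity analysis described here can be illustrated with a far simpler proxy: sample discharge, slope and roughness, and propagate them through Manning's equation for normal depth in a rectangular channel. The channel geometry and input distributions below are hypothetical, and the closed-form proxy stands in for the full hydraulic models (HEC-RAS, LISFLOOD-FP, FLO-2d) exercised in the benchmark.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import spearmanr

# Toy Monte Carlo uncertainty propagation: solve Manning's equation
# Q = (1/n) A R^(2/3) sqrt(S) for normal depth in a rectangular channel.
# Channel width and input distributions are HYPOTHETICAL.

rng = np.random.default_rng(11)
width = 20.0   # channel width, m

def normal_depth(Q, S, n, b=width):
    """Depth h satisfying Manning's equation for a rectangular channel."""
    f = lambda h: (b * h) * ((b * h) / (b + 2.0 * h)) ** (2.0 / 3.0) * np.sqrt(S) / n - Q
    return brentq(f, 1e-3, 50.0)

n_sim = 2000
Q = rng.normal(150.0, 15.0, n_sim)           # discharge, m^3/s
S = rng.uniform(0.0005, 0.002, n_sim)        # longitudinal slope
n = rng.uniform(0.025, 0.045, n_sim)         # Manning roughness

depths = np.array([normal_depth(q, s, nn) for q, s, nn in zip(Q, S, n)])
print(f"depth: mean {depths.mean():.2f} m, 5-95% range "
      f"{np.percentile(depths, 5):.2f}-{np.percentile(depths, 95):.2f} m")

# Crude per-input contribution: rank correlation of each input with the output.
for name, x in (("discharge Q", Q), ("slope S", S), ("roughness n", n)):
    print(f"{name:12s} rho = {spearmanr(x, depths).correlation:+.2f}")
```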

  5. Heart rate changes during electroconvulsive therapy

    PubMed Central

    2013-01-01

    Background This observational study documented heart rate over the entire course of electrically induced seizures and aimed to evaluate the effects of stimulus electrode placement, patients' age, stimulus dose, and additional predictors. Method In 119 consecutive patients with 64 right unilateral (RUL) and 55 bifrontal (BF) electroconvulsive treatments, heart rate graphs based on beat-to-beat measurements were plotted up to durations of 130 s. Results In RUL stimulation, the initial drop in heart rate lasted for 12.5 ± 2.6 s (mean ± standard deviation). This depended on stimulus train duration, age, and baseline heart rate. In seizures induced with BF electrode placement, a sympathetic response was observed within the first few seconds of the stimulation phase (median 3.5 s). This was also the case with subconvulsive stimulations. The mean peak heart rate in all 119 treatments amounted to 135 ± 20 bpm and depended on baseline heart rate and seizure duration; electrode placement, charge dose, and age were insignificant in regression analysis. A marked decline in heart rate in connection with seizure cessation occurred in 71% of treatments. Conclusions A significant independent effect of stimulus electrode positioning on cardiac action was evident only in the initial phase of the seizures. Electrical stimulation rather than the seizure causes the initial heart rate increase in BF treatments. The data reveal no rationale for setting the stimulus doses as a function of intraictal peak heart rates (‘benchmark method’). The marked decline in heart rate at the end of most seizures is probably mediated by a baroreceptor reflex. PMID:23764036

  6. Radiological assessment for bauxite mining and alumina refining.

    PubMed

    O'Connor, Brian H; Donoghue, A Michael; Manning, Timothy J H; Chesson, Barry J

    2013-01-01

    Two international benchmarks assess whether the mining and processing of ores containing Naturally Occurring Radioactive Material (NORM) require management under radiological regulations set by local jurisdictions. First, the 1 Bq/g benchmark for radionuclide head of chain activity concentration determines whether materials may be excluded from radiological regulation. Second, processes may be exempted from radiological regulation where occupational above-background exposures for members of the workforce do not exceed 1 mSv/year. This is also the upper-limit of exposure prescribed for members of the public. Alcoa of Australia Limited (Alcoa) has undertaken radiological evaluations of the mining and processing of bauxite from the Darling Range of Western Australia since the 1980s. Short-term monitoring projects have demonstrated that above-background exposures for workers do not exceed 1 mSv/year. A whole-of-year evaluation of above-background, occupational radiological doses for bauxite mining, alumina refining and residue operations was conducted during 2008/2009 as part of the Alcoa NORM Quality Assurance System (NQAS). The NQAS has been guided by publications from the International Commission on Radiological Protection (ICRP), the International Atomic Energy Agency (IAEA) and the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA). The NQAS has been developed specifically in response to implementation of the Australian National Directory on Radiation Protection (NDRP). Positional monitoring was undertaken to increase the accuracy of natural background levels required for correction of occupational exposures. This is important in view of the small increments in exposure that occur in bauxite mining, alumina refining and residue operations relative to natural background. Positional monitoring was also undertaken to assess the potential for exposure in operating locations. Personal monitoring was undertaken to characterise exposures in Similar Exposure Groups (SEGs). The monitoring was undertaken over 12 months, to provide annual average assessments of above-background doses, thereby reducing temporal variations, especially for radon exposures. The monitoring program concentrated on gamma and radon exposures, rather than gross alpha exposures, as past studies have shown that gross alpha exposures from inhalable dust for most of the workforce are small in comparison to combined gamma and radon exposures. The natural background determinations were consistent with data in the literature for localities near Alcoa's mining, refining and residue operations in Western Australia, and also with UNSCEAR global data. Within the mining operations, there was further consistency between the above-background dose estimates and the local geochemistry, with slight elevation of dose levels in mining pits. Conservative estimates of above-background levels for the workforce have been made using an assumption of 100% occupancy (1920 hours per year) for the SEGs considered. Total incremental composite doses for individuals were clearly less than 1.0 mSv/year when gamma, radon progeny and gross alpha exposures were considered. This is despite the activity concentration of some materials being slightly higher than the benchmark of 1 Bq/g. The results are consistent with previous monitoring and demonstrate compliance with the 1 mSv/year exemption level within mining, refining and residue operations. These results will be of value to bauxite mines and alumina refineries elsewhere in the world.

  7. SU-F-303-12: Implementation of MR-Only Simulation for Brain Cancer: A Virtual Clinical Trial

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glide-Hurst, C; Zheng, W; Kim, J

    2015-06-15

    Purpose: To perform a retrospective virtual clinical trial using an MR-only workflow for a variety of brain cancer cases by incorporating novel imaging sequences, tissue segmentation using phase images, and an innovative synthetic CT (synCT) solution. Methods: Ten patients (16 lesions) were evaluated using a 1.0T MR-SIM including UTE-DIXON imaging (TE = 0.144/3.4/6.9 ms). Bone-enhanced images were generated from DIXON-water/fat and inverted UTE. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating intersection and Dice similarity coefficients (DSC) using CT-SIM as ground truth. SynCTs were generated using voxel-based weighted summation incorporating T2, FLAIR, UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized HU differences between synCT and CT-SIM. Dose was recalculated on synCTs; differences were quantified using planar gamma analysis (2%/2 mm dose difference/distance to agreement) at isocenter. Digitally reconstructed radiographs (DRRs) were compared. Results: On average, air maps intersected 80.8 ±5.5% (range: 71.8–88.8%) between MR-SIM and CT-SIM, yielding DSCs of 0.78 ± 0.04 (range: 0.70–0.83). Whole-brain MAE between synCT and CT-SIM was 160.7±8.8 HU, with the largest uncertainty arising from bone (MAE = 423.3±33.2 HU). Gamma analysis revealed pass rates of 99.4 ± 0.04% between synCT and CT-SIM for the cohort. Dose volume histogram analysis revealed that synCT tended to yield slightly higher doses. Organs at risk such as the chiasm and optic nerves were most sensitive due to their proximity to air/bone interfaces. DRRs generated via synCT and CT-SIM were within clinical tolerances. Conclusion: Our approach to MR-only simulation for brain cancer treatment planning yielded clinically acceptable results relative to the CT-based benchmark. While slight dose differences were observed, reoptimization of treatment plans and improved image registration can address this limitation. Future work will incorporate automated registration between setup images (cone-beam CT and kilovoltage images) for synCT and CT-SIM. Submitting institution holds research agreements with Philips HealthCare, Best, Netherlands and Varian Medical Systems, Palo Alto, CA. Research partially sponsored via an Internal Mentored Research Grant.

  8. Quality assurance of the SCOPE 1 trial in oesophageal radiotherapy.

    PubMed

    Wills, Lucy; Maggs, Rhydian; Lewis, Geraint; Jones, Gareth; Nixon, Lisette; Staffurth, John; Crosby, Tom

    2017-11-15

    SCOPE 1 was the first UK-based multi-centre trial involving radiotherapy of the oesophagus. A comprehensive radiotherapy trials quality assurance programme was launched with two main aims: 1. To assist centres, where needed, to adapt their radiotherapy techniques in order to achieve protocol compliance and thereby enable their participation in the trial. 2. To support the trial's clinical outcomes by ensuring the consistent planning and delivery of radiotherapy across all participating centres. A detailed information package was provided and centres were required to complete a benchmark case in which the delineated target volumes and organs at risk, dose distribution and completion of a plan assessment form were assessed prior to recruiting patients into the trial. Upon recruiting, the quality assurance (QA) programme continued to monitor the outlining and planning of radiotherapy treatments. Completion of a questionnaire was requested in order to gather information about each centre's equipment and techniques relating to their trial participation and to assess the impact of the trial nationally on standard practice for radiotherapy of the oesophagus. During the trial, advice was available for individual planning issues, and was circulated amongst the SCOPE 1 community in response to common areas of concern using bulletins. 36 centres were supported through QA processes to enable their participation in SCOPE 1. We discuss the issues which have arisen throughout this process and present details of the benchmark case solutions, centre questionnaires and on-trial protocol compliance. The range of submitted benchmark case GTV volumes was 29.8-67.8 cm³, and PTV volumes 221.9-513.3 cm³. For the dose distributions associated with these volumes, the percentage volume of the lungs receiving 20 Gy (V20Gy) ranged from 20.4 to 33.5%. Similarly, heart V40Gy ranged from 16.1 to 33.0%. Incidence of incorrect outlining of OAR volumes increased from 50% of centres at benchmark case to 64% on trial. Sixty-five percent of centres who returned the trial questionnaire stated that their standard practice had changed as a result of their participation in the SCOPE 1 trial. The SCOPE 1 QA programme outcomes lend support to the trial's clinical conclusions. The range of patient planning outcomes for the benchmark case indicated, at the outset of the trial, the significant degree of variation present in UK oesophageal radiotherapy planning outcomes, despite the presence of a protocol. This supports the case for increasingly detailed definition of practice by means of consensus protocols, training and peer review. The incidence of minor inconsistencies of technique highlights the potential for improved QA systems and the need for sufficient resource for this to be addressed within future trials. As indicated in questionnaire responses, the QA exercise as a whole has contributed to greater consistency of oesophageal radiotherapy in the UK via the adoption into standard practice of elements of the protocol. The SCOPE 1 trial is an International Standard Randomized Controlled Trial, ISRCTN47718479.

  9. Growth hormone regimens in Australia: analysis of the first 3 years of treatment for idiopathic growth hormone deficiency and idiopathic short stature.

    PubMed

    Hughes, Ian P; Harris, Mark; Choong, Catherine S; Ambler, Geoff; Cutfield, Wayne; Hofman, Paul; Cowell, Chris T; Werther, George; Cotterill, Andrew; Davies, Peter S W

    2012-07-01

    To investigate response to growth hormone (GH) in the first, second and third years of treatment for all idiopathic GH-deficient (GHD) and idiopathic short stature (ISS) patients in Australia. Eligibility for subsidized GH treatment in Australia is determined on auxological criteria for the indication of Short Stature and Slow Growth (SSSG), which includes ISS (SSSG-ISS). The biochemical GHD (BGHD, peak GH < 10 mU/l) and SSSG indications are treated similarly: starting dose of 4·5 mg/m(2)/week with provision for incremental dosing. Some ISS patients were specifically diagnosed with familial short stature (SSSG-FSS). Responses for each year of treatment for BGHD, SSSG-ISS and SSSG-FSS cohorts were compared in relation to influencing variables and with international benchmarks. The effect of incremental dosing was assessed. Australian BGHD, SSSG-ISS and SSSG-FSS patients who had completed 1, 2, or 3 years of treatment and were currently receiving GH. Growth hormone dose, change in height-standard deviation score (ΔSDS) and growth velocity (GV). First-year response was 2-3 times greater than that in subsequent years: ΔSDS(1st year) = 0·92, 0·50 and 0·46 for BGHD, SSSG-ISS and SSSG-FSS, respectively. Responses were similar to international reports and inversely related to age at commencement of GH. First-year GV-for-age for BGHD patients was similar to international standards for idiopathic GHD. However, girls had an inferior response to boys when treatment commenced at <6 years of age. First-year GV-for-age for SSSG-ISS/FSS patients was less than ISS standards. Dose increments attenuated the first- to second-year decline in response to BGHD but marginally improved the responses for SSSG-ISS/FSS. The Australian auxology-based GH programme produces comparable responses to international programmes. A lower starting dose is offset by the initiation of treatment at younger ages. Incremental dosing does not appear optimal. A first-year dose of 6·4-6·9 mg/m(2)/week for GHD and 8·9 mg/m(2)/week for ISS with early commencement of GH treatment may be most efficacious. © 2012 Blackwell Publishing Ltd.

  10. Piping benchmark problems. Volume 1. Dynamic analysis uniform support motion response spectrum method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezler, P.; Hartzman, M.; Reich, M.

    1980-08-01

    A set of benchmark problems and solutions have been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.

  11. REPORT FOR COMMERCIAL GRADE NICKEL CHARACTERIZATION AND BENCHMARKING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-12-20

    Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, has completed the collection, sample analysis, and review of analytical results to benchmark the concentrations of gross alpha-emitting radionuclides, gross beta-emitting radionuclides, and technetium-99 in commercial grade nickel. This report presents methods, change management, observations, and statistical analysis of materials procured from sellers representing nine countries on four continents. The data suggest there is a low probability of detecting alpha- and beta-emitting radionuclides in commercial nickel. Technetium-99 was not detected in any samples, thus suggesting it is not present in commercial nickel.

  12. Particle shape analysis of volcanic clast samples with the Matlab tool MORPHEO

    NASA Astrophysics Data System (ADS)

    Charpentier, Isabelle; Sarocchi, Damiano; Rodriguez Sedano, Luis Angel

    2013-02-01

    This paper presents a modular Matlab tool, namely MORPHEO, devoted to the study of particle morphology by Fourier analysis. A benchmark made of four sample images with different features (digitized coins, a pebble chart, gears, digitized volcanic clasts) is then proposed to assess the abilities of the software. Attention is brought to the Weibull distribution introduced to enhance fine variations of particle morphology. Finally, as an example, samples pertaining to a lahar deposit located in La Lumbre ravine (Colima Volcano, Mexico) are analysed. MORPHEO and the benchmark are freely available for research purposes.
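
    MORPHEO itself is a Matlab tool whose internals are not described in this record; the sketch below (in Python, with invented names and a toy outline) only illustrates the generic idea of Fourier-based particle-shape analysis, i.e. expanding the radial signature of a digitized clast outline and inspecting the normalized harmonic amplitudes.

```python
import numpy as np

def fourier_shape_descriptors(x, y, n_harmonics=16):
    """Normalized Fourier amplitudes of a closed particle outline.

    x, y: boundary coordinates ordered along the outline. The distance from
    the centroid, resampled on a uniform angular grid, is expanded in a
    Fourier series; low harmonics capture elongation, high harmonics roughness.
    """
    cx, cy = x.mean(), y.mean()                 # outline centroid
    theta = np.arctan2(y - cy, x - cx)          # polar angle of each boundary point
    r = np.hypot(x - cx, y - cy)                # radial signature r(theta)
    grid = np.linspace(-np.pi, np.pi, 256, endpoint=False)
    r_uniform = np.interp(grid, theta, r, period=2 * np.pi)
    amps = np.abs(np.fft.rfft(r_uniform)) / r_uniform.size
    return amps[1:n_harmonics + 1] / amps[0]    # scale by the mean radius

# toy example: an elongated, slightly rough "clast"
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
radius = 1.0 + 0.2 * np.cos(2 * t) + 0.03 * np.cos(9 * t)
print(fourier_shape_descriptors(radius * np.cos(t), radius * np.sin(t)).round(3))
```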

  13. Length of stay benchmarking in the Australian private hospital sector.

    PubMed

    Hanning, Brian W T

    2007-02-01

    Length of stay (LOS) benchmarking is a means of comparing hospital efficiency. Analysis of private cases in private facilities using Australian Institute of Health and Welfare (AIHW) data shows interstate variation in same-day (SD) cases and overnight average LOS (ONALOS) on an Australian Refined Diagnosis Related Groups version 4 (ARDRGv4) standardised basis. ARDRGv4 standardised analysis from 1998-99 to 2003-04 shows a steady increase in private sector SD cases (approximately 1.4% per annum) and a decrease in ONALOS (approximately 4.3% per annum). Overall, the data show significant variation in LOS parameters between private hospitals.

  14. multiDE: a dimension reduced model based statistical method for differential expression analysis using RNA-sequencing data with multiple treatment conditions.

    PubMed

    Kang, Guangliang; Du, Li; Zhang, Hong

    2016-06-22

    The growing complexity of biological experiment design based on high-throughput RNA sequencing (RNA-seq) is calling for more accommodative statistical tools. We focus on differential expression (DE) analysis using RNA-seq data in the presence of multiple treatment conditions. We propose a novel method, multiDE, for facilitating DE analysis using RNA-seq read count data with multiple treatment conditions. The read count is assumed to follow a log-linear model incorporating two factors (i.e., condition and gene), where an interaction term is used to quantify the association between gene and condition. The number of degrees of freedom is reduced to one through the first order decomposition of the interaction, leading to a dramatic improvement in power for testing DE genes when the number of conditions is greater than two. In our simulation situations, multiDE outperformed the benchmark methods (i.e., edgeR and DESeq2) even if the underlying model was severely misspecified, and the power gain increased with the number of conditions. In the application to two real datasets, multiDE identified more biologically meaningful DE genes than the benchmark methods. An R package implementing multiDE is available publicly at http://homepage.fudan.edu.cn/zhangh/softwares/multiDE . When the number of conditions is two, multiDE performs comparably with the benchmark methods. When the number of conditions is greater than two, multiDE outperforms the benchmark methods.
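
    One plausible reading of the "first order decomposition of the interaction" (our interpretation; the exact parametrization used by multiDE may differ) is a rank-one factorization of the gene-by-condition interaction in the log-linear mean model,

        \log \mu_{gc} \;=\; \alpha_g + \beta_c + u_g\, v_c , \qquad H_0^{(g)}:\; u_g = 0 ,

    where mu_gc is the expected read count of gene g in condition c, alpha_g and beta_c are main effects, and the product u_g v_c replaces the full (G-1)(C-1)-parameter interaction, so that differential expression of a single gene is tested with one degree of freedom regardless of the number of conditions.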

  15. Benchmarking Discount Rate in Natural Resource Damage Assessment with Risk Aversion.

    PubMed

    Wu, Desheng; Chen, Shuzhen

    2017-08-01

    Benchmarking a credible discount rate is of crucial importance in natural resource damage assessment (NRDA) and restoration evaluation. This article integrates a holistic framework of NRDA with prevailing low discount rate theory, and proposes a discount rate benchmarking decision support system based on service-specific risk aversion. The proposed approach has the flexibility of choosing appropriate discount rates for gauging long-term services, as opposed to decisions based simply on duration. It improves injury identification in NRDA since potential damages and side-effects to ecosystem services are revealed within the service-specific framework. A real embankment case study demonstrates valid implementation of the method. © 2017 Society for Risk Analysis.

  16. SU-F-T-153: Experimental Validation and Calculation Benchmark for a Commercial Monte Carlo Pencil Beam Scanning Proton Therapy Treatment Planning System in Water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, L; Huang, S; Kang, M

    Purpose: Eclipse proton Monte Carlo AcurosPT 13.7 was commissioned and experimentally validated for an IBA dedicated PBS nozzle in water. Topas 1.3 was used to isolate the cause of differences in output and penumbra between simulation and experiment. Methods: The spot profiles were measured in air at five locations using Lynx. A PTW-34070 Bragg peak chamber (Freiburg, Germany) was used to collect the relative integral Bragg peak for 15 proton energies from 100 MeV to 225 MeV. The phase space parameters (σx, σθ, ρxθ), number of protons per MU, energy spread and calculated mean energy provided by AcurosPT were identically implemented into Topas. The absolute dose, profiles and field size factors measured using ionization chamber arrays were compared with both AcurosPT and Topas. Results: The beam spot size, σx, and the angular spread, σθ, in air were both energy-dependent: in particular, the spot size in air at isocentre ranged from 2.8 to 5.3 mm, and the angular spread ranged from 2.7 mrad to 6 mrad. The number of protons per MU increased from ∼9E7 at 100 MeV to ∼1.5E8 at 225 MeV. Both AcurosPT and TOPAS agree with experiment within 2 mm penumbra difference or 3% dose difference for scenarios including central axis depth dose and profiles at two depths in multi-spot square fields, from 40 to 200 mm, for all the investigated single-energy and multi-energy beams, indicating a clinically acceptable source model and radiation transport algorithm in water. Conclusion: By comparing measured data and TOPAS simulation using the same source model, AcurosPT 13.7 was validated in water within 2 mm penumbra difference or 3% dose difference. Benchmarks versus an independent Monte Carlo code are recommended to study the agreement in output, field size factors and penumbra differences. This project is partially supported by the Varian grant under the master agreement between University of Pennsylvania and Varian.

  17. On the development of a comprehensive MC simulation model for the Gamma Knife Perfexion radiosurgery unit

    NASA Astrophysics Data System (ADS)

    Pappas, E. P.; Moutsatsos, A.; Pantelis, E.; Zoros, E.; Georgiou, E.; Torrens, M.; Karaiskos, P.

    2016-02-01

    This work presents a comprehensive Monte Carlo (MC) simulation model for the Gamma Knife Perfexion (PFX) radiosurgery unit. Model-based dosimetry calculations were benchmarked in terms of relative dose profiles (RDPs) and output factors (OFs), against corresponding EBT2 measurements. To reduce the rather prolonged computational time associated with the comprehensive PFX model MC simulations, two approximations were explored and evaluated on the grounds of dosimetric accuracy. The first consists in directional biasing of the 60Co photon emission while the second refers to the implementation of simplified source geometric models. The effect of the dose scoring volume dimensions in OF calculations accuracy was also explored. RDP calculations for the comprehensive PFX model were found to be in agreement with corresponding EBT2 measurements. Output factors of 0.819  ±  0.004 and 0.8941  ±  0.0013 were calculated for the 4 mm and 8 mm collimator, respectively, which agree, within uncertainties, with corresponding EBT2 measurements and published experimental data. Volume averaging was found to affect OF results by more than 0.3% for scoring volume radii greater than 0.5 mm and 1.4 mm for the 4 mm and 8 mm collimators, respectively. Directional biasing of photon emission resulted in a time efficiency gain factor of up to 210 with respect to the isotropic photon emission. Although no considerable effect on relative dose profiles was detected, directional biasing led to OF overestimations which were more pronounced for the 4 mm collimator and increased with decreasing emission cone half-angle, reaching up to 6% for a 5° angle. Implementation of simplified source models revealed that omitting the sources’ stainless steel capsule significantly affects both OF results and relative dose profiles, while the aluminum-based bushing did not exhibit considerable dosimetric effect. In conclusion, the results of this work suggest that any PFX simulation model should be benchmarked in terms of both RDP and OF results.

  18. Evaluation of radiation doses and associated risk from the Fukushima nuclear accident to marine biota and human consumers of seafood

    PubMed Central

    Fisher, Nicholas S.; Beaugelin-Seiller, Karine; Hinton, Thomas G.; Baumann, Zofia; Madigan, Daniel J.; Garnier-Laplace, Jacqueline

    2013-01-01

    Radioactive isotopes originating from the damaged Fukushima nuclear reactor in Japan following the earthquake and tsunami in March 2011 were found in resident marine animals and in migratory Pacific bluefin tuna (PBFT). Publication of this information resulted in a worldwide response that caused public anxiety and concern, although PBFT captured off California in August 2011 contained activity concentrations below those from naturally occurring radionuclides. To link the radioactivity to possible health impairments, we calculated doses, attributable to the Fukushima-derived and the naturally occurring radionuclides, to both the marine biota and human fish consumers. We showed that doses in all cases were dominated by the naturally occurring alpha-emitter 210Po and that Fukushima-derived doses were three to four orders of magnitude below 210Po-derived doses. Doses to marine biota were about two orders of magnitude below the lowest benchmark protection level proposed for ecosystems (10 µGy⋅h⁻¹). The additional dose from Fukushima radionuclides to humans consuming tainted PBFT in the United States was calculated to be 0.9 and 4.7 µSv for average consumers and subsistence fishermen, respectively. Such doses are comparable to, or less than, the dose all humans routinely obtain from naturally occurring radionuclides in many food items, medical treatments, air travel, or other background sources. Although uncertainties remain regarding the assessment of cancer risk at low doses of ionizing radiation to humans, the dose received from PBFT consumption by subsistence fishermen can be estimated to result in two additional fatal cancer cases per 10,000,000 similarly exposed people. PMID:23733934
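
    The quoted figure of about two additional fatal cancers per ten million similarly exposed people is consistent with a simple linear no-threshold estimate; assuming a nominal fatal-cancer risk coefficient of roughly 5 x 10^-2 per sievert (our assumption, of the order of ICRP nominal values),

        4.7\ \mu\mathrm{Sv} \times 5\times10^{-2}\ \mathrm{Sv}^{-1} \;=\; 4.7\times10^{-6}\ \mathrm{Sv} \times 0.05\ \mathrm{Sv}^{-1} \;\approx\; 2.4\times10^{-7} \;\approx\; 2\ \text{fatal cancers per}\ 10^{7}\ \text{people}.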

  19. Collision-kerma conversion between dose-to-tissue and dose-to-water by photon energy-fluence corrections in low-energy brachytherapy

    NASA Astrophysics Data System (ADS)

    Giménez-Alventosa, Vicent; Antunes, Paula C. G.; Vijande, Javier; Ballester, Facundo; Pérez-Calatayud, José; Andreo, Pedro

    2017-01-01

    The AAPM TG-43 brachytherapy dosimetry formalism, introduced in 1995, has become a standard for brachytherapy dosimetry worldwide; it implicitly assumes that charged-particle equilibrium (CPE) exists for the determination of absorbed dose to water at different locations, except in the vicinity of the source capsule. Subsequent dosimetry developments, based on Monte Carlo calculations or analytical solutions of transport equations, do not rely on the CPE assumption and determine directly the dose to different tissues. At the time of relating dose to tissue and dose to water, or vice versa, it is usually assumed that the photon fluence in water and in tissues are practically identical, so that the absorbed dose in the two media can be related by their ratio of mass energy-absorption coefficients. In this work, an efficient way to correlate absorbed dose to water and absorbed dose to tissue in brachytherapy calculations at clinically relevant distances for low-energy photon-emitting seeds is proposed. A correction is introduced that is based on the ratio of the water-to-tissue photon energy-fluences. State-of-the-art Monte Carlo calculations are used to score photon fluence differential in energy in water and in various human tissues (muscle, adipose and bone), which in all cases include a realistic modelling of low-energy brachytherapy sources in order to benchmark the formalism proposed. The energy-fluence based corrections given in this work are able to correlate absorbed dose to tissue and absorbed dose to water with an accuracy better than 0.5% in the most critical cases (e.g. bone tissue).

  20. Collision-kerma conversion between dose-to-tissue and dose-to-water by photon energy-fluence corrections in low-energy brachytherapy.

    PubMed

    Giménez-Alventosa, Vicent; Antunes, Paula C G; Vijande, Javier; Ballester, Facundo; Pérez-Calatayud, José; Andreo, Pedro

    2017-01-07

    The AAPM TG-43 brachytherapy dosimetry formalism, introduced in 1995, has become a standard for brachytherapy dosimetry worldwide; it implicitly assumes that charged-particle equilibrium (CPE) exists for the determination of absorbed dose to water at different locations, except in the vicinity of the source capsule. Subsequent dosimetry developments, based on Monte Carlo calculations or analytical solutions of transport equations, do not rely on the CPE assumption and determine directly the dose to different tissues. At the time of relating dose to tissue and dose to water, or vice versa, it is usually assumed that the photon fluence in water and in tissues are practically identical, so that the absorbed dose in the two media can be related by their ratio of mass energy-absorption coefficients. In this work, an efficient way to correlate absorbed dose to water and absorbed dose to tissue in brachytherapy calculations at clinically relevant distances for low-energy photon-emitting seeds is proposed. A correction is introduced that is based on the ratio of the water-to-tissue photon energy-fluences. State-of-the-art Monte Carlo calculations are used to score photon fluence differential in energy in water and in various human tissues (muscle, adipose and bone), which in all cases include a realistic modelling of low-energy brachytherapy sources in order to benchmark the formalism proposed. The energy-fluence based corrections given in this work are able to correlate absorbed dose to tissue and absorbed dose to water with an accuracy better than 0.5% in the most critical cases (e.g. bone tissue).
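
    In schematic form (our notation, which may differ in detail from the paper's formalism), the proposed correction multiplies the usual ratio of spectrum-averaged mass energy-absorption coefficients by the ratio of the photon energy fluences actually present in the two media:

        \frac{D_{\mathrm{tissue}}}{D_{\mathrm{water}}} \;\approx\; \frac{(\overline{\mu_{\mathrm{en}}/\rho})_{\mathrm{tissue}}}{(\overline{\mu_{\mathrm{en}}/\rho})_{\mathrm{water}}} \times \frac{\Psi_{\mathrm{tissue}}}{\Psi_{\mathrm{water}}} ,

    with Psi the photon energy fluence at the point of interest; when the fluences in the two media are identical the second factor reduces to unity and the traditional conversion is recovered.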

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stathakis, S; Defoor, D; Saenz, D

    Purpose: Stereotactic radiosurgery (SRS) outcomes are related to the delivered dose to the target and to surrounding tissue. We have commissioned a Monte Carlo-based dose calculation algorithm to recalculate the delivered dose of plans originally computed with a pencil beam dose calculation engine. Methods: Twenty consecutive previously treated patients have been selected for this study. All plans were generated using the iPlan treatment planning system (TPS) and calculated using the pencil beam algorithm. Each patient plan consisted of 1 to 3 targets and was treated using dynamic conformal arcs or intensity-modulated beams. Multi-target treatments were delivered using multiple isocenters, one for each target. These plans were recalculated for the purpose of this study using a single isocenter. The CT image sets along with the plan, doses and structures were DICOM exported to the Monaco TPS and the dose was recalculated using the same voxel resolution and monitor units. Benchmark data were also generated prior to patient calculations to assess the accuracy of the two TPSs against measurements using a micro ionization chamber in solid water. Results: Good agreement, within −0.4% for Monaco and +2.2% for iPlan, was observed for measurements in a water phantom. Doses in patient geometry revealed up to 9.6% differences for single target plans and 9.3% for multiple-target-multiple-isocenter plans. The average dose differences for multi-target-single-isocenter plans were approximately 1.4%. Similar differences were observed for the OARs and integral dose. Conclusion: Accuracy of the beam is crucial for the dose calculation, especially in the case of small fields such as those used in SRS treatments. A superior dose calculation algorithm such as Monte Carlo, with properly commissioned beam models, which is unaffected by the lack of electronic equilibrium, should be preferred for the calculation of small fields to improve accuracy.

  2. Using a knowledge-based planning solution to select patients for proton therapy.

    PubMed

    Delaney, Alexander R; Dahele, Max; Tol, Jim P; Kuijper, Ingrid T; Slotman, Ben J; Verbakel, Wilko F A R

    2017-08-01

    Patient selection for proton therapy by comparing proton/photon treatment plans is time-consuming and prone to bias. RapidPlan™, a knowledge-based-planning solution, uses plan-libraries to model and predict organ-at-risk (OAR) dose-volume-histograms (DVHs). We investigated whether RapidPlan, utilizing an algorithm based only on photon beam characteristics, could generate proton DVH-predictions and whether these could correctly identify patients for proton therapy. Model PROT and Model PHOT comprised 30 head-and-neck cancer proton and photon plans, respectively. Proton and photon knowledge-based-plans (KBPs) were made for ten evaluation-patients. DVH-prediction accuracy was analyzed by comparing predicted-vs-achieved mean OAR doses. KBPs and manual plans were compared using salivary gland and swallowing muscle mean doses. For illustration, patients were selected for protons if predicted Model PHOT mean dose minus predicted Model PROT mean dose (ΔPrediction) for combined OARs was ≥ 6 Gy, and benchmarked using achieved KBP doses. Achieved and predicted Model PROT/Model PHOT mean dose R² was 0.95/0.98. Generally, achieved mean dose for Model PHOT/Model PROT KBPs was respectively lower/higher than predicted. Comparing Model PROT/Model PHOT KBPs with manual plans, salivary and swallowing mean doses increased/decreased by < 2 Gy, on average. ΔPrediction ≥ 6 Gy correctly selected 4 of 5 patients for protons. Knowledge-based DVH-predictions can provide efficient, patient-specific selection for protons. A proton-specific RapidPlan-solution could improve results. Copyright © 2017 Elsevier B.V. All rights reserved.
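
    The selection rule described above is straightforward to reproduce; in the hedged sketch below the patient data, OAR names and dictionary layout are invented for illustration and are not taken from the study.

```python
# Hedged sketch of the ΔPrediction selection rule described in the abstract.
# Numbers, OAR names and the data layout are invented for illustration.
THRESHOLD_GY = 6.0

def delta_prediction(pred_photon, pred_proton):
    """Mean predicted photon dose minus mean predicted proton dose
    over the combined organs-at-risk (Gy)."""
    oars = pred_photon.keys() & pred_proton.keys()
    return (sum(pred_photon[o] for o in oars) -
            sum(pred_proton[o] for o in oars)) / len(oars)

patients = {
    "pt01": ({"parotid_L": 28.0, "parotid_R": 31.0, "pcm": 45.0},   # photon-model predictions
             {"parotid_L": 20.0, "parotid_R": 22.0, "pcm": 38.0}),  # proton-model predictions
    "pt02": ({"parotid_L": 24.0, "parotid_R": 23.0, "pcm": 40.0},
             {"parotid_L": 22.0, "parotid_R": 21.0, "pcm": 37.0}),
}

for pid, (phot, prot) in patients.items():
    dp = delta_prediction(phot, prot)
    print(f"{pid}: ΔPrediction = {dp:.1f} Gy ->",
          "select for protons" if dp >= THRESHOLD_GY else "keep photons")
```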

  3. Pse-Analysis: a python package for DNA/RNA and protein/peptide sequence analysis based on pseudo components and kernel methods.

    PubMed

    Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen

    2017-02-21

    To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The powerful package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluating prediction quality. All the work a user needs to do is to input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor, followed by yielding the predicted results for the submitted query samples. All the aforementioned tedious jobs can be automatically done by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed about sixfold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be directly run on Windows, Linux, and Unix.
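
    The five automated steps listed above correspond to a standard supervised-learning workflow; the sketch below reproduces that workflow with scikit-learn on toy sequences and is not the Pse-Analysis API itself (the feature choice, model and parameter grid are our illustrative assumptions).

```python
# Conceptual sketch of the five-step pipeline described in the abstract,
# using scikit-learn rather than the Pse-Analysis package itself.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# toy benchmark dataset: DNA fragments with binary labels
sequences = ["ACGTACGTAC", "ACGGACGTTT", "TTTTAACCGG", "GGGTTTAACC",
             "ACGTACGGTA", "TTTAACCGGG", "ACGGTTACGT", "GGTTTAACCC"]
labels = [1, 1, 0, 0, 1, 0, 1, 0]

# (1) feature extraction: 2-mer (dinucleotide) composition
vectorizer = CountVectorizer(analyzer="char", ngram_range=(2, 2))
X = vectorizer.fit_transform(sequences)

# (2) optimal parameter selection and (3) model training
search = GridSearchCV(SVC(kernel="rbf"), {"C": [0.1, 1, 10]}, cv=2)
search.fit(X, labels)

# (4) cross validation and (5) evaluating prediction quality
scores = cross_val_score(search.best_estimator_, X, labels, cv=2)
print("best C:", search.best_params_["C"], "CV accuracy:", scores.mean())

# prediction for query sequences
queries = vectorizer.transform(["ACGTACGTTT", "TTTTAACCCC"])
print("predicted labels:", search.best_estimator_.predict(queries))
```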

  4. Practice Benchmarking in the Age of Targeted Auditing

    PubMed Central

    Langdale, Ryan P.; Holland, Ben F.

    2012-01-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists. PMID:23598847

  5. Practice benchmarking in the age of targeted auditing.

    PubMed

    Langdale, Ryan P; Holland, Ben F

    2012-11-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists.

  6. RETRAN03 benchmarks for Beaver Valley plant transients and FSAR analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaumont, E.T.; Feltus, M.A.

    1993-01-01

    Any best-estimate code (e.g., RETRAN03) results must be validated against plant data and final safety analysis report (FSAR) predictions. Two independent means of benchmarking are necessary to ensure that the results were not biased toward a particular data set and to have a certain degree of accuracy. The code results need to be compared with previous results and should show improvement over them. Ideally, the two best means of benchmarking a thermal hydraulics code are comparing results from previous versions of the same code along with actual plant data. This paper describes RETRAN03 benchmarks against RETRAN02 results, actual plant data, and FSAR predictions. RETRAN03, the Electric Power Research Institute's latest version of the RETRAN thermal-hydraulic analysis codes, offers several upgrades over its predecessor, RETRAN02 Mod5. RETRAN03 can use either implicit or semi-implicit numerics, whereas RETRAN02 Mod5 uses only semi-implicit numerics. Another major upgrade deals with slip model options. RETRAN03 added several new models, including a five-equation model for more accurate modeling of two-phase flow. RETRAN02 Mod5 should give similar but slightly more conservative results than RETRAN03 when executed with RETRAN02 Mod5 options.

  7. Dose audit for patients undergoing two common radiography examinations with digital radiology systems.

    PubMed

    İnal, Tolga; Ataç, Gökçe

    2014-01-01

    We aimed to determine the radiation doses delivered to patients undergoing general examinations using computed or digital radiography systems in Turkey. Radiographs of 20 patients undergoing posteroanterior chest X-ray and of 20 patients undergoing anteroposterior kidney-ureter-bladder radiography were evaluated in five X-ray rooms at four local hospitals in the Ankara region. Currently, almost all radiology departments in Turkey have switched from conventional radiography systems to computed radiography or digital radiography systems. Patient dose was measured for both systems. The results were compared with published diagnostic reference levels (DRLs) from the European Union and International Atomic Energy Agency. The average entrance surface doses (ESDs) for chest examinations exceeded established international DRLs at two of the X-ray rooms in a hospital with computed radiography. All of the other ESD measurements were approximately equal to or below the DRLs for both examinations in all of the remaining hospitals. Improper adjustment of the exposure parameters, uncalibrated automatic exposure control systems, and failure of the technologists to choose exposure parameters properly were problems we noticed during the study. This study is an initial attempt at establishing local DRL values for digital radiography systems, and will provide a benchmark so that the authorities can establish reference dose levels for diagnostic radiology in Turkey.

  8. Experimental verification of a CT-based Monte Carlo dose-calculation method in heterogeneous phantoms.

    PubMed

    Wang, L; Lovelock, M; Chui, C S

    1999-12-01

    To further validate the Monte Carlo dose-calculation method [Med. Phys. 25, 867-878 (1998)] developed at the Memorial Sloan-Kettering Cancer Center, we have performed experimental verification in various inhomogeneous phantoms. The phantom geometries included simple layered slabs, a simulated bone column, a simulated missing-tissue hemisphere, and an anthropomorphic head geometry (Alderson Rando Phantom). The densities of the inhomogeneity range from 0.14 to 1.86 g/cm3, simulating both clinically relevant lunglike and bonelike materials. The data are reported as central axis depth doses, dose profiles, dose values at points of interest, such as points at the interface of two different media and in the "nasopharynx" region of the Rando head. The dosimeters used in the measurement included dosimetry film, TLD chips, and rods. The measured data were compared to that of Monte Carlo calculations for the same geometrical configurations. In the case of the Rando head phantom, a CT scan of the phantom was used to define the calculation geometry and to locate the points of interest. The agreement between the calculation and measurement is generally within 2.5%. This work validates the accuracy of the Monte Carlo method. While Monte Carlo, at present, is still too slow for routine treatment planning, it can be used as a benchmark against which other dose calculation methods can be compared.

  9. Analysis of 2D Torus and Hub Topologies of 100Mb/s Ethernet for the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Pedretti, Kevin T.; Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    A variety of different network technologies and topologies are currently being evaluated as part of the Whitney Project. This paper reports on the implementation and performance of a Fast Ethernet network configured in a 4x4 2D torus topology in a testbed cluster of 'commodity' Pentium Pro PCs. Several benchmarks were used for performance evaluation: an MPI point to point message passing benchmark, an MPI collective communication benchmark, and the NAS Parallel Benchmarks version 2.2 (NPB2). Our results show that for point to point communication on an unloaded network, the hub and 1 hop routes on the torus have about the same bandwidth and latency. However, the bandwidth decreases and the latency increases on the torus for each additional route hop. Collective communication benchmarks show that the torus provides roughly four times more aggregate bandwidth and eight times faster MPI barrier synchronizations than a hub based network for 16 processor systems. Finally, the SOAPBOX benchmarks, which simulate real-world CFD applications, generally demonstrated substantially better performance on the torus than on the hub. In the few cases the hub was faster, the difference was negligible. In total, our experimental results lead to the conclusion that for Fast Ethernet networks, the torus topology has better performance and scales better than a hub based network.
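
    The per-hop latency growth noted above follows directly from the minimal hop count on a wrap-around mesh; the short sketch below (illustrative only, not taken from the paper) computes the hop-count distribution for a 4x4 torus.

```python
# Minimal-hop distance on a 4x4 2D torus (wrap-around mesh); illustrative only.
import itertools
from collections import Counter

N = 4  # torus is N x N

def torus_hops(a, b):
    """Minimal number of link hops between nodes a=(ax,ay) and b=(bx,by)."""
    dx = abs(a[0] - b[0]); dy = abs(a[1] - b[1])
    return min(dx, N - dx) + min(dy, N - dy)

nodes = list(itertools.product(range(N), range(N)))
hops = Counter(torus_hops(a, b) for a, b in itertools.combinations(nodes, 2))

print("hop-count distribution over all node pairs:", dict(sorted(hops.items())))
print("average hops on the torus: %.2f"
      % (sum(h * c for h, c in hops.items()) / sum(hops.values())))
# On a hub, every pair shares one central switch, so latency is uniform but all
# traffic contends for the hub's aggregate bandwidth, which is consistent with
# the roughly fourfold collective-bandwidth advantage of the torus reported above.
```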

  10. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Scott E., E-mail: sedavids@utmb.edu

    Purpose: A previously reported dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. Conclusions: A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurement. While this tool can be applied in general use for a particular linac model, it was developed specifically to provide a single methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.

  11. Ontology for Semantic Data Integration in the Domain of IT Benchmarking.

    PubMed

    Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut

    2018-01-01

    A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.

  12. Encoding color information for visual tracking: Algorithms and benchmark.

    PubMed

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.

  13. Sensitivity of super-efficient data envelopment analysis results to individual decision-making units: an example of surgical workload by specialty.

    PubMed

    Dexter, Franklin; O'Neill, Liam; Xin, Lei; Ledolter, Johannes

    2008-12-01

    We use resampling of data to explore the basic statistical properties of super-efficient data envelopment analysis (DEA) when used as a benchmarking tool by the manager of a single decision-making unit. Our focus is the gaps in the outputs (i.e., slacks adjusted for upward bias), as they reveal which outputs can be increased. The numerical experiments show that the estimates of the gaps fail to exhibit asymptotic consistency, a property expected for standard statistical inference. Specifically, increased sample sizes were not always associated with more accurate forecasts of the output gaps. The baseline DEA's gaps equaled the mode of the jackknife and the mode of resampling with/without replacement from any subset of the population; usually, the baseline DEA's gaps also equaled the median. The quartile deviations of gaps were close to zero when few decision-making units were excluded from the sample and the study unit happened to have few other units contributing to its benchmark. The results for the quartile deviations can be explained in terms of the effective combinations of decision-making units that contribute to the DEA solution. The jackknife can provide all the combinations contributing to the quartile deviation and only needs to be performed for those units that are part of the benchmark set. These results show that there is a strong rationale for examining DEA results with a sensitivity analysis that excludes one benchmark hospital at a time. This analysis enhances the quality of decision support using DEA estimates for the potential of a decision-making unit to grow one or more of its outputs.
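
    The leave-one-unit-out sensitivity analysis recommended above can be sketched even around a plain input-oriented CCR model (the study itself analyses output gaps in a super-efficient formulation); in the sketch below the data, the unit labels and the single-input/two-output structure are invented for illustration.

```python
# Hedged sketch: leave-one-DMU-out sensitivity of a plain input-oriented CCR
# DEA score, illustrating the jackknife idea described in the abstract.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(x0, y0, X, Y):
    """Input-oriented CCR efficiency of a unit (x0, y0) against reference
    units with input matrix X (n x m) and output matrix Y (n x s)."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimize theta
    A_in = np.c_[-x0.reshape(-1, 1), X.T]        # sum_j lam_j * x_ij <= theta * x_i0
    A_out = np.c_[np.zeros((len(y0), 1)), -Y.T]  # sum_j lam_j * y_rj >= y_r0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(len(x0)), -y0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# invented data: 6 units, 1 input (staffed hours), 2 outputs (cases, RVUs)
X = np.array([[100.], [120.], [90.], [110.], [130.], [105.]])
Y = np.array([[80., 200.], [95., 260.], [70., 150.],
              [90., 240.], [85., 230.], [88., 210.]])

study = 5                                        # benchmark the last unit
base = ccr_efficiency(X[study], Y[study], X, Y)
print(f"baseline efficiency of unit {study}: {base:.3f}")
for j in range(len(X)):
    if j == study:
        continue
    keep = [k for k in range(len(X)) if k != j]  # jackknife: drop one peer at a time
    e = ccr_efficiency(X[study], Y[study], X[keep], Y[keep])
    print(f"  without unit {j}: efficiency {e:.3f}")
```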

  14. Benchmarking the evaluated proton differential cross sections suitable for the EBS analysis of natSi and 16O

    NASA Astrophysics Data System (ADS)

    Kokkoris, M.; Dede, S.; Kantre, K.; Lagoyannis, A.; Ntemou, E.; Paneta, V.; Preketes-Sigalas, K.; Provatas, G.; Vlastou, R.; Bogdanović-Radović, I.; Siketić, Z.; Obajdin, N.

    2017-08-01

    The evaluated proton differential cross sections suitable for the Elastic Backscattering Spectroscopy (EBS) analysis of natSi and 16O, as obtained from SigmaCalc 2.0, have been benchmarked over a wide energy and angular range at two different accelerator laboratories, namely at N.C.S.R. 'Demokritos', Athens, Greece and at Ruđer Bošković Institute (RBI), Zagreb, Croatia, using a variety of high-purity thick targets of known stoichiometry. The results are presented in graphical and tabular forms, while the observed discrepancies, as well as, the limits in accuracy of the benchmarking procedure, along with target related effects, are thoroughly discussed and analysed. In the case of oxygen the agreement between simulated and experimental spectra was generally good, while for silicon serious discrepancies were observed above Ep,lab = 2.5 MeV, suggesting that a further tuning of the appropriate nuclear model parameters in the evaluated differential cross-section datasets is required.

  15. High-Level Ab Initio Calculations of Intermolecular Interactions: Heavy Main-Group Element π-Interactions.

    PubMed

    Krasowska, Małgorzata; Schneider, Wolfgang B; Mehring, Michael; Auer, Alexander A

    2018-05-02

    This work reports high-level ab initio calculations and a detailed analysis on the nature of intermolecular interactions of heavy main-group element compounds and π systems. For this purpose we have chosen a set of benchmark molecules of the form MR3, in which M = As, Sb, or Bi, and R = CH3, OCH3, or Cl. Several methods for the description of weak intermolecular interactions are benchmarked, including DFT-D, DFT-SAPT, MP2, and high-level coupled cluster methods in the DLPNO-CCSD(T) approximation. Using local energy decomposition (LED) and an analysis of the electron density, details of the nature of this interaction are unraveled. The results yield insight into the nature of dispersion and donor-acceptor interactions in this type of system, including systematic trends in the periodic table, and also provide a benchmark for dispersion interactions in heavy main-group element compounds. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)

    NASA Astrophysics Data System (ADS)

    Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.

    2017-09-01

    Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing, that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a `direct' measurement found by adjustment of the original ENDF format file.
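
    For reference, propagation of nuclear-data covariances to the keff uncertainty of a benchmark is conventionally done with the first-order "sandwich" rule (standard practice, stated here as background rather than quoted from the paper):

        \left(\frac{\Delta k}{k}\right)^{2} \;=\; S^{\mathsf T} M\, S ,

    where S is the vector of relative sensitivities of keff to the nuclear data (including the P1 scattering-distribution and constrained-chi sensitivities mentioned above) and M is the corresponding relative covariance matrix taken from, e.g., ENDF/B, JEFF, JENDL or TENDL.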

  17. Evaluation of neutron skyshine from a cyclotron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huyashi, K.; Nakamura, T.

    1984-06-01

    The dose distribution and the spectrum variation of neutrons due to the skyshine effect have been measured with various detectors in the environment surrounding the cyclotron of the Institute for Nuclear Study, University of Tokyo. The source neutrons were produced by stopping a 52-MeV proton beam in a carbon beam stopper and were extracted upward from the opening in the concrete shield surrounding the cyclotron and then leaked into the atmosphere through the cyclotron building. The dose distribution and the spectrum of neutrons near the beam stopper were also measured in order to get information on the skyshine source. The measured skyshine neutron spectra and dose distribution were analyzed with two codes, MMCR2 and SKYSHINE-II, with the result that the calculated results are in good agreement with the experiment. Valuable characteristics of this experiment are the determination of the energy spectrum and dose distribution of the source neutrons and the measurement of skyshine neutrons from an actual large-scale accelerator building to the exclusion of direct neutrons transported through the air. This experiment should be useful as a benchmark experiment on the skyshine phenomenon.

  18. Effective Dose in Nuclear Medicine Studies and SPECT/CT: Dosimetry Survey Across Quebec Province.

    PubMed

    Charest, Mathieu; Asselin, Chantal

    2018-06-01

    The aims of the current study were to draw a portrait of the delivered dose in selected nuclear medicine studies in Québec province and to assess the degree of change between an earlier survey performed in 2010 and a later survey performed in 2014. Methods: Each surveyed nuclear medicine department had to complete 2 forms: the first, about the administered activity in selected nuclear medicine studies, and the second, about the CT parameters used in SPECT/CT imaging, if available. The administered activities were converted into effective doses using the most recent conversion factors. Diagnostic reference levels were computed for each imaging procedure to obtain a benchmark for comparison. Results: The distributions of administered activity in various nuclear medicine studies, along with the corresponding distribution of the effective doses, were determined. Excluding 131I for thyroid studies, 67Ga-citrate for infectious workups, and combined stress and rest myocardial perfusion studies, the remainder of the 99mTc-based studies delivered average effective doses clustered below 10 mSv. Between the 2010 survey and the 2014 survey, there was a statistically significant decrease in the delivered dose for myocardial perfusion studies, from 18.3 to 14.5 mSv. 67Ga-citrate studies for infectious workups also showed a significant decrease in delivered dose from 31.0 to 26.2 mSv. The standardized CT portion of SPECT/CT studies yielded a mean effective dose 14 times lower than the radiopharmaceutical portion of the study. Conclusion: Between 2010 and 2014, there was a significant decrease in the delivered effective dose in myocardial perfusion and 67Ga-citrate studies. The CT portions of the surveyed SPECT/CT studies contributed a relatively small fraction of the total delivered effective dose. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
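
    Converting surveyed administered activities into effective doses and deriving a diagnostic reference level is simple arithmetic; in the sketch below the dose coefficient and the use of the 75th percentile as the DRL are our illustrative assumptions, not values taken from the survey.

```python
# Hedged sketch: administered activity -> effective dose, and a DRL benchmark.
# The dose coefficient and the percentile choice are illustrative assumptions.
import numpy as np

activities_MBq = np.array([750, 800, 900, 780, 850, 950, 700, 820])  # one procedure, several sites
dose_coeff_mSv_per_MBq = 0.0079        # placeholder value for a 99mTc radiopharmaceutical

effective_dose_mSv = activities_MBq * dose_coeff_mSv_per_MBq
drl_MBq = np.percentile(activities_MBq, 75)   # DRLs are often set at the 75th percentile

print("effective doses (mSv):", np.round(effective_dose_mSv, 1))
print("mean effective dose: %.1f mSv" % effective_dose_mSv.mean())
print("survey DRL (administered activity): %.0f MBq" % drl_MBq)
```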

  19. Space Weathering Experiments on Spacecraft Materials

    NASA Technical Reports Server (NTRS)

    Engelhart, D. P.; Cooper, R.; Cowardin, H.; Maxwell, J.; Plis, E.; Ferguson, D.; Barton, D.; Schiefer, S.; Hoffmann, R.

    2017-01-01

    A project to investigate space environment effects on specific materials with interest to remote sensing was initiated in 2016. The goal of the project is to better characterize changes in the optical properties of polymers found in multi-layered spacecraft insulation (MLI) induced by electron bombardment. Previous analysis shows that chemical bonds break and potentially reform when exposed to high energy electrons like those seen in orbit. These chemical changes have been shown to alter a material's optical reflectance, among other material properties. This paper presents the initial experimental results of MLI materials exposed to various fluences of high energy electrons, designed to simulate a portion of the geosynchronous Earth orbit (GEO) space environment. It is shown that the spectral reflectance of some of the tested materials changes as a function of electron dose. These results provide an experimental benchmark for analysis of aging effects on satellite systems which can be used to improve remote sensing and space situational awareness. They also provide preliminary analysis on those materials that are most likely to comprise the high area-to-mass ratio (HAMR) population of space debris in the geosynchronous orbit environment. Finally, the results presented in this paper serve as a proof of concept for simulated environmental aging of spacecraft polymers that should lead to more experiments using a larger subset of spacecraft materials.

  20. RNA-seq mixology: designing realistic control experiments to compare protocols and analysis methods

    PubMed Central

    Holik, Aliaksei Z.; Law, Charity W.; Liu, Ruijie; Wang, Zeya; Wang, Wenyi; Ahn, Jaeil; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.

    2017-01-01

    Carefully designed control experiments provide a gold standard for benchmarking different genomics research tools. A shortcoming of many gene expression control studies is that replication involves profiling the same reference RNA sample multiple times. This leads to low, pure technical noise that is atypical of regular studies. To achieve a more realistic noise structure, we generated a RNA-sequencing mixture experiment using two cell lines of the same cancer type. Variability was added by extracting RNA from independent cell cultures and degrading particular samples. The systematic gene expression changes induced by this design allowed benchmarking of different library preparation kits (standard poly-A versus total RNA with Ribozero depletion) and analysis pipelines. Data generated using the total RNA kit had more signal for introns and various RNA classes (ncRNA, snRNA, snoRNA) and less variability after degradation. For differential expression analysis, voom with quality weights marginally outperformed other popular methods, while for differential splicing, DEXSeq was simultaneously the most sensitive and the most inconsistent method. For sample deconvolution analysis, DeMix outperformed IsoPure convincingly. Our RNA-sequencing data set provides a valuable resource for benchmarking different protocols and data pre-processing workflows. The extra noise mimics routine lab experiments more closely, ensuring any conclusions are widely applicable. PMID:27899618

  1. Precise Ages for the Benchmark Brown Dwarfs HD 19467 B and HD 4747 B

    NASA Astrophysics Data System (ADS)

    Wood, Charlotte; Boyajian, Tabetha; Crepp, Justin; von Braun, Kaspar; Brewer, John; Schaefer, Gail; Adams, Arthur; White, Tim

    2018-01-01

    Large uncertainty in the age of brown dwarfs, stemming from a mass-age degeneracy, makes it difficult to constrain substellar evolutionary models. To break the degeneracy, we need "benchmark" brown dwarfs (found in binary systems) whose ages can be determined independent of their masses. HD 19467 B and HD 4747 B are two benchmark brown dwarfs detected through the TRENDS (TaRgeting bENchmark objects with Doppler Spectroscopy) high-contrast imaging program for which we have dynamical mass measurements. To constrain their ages independently through isochronal analysis, we measured the radii of the host stars with interferometry using the Center for High Angular Resolution Astronomy (CHARA) Array. Assuming the brown dwarfs have the same ages as their host stars, we use these results to distinguish between several substellar evolutionary models. In this poster, we present new age estimates for HD 19467 and HD 4747 that are more accurate and precise and show our preliminary comparisons to cooling models.

  2. A tiered asthma hazard characterization and exposure assessment approach for evaluation of consumer product ingredients.

    PubMed

    Maier, Andrew; Vincent, Melissa J; Parker, Ann; Gadagbui, Bernard K; Jayjock, Michael

    2015-12-01

    Asthma is a complex syndrome with significant consequences for those affected. The number of individuals affected is growing, although the reasons for the increase are uncertain. Ensuring the effective management of potential exposures follows from substantial evidence that exposure to some chemicals can increase the likelihood of asthma responses. We have developed a safety assessment approach tailored to the screening of asthma risks from residential consumer product ingredients as a proactive risk management tool. Several key features of the proposed approach advance the assessment resources often used for asthma issues. First, a quantitative health benchmark for asthma or related endpoints (irritation and sensitization) is provided that extends qualitative hazard classification methods. Second, a parallel structure is employed to include dose-response methods for asthma endpoints and methods for scenario specific exposure estimation. The two parallel tracks are integrated in a risk characterization step. Third, a tiered assessment structure is provided to accommodate different amounts of data for both the dose-response assessment (i.e., use of existing benchmarks, hazard banding, or the threshold of toxicological concern) and exposure estimation (i.e., use of empirical data, model estimates, or exposure categories). Tools building from traditional methods and resources have been adapted to address specific issues pertinent to asthma toxicology (e.g., mode-of-action and dose-response features) and the nature of residential consumer product use scenarios (e.g., product use patterns and exposure durations). A case study for acetic acid as used in various sentinel products and residential cleaning scenarios was developed to test the safety assessment methodology. In particular, the results were used to refine and verify relationships among tiered approaches such that each lower data tier in the approach provides a similar or greater margin of safety for a given scenario. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
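
    The risk characterization step that integrates the two parallel tracks is, in essence, a margin-of-exposure comparison; schematically (our formulation, not wording from the paper),

        \mathrm{MOE} \;=\; \frac{\text{asthma-relevant health benchmark (e.g. POD or derived reference value)}}{\text{scenario-specific exposure estimate}} ,

    with MOE values at or above a tier-dependent target (reflecting the uncertainty factors of that tier) screening the scenario as acceptable.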

  3. Practical application of the benchmarking technique to increase reliability and efficiency of power installations and main heat-mechanic equipment of thermal power plants

    NASA Astrophysics Data System (ADS)

    Rimov, A. A.; Chukanova, T. I.; Trofimov, Yu. V.

    2016-12-01

    Data on the variants of comparative quality analysis of power installations (benchmarking) applied in the power industry are systematized. It is shown that the most efficient variant of the benchmarking technique is the analysis of statistical distributions of the indicators within a homogeneous group of comparable power installations. Building on this approach, a benchmarking technique is developed that is aimed at revealing the available reserves for improving the reliability and heat-efficiency indicators of power installations at thermal power plants. The technique makes it possible to compare reliably the quality of power installations within a homogeneous group of limited size and to adopt adequate decisions on improving particular technical characteristics of a given installation. It structures the list of comparison indicators and the internal factors affecting them in accordance with the requirements of the sectoral standards and with account for price formation in the Russian power industry; this structuring ensures traceability of the reasons for deviation of the internal influencing factors from their specified values. The starting point for a further detailed analysis of a given installation's lag behind best practice, expressed in specific monetary terms, is positioning that installation on the distribution of the key indicator, which is a convolution of the comparison indicators. The distribution of the key indicator is simulated by the Monte Carlo method after the actual distributions of the comparison indicators have been obtained: the specific lost profit due to the short supply of electric energy and power, the specific cost of losses due to non-optimal expenditures on repairs, and the specific cost of excess fuel-equivalent consumption. Quality-loss indicators are introduced to facilitate the analysis of the benchmarking results; they represent the quality loss of a given installation as the difference between the actual value of the key indicator (or a comparison indicator) and the best quartile of the observed distribution. The uncertainty of the resulting quality-loss values is evaluated by propagating the standard uncertainties of the input quantities into expanded uncertainties of the output quantities at a 95% confidence level. The efficiency of the technique is demonstrated by benchmarking the main thermal and mechanical equipment of T-250 extraction power-generating units and power installations of thermal power plants with a main steam pressure of 130 atm.
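
    A minimal sketch of the quality-loss calculation described above, assuming the key indicator is a simple weighted convolution of the three specific-cost comparison indicators and that lower values are better; the distributions, weights, and the unit's value are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo sample size

# Illustrative distributions of the three comparison indicators (lower is better).
# In the technique these would be the actual distributions observed in the
# homogeneous group of power installations.
lost_profit   = rng.lognormal(mean=3.0, sigma=0.4, size=n)  # short supply of energy/power
repair_losses = rng.lognormal(mean=2.5, sigma=0.5, size=n)  # non-optimal repair expenditures
excess_fuel   = rng.lognormal(mean=2.8, sigma=0.3, size=n)  # excess fuel-equivalent consumption

# Key indicator as a weighted convolution of the comparison indicators (weights assumed).
key_indicator = 0.4 * lost_profit + 0.3 * repair_losses + 0.3 * excess_fuel

best_quartile = np.percentile(key_indicator, 25)  # "best practice" boundary (lower is better)
unit_value = 55.0                                 # hypothetical value for the unit being positioned
quality_loss = max(unit_value - best_quartile, 0.0)
print(f"best quartile: {best_quartile:.1f}, quality loss: {quality_loss:.1f}")
```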

  4. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography.

    PubMed

    Treiber, O; Wanninger, F; Führ, H; Panzer, W; Regulla, D; Winkler, G

    2003-02-21

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  5. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    NASA Astrophysics Data System (ADS)

    Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.

    2003-02-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, R; Zhu, X; Li, S

    Purpose: High Dose Rate (HDR) brachytherapy forward planning is principally an iterative process; hence, plan quality is affected by planners' experience and limited planning time. Thus, this may lead to sporadic errors and inconsistencies in planning. A statistical tool based on previously approved clinical treatment plans would help to maintain the consistency of planning quality and improve the efficiency of second checking. Methods: An independent dose calculation tool was developed from commercial software. Thirty-three previously approved cervical HDR plans with the same prescription dose (550 cGy), applicator type, and treatment protocol were examined, and ICRU-defined reference point doses (bladder, vaginal mucosa, rectum, and points A/B) along with dwell times were collected. The dose calculation tool then calculated an appropriate range with a 95% confidence interval for each parameter obtained, which was used as the benchmark for evaluation of those parameters in future HDR treatment plans. Model quality was verified using five randomly selected approved plans from the same dataset. Results: Dose variations appear to be larger at the bladder and mucosa reference points than at the rectum. Most reference point doses from the verification plans fell within the predicted ranges, except the doses at two rectum points and two reference position A points (owing to rectal anatomical variations and clinical adjustment of prescription points, respectively). Similar results were obtained for tandem and ring dwell times despite relatively larger uncertainties. Conclusion: This statistical tool provides insight into the clinically acceptable range of cervical HDR plans, which could be useful in plan checking and identifying potential planning errors, thus improving the consistency of plan quality.
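
    A minimal sketch of the plan-checking idea, assuming the reference-point doses of previously approved plans are approximately normally distributed so that a 95% interval can be formed from the sample mean and standard deviation; the dose values, interval construction, and reference-point name are illustrative assumptions, not the study's data or software.

```python
import numpy as np

# Illustrative bladder reference-point doses (cGy) from previously approved plans.
approved_bladder_doses = np.array([310, 295, 340, 360, 330, 305, 350, 325, 315, 345])

mean = approved_bladder_doses.mean()
sd = approved_bladder_doses.std(ddof=1)
low, high = mean - 1.96 * sd, mean + 1.96 * sd  # approximate 95% interval

def check(value, name="bladder"):
    """Flag a new plan's reference-point dose that falls outside the benchmark range."""
    status = "OK" if low <= value <= high else "REVIEW"
    print(f"{name}: {value:.0f} cGy, benchmark range [{low:.0f}, {high:.0f}] -> {status}")

check(332)  # typical plan, within range
check(410)  # would be flagged for closer second checking
```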

  7. Open Rotor - Analysis of Diagnostic Data

    NASA Technical Reports Server (NTRS)

    Envia, Edmane

    2011-01-01

    NASA is researching open rotor propulsion as part of its technology research and development plan for addressing the subsonic transport aircraft noise, emission and fuel burn goals. The low-speed wind tunnel test for investigating the aerodynamic and acoustic performance of a benchmark blade set at the approach and takeoff conditions has recently concluded. A high-speed wind tunnel diagnostic test campaign has begun to investigate the performance of this benchmark open rotor blade set at the cruise condition. Databases from both speed regimes will comprise a comprehensive collection of benchmark open rotor data for use in assessing/validating aerodynamic and noise prediction tools (component & system level) as well as providing insights into the physics of open rotors to help guide the development of quieter open rotors.

  8. ARCHERRT – A GPU-based and photon-electron coupled Monte Carlo dose computing engine for radiation therapy: Software development and application to helical tomotherapy

    PubMed Central

    Su, Lin; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X. George

    2014-01-01

    Purpose: Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHERRT is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head & neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. The gamma index test was performed for voxels whose dose is greater than 10% of the maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung, and head & neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to the specific architecture of the GPU, the modified Woodcock tracking algorithm performed worse than the original one. ARCHERRT achieves a fast speed for PSF-based dose calculations. With a single M2090 card, the simulations cost about 60, 50, and 80 s for the three cases, respectively, with 1% statistical error in the PTV. Using the latest K40 card, the simulations are 1.7–1.8 times faster. More impressively, six M2090 cards could finish the simulations in 8.9–13.4 s. For comparison, the same simulations on an Intel E5-2620 (12 hyperthreaded threads) cost about 500–800 s. Conclusions: ARCHERRT was developed successfully to perform fast and accurate MC dose calculation for radiotherapy using PSFs and patient CT phantoms. PMID:24989378

  9. ARCHERRT - a GPU-based and photon-electron coupled Monte Carlo dose computing engine for radiation therapy: software development and application to helical tomotherapy.

    PubMed

    Su, Lin; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X George

    2014-07-01

    Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHERRT is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head & neck. To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. The gamma index test was performed for voxels whose dose is greater than 10% of the maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung, and head & neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to the specific architecture of the GPU, the modified Woodcock tracking algorithm performed worse than the original one. ARCHERRT achieves a fast speed for PSF-based dose calculations. With a single M2090 card, the simulations cost about 60, 50, and 80 s for the three cases, respectively, with 1% statistical error in the PTV. Using the latest K40 card, the simulations are 1.7-1.8 times faster. More impressively, six M2090 cards could finish the simulations in 8.9-13.4 s. For comparison, the same simulations on an Intel E5-2620 (12 hyperthreaded threads) cost about 500-800 s. ARCHERRT was developed successfully to perform fast and accurate MC dose calculation for radiotherapy using PSFs and patient CT phantoms.
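
    A brute-force sketch of a global 2%/2 mm gamma pass-rate calculation of the kind reported above, restricted to voxels above 10% of the maximum dose; this is a generic, unoptimized illustration on a toy grid, not the analysis code used in the study.

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dose_frac=0.02, dta=2.0, cutoff=0.10):
    """Global gamma analysis by brute-force search over a small neighbourhood.

    ref, ev   : 3D dose arrays on the same grid (reference and evaluated)
    spacing   : voxel size in mm along each axis
    dose_frac : dose-difference criterion as a fraction of the reference maximum (2%)
    dta       : distance-to-agreement criterion in mm (2 mm)
    cutoff    : evaluate only voxels above this fraction of the maximum dose (10%)
    """
    dmax = ref.max()
    dd = dose_frac * dmax
    half = [int(np.ceil(dta / s)) for s in spacing]  # search half-width in voxels
    passed = total = 0
    for idx in np.ndindex(ev.shape):
        if ev[idx] < cutoff * dmax:
            continue
        total += 1
        best = np.inf
        for off in np.ndindex(*(2 * h + 1 for h in half)):
            nb = tuple(i + o - h for i, o, h in zip(idx, off, half))
            if any(n < 0 or n >= s for n, s in zip(nb, ref.shape)):
                continue
            dist2 = sum(((o - h) * s) ** 2 for o, h, s in zip(off, half, spacing))
            best = min(best, ((ev[idx] - ref[nb]) / dd) ** 2 + dist2 / dta ** 2)
        passed += best <= 1.0
    return passed / total if total else float("nan")

# Tiny self-check on identical random "dose" grids: the pass rate should be 1.0.
d = np.random.default_rng(1).random((20, 20, 20))
print(gamma_pass_rate(d, d.copy(), spacing=(2.0, 2.0, 2.0)))
```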

  10. A benchmark for comparison of dental radiography analysis algorithms.

    PubMed

    Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia

    2016-07-01

    Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made to develop computerized dental X-ray image analysis systems for clinical use. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray images and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/). Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. The material composition was defined for each assembly ring separately, allowing us to decompose the sensitivities not only by isotopes and reactions but also by spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. The similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified the main contributors to the calculation bias.

  12. Data processing has major impact on the outcome of quantitative label-free LC-MS analysis.

    PubMed

    Chawade, Aakash; Sandin, Marianne; Teleman, Johan; Malmström, Johan; Levander, Fredrik

    2015-02-06

    High-throughput multiplexed protein quantification using mass spectrometry is steadily increasing in popularity, with the two major techniques being data-dependent acquisition (DDA) and targeted acquisition using selected reaction monitoring (SRM). However, both techniques involve extensive data processing, which can be performed by a multitude of different software solutions. Analysis of quantitative LC-MS/MS data is mainly performed in three major steps: processing of raw data, normalization, and statistical analysis. To evaluate the impact of data processing steps, we developed two new benchmark data sets, one each for DDA and SRM, with samples consisting of a long-range dilution series of synthetic peptides spiked in a total cell protein digest. The generated data were processed by eight different software workflows and three postprocessing steps. The results show that the choice of the raw data processing software and the postprocessing steps play an important role in the final outcome. Also, the linear dynamic range of the DDA data could be extended by an order of magnitude through feature alignment and a charge state merging algorithm proposed here. Furthermore, the benchmark data sets are made publicly available for further benchmarking and software developments.

  13. Determination of a site-specific reference dose for methylmercury for fish-eating populations.

    PubMed

    Shipp, A M; Gentry, P R; Lawrence, G; Van Landingham, C; Covington, T; Clewell, H J; Gribben, K; Crump, K

    2000-11-01

    Environmental risk-management decisions in the U.S. involving potential exposures to methylmercury currently use a reference dose (RfD) developed by the U.S. Environmental Protection Agency (USEPA). This RfD is based on retrospective studies of an acute poisoning incident in Iraq in which grain contaminated with a methylmercury fungicide was inadvertently used in the baking of bread. The exposures, which were relatively high but lasted only a few months, were associated with neurological effects in both adults (primarily paresthesia) and infants (late walking, late talking, etc.). It is generally believed that the developing fetus represents a particularly sensitive subpopulation for the neurological effects of methylmercury. The USEPA derived an RfD of 0.1 microg/kg/day based on benchmark dose (BMD) modeling of the combined neurological endpoints reported for children exposed in utero. This RfD included an uncertainty factor of 10 to consider human pharmacokinetic variability and database limitations (lack of data on multigeneration effects or possible long-term sequelae of perinatal exposure). Alcoa signed an Administrative Order of Consent for the conduct of a remedial investigation/feasibility study (RI/FS) at their Point Comfort Operations and the adjacent Lavaca Bay in Texas to address the effects of historical discharges of mercury-containing wastewater. In cooperation with the Texas Natural Resource Conservation Commission and USEPA Region VI, Alcoa conducted a baseline risk assessment to assess potential risk to human health and the environment. As a part of this assessment, Alcoa pursued the development of a site-specific RfD for methylmercury to specifically address the potential human health effects associated with the ingestion of contaminated finfish and shellfish from Lavaca Bay. Application of the published USEPA RfD to this site is problematic; while the study underlying the RfD represented acute exposure to relatively high concentrations of methylmercury, the exposures of concern for the Point Comfort site are from the chronic consumption of relatively low concentrations of methylmercury in fish. Since the publication of the USEPA RfD, several analyses of chronic exposure to methylmercury in fish-eating populations have been reported. The purpose of the analysis reported here was to evaluate the possibility of deriving an RfD for methylmercury, specifically for the case of fish ingestion, on the basis of these new studies. In order to better support the risk-management decisions associated with developing a remediation approach for the site in question, the analysis was designed to provide information on the distribution of acceptable ingestion rates across a population, which could reasonably be expected to be consistent with the results of the epidemiological studies of other fish-eating populations. Based on a review of the available literature on the effects of methylmercury, a study conducted with a population in the Seychelles Islands was selected as the critical study for this analysis. The exposures to methylmercury in this population result from chronic, multigenerational ingestion of contaminated fish. This prospective study was carefully conducted and analyzed, included a large cohort of mother-infant pairs, and was relatively free of confounding factors.
The results of this study are essentially negative, and a no-observed-adverse-effect level (NOAEL) derived from the estimated exposures has recently been used by the Agency for Toxic Substances and Disease Registry (ATSDR) as the basis for a chronic oral minimal risk level (MRL) for methylmercury. In spite of the fact that no statistically significant effects were observed in this study, the data as reported are suitable for dose-response analysis using the BMD method. Evaluation of the BMD method used in this analysis, as well as in the current USEPA RfD, has demonstrated that the resulting 95% lower bound on the 10% benchmark dose (BMDL) represents a conservative estimate of the traditional NOAEL, and that it is superior to the use of "average" or "grouped" exposure estimates when dose-response information is available, as is the case for the Seychelles study. A more recent study in the Faroe Islands, which did report statistically significant associations between methylmercury exposure and neurological effects, could not be used for dose-response modeling due to inadequate reporting of the data and confounding from co-exposure to polychlorinated biphenyls (PCBs). BMD modeling over the wide range of neurological endpoints reported in the Seychelles study yielded a lowest BMDL for methylmercury in maternal hair of 21 ppm. This BMDL was then converted to an expected distribution of daily ingestion rates across a population using Monte Carlo analysis with a physiologically based pharmacokinetic (PBPK) model to evaluate the impact of interindividual variability. The resulting distribution of ingestion rates at the BMDL had a geometric mean of 1.60 microg/kg/day with a geometric standard deviation of 1.33; the 1st, 5th, and 10th percentiles of the distribution were 0.86, 1.04, and 1.15 microg/kg/day. In place of the use of an uncertainty factor of 3 for pharmacokinetic variability, as is done in the current RfD, one of these lower percentiles of the daily ingestion rate distribution provides a scientifically based, conservative basis for taking into consideration the impact of pharmacokinetic variability across the population. On the other hand, it was felt that an uncertainty factor of 3 for database limitations should be used in the current analysis. Although there can be high confidence in the benchmark-estimated NOAEL of 21 ppm in the Seychelles study, some results in the New Zealand and Faroe Islands studies could be construed to suggest the possibility of effects at maternal hair concentrations below 10 ppm. In addition, while concerns regarding the possibility of chronic sequelae are not supported by the available data, neither can they be absolutely ruled out. The use of an uncertainty factor of 3 is equivalent to using a NOAEL of 7 ppm in maternal hair, which provides additional protection against the possibility that effects could occur at lower concentrations in some populations. Based on the analysis described above, the distribution of acceptable daily ingestion rates (RfDs) recommended to serve as the basis for site-specific risk-management decisions at Alcoa's Point Comfort Operations ranges from approximately 0.3 to 1.1 microg/kg/day, with a population median (50th percentile) of 0.5 microg/kg/day. By analogy with USEPA guidelines for the use of percentiles in applications of distributions in exposure assessments, the 10th percentile provides a reasonably conservative measure. On this basis, a site-specific RfD of 0.4 microg/kg/day is recommended.
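
    As a rough consistency check on the distribution reported above, the percentiles of a lognormal ingestion-rate distribution can be reconstructed from the quoted geometric mean and geometric standard deviation; this sketch assumes lognormality and will not exactly reproduce the paper's Monte Carlo/PBPK output.

```python
from math import exp, log
from statistics import NormalDist

gm, gsd = 1.60, 1.33  # geometric mean and geometric SD of the ingestion-rate distribution (ug/kg/day)

for p in (0.01, 0.05, 0.10, 0.50):
    z = NormalDist().inv_cdf(p)    # standard-normal quantile
    rate = gm * exp(z * log(gsd))  # lognormal percentile
    print(f"{p:.0%} percentile: {rate:.2f} ug/kg/day")

# Prints roughly 0.82, 1.00, 1.11, and 1.60 ug/kg/day, close to the reported
# 0.86, 1.04, and 1.15; the small differences reflect the simplifying assumption.
```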

  14. A comprehensive analysis of sodium levels in the Canadian packaged food supply

    PubMed Central

    Arcand, JoAnne; Au, Jennifer T.C.; Schermel, Alyssa; L’Abbe, Mary R.

    2016-01-01

    Background: Population-wide sodium reduction strategies aim to reduce the cardiovascular burden of excess dietary sodium. Lowering sodium in packaged foods, which contribute the most sodium to the diet, is an important intervention to lower population intakes. Purpose: To determine sodium levels in Canadian packaged foods and evaluate the proportion of foods meeting sodium benchmark targets set by Health Canada. Methods: A cross-sectional analysis of 7234 packaged foods available in Canada in 2010–11. Sodium values were obtained from the Nutrition Facts table. Results: Overall, 51.4% of foods met one of the sodium benchmark levels: 11.5% met Phase 1, 11.1% met Phase 2, and 28.7% met 2016 goal (Phase 3) benchmarks. Food groups with the greatest proportion meeting goal benchmarks were dairy (52.0%) and breakfast cereals (42.2%). Overall 48.6% of foods did not meet any benchmark level and 25% of all products exceeded maximum levels. Meats (61.2%) and canned vegetables and legumes (29.6%) had the most products exceeding maximum levels. There was large variability in the range of sodium within and between food categories. Food categories highest in sodium (mg/serving) were dry, condensed and ready-to-serve soups (834 ± 256, 754 ± 163, and 636 ± 173, respectively), oriental noodles (783 ± 433), broth (642 ± 239), and frozen appetizers/sides (642 ± 292). Conclusion: These data provide a critical baseline assessment for monitoring sodium levels in Canadian foods. While some segments of the market are making progress towards sodium reduction, all sectors need encouragement to continue to reduce the amount of sodium added during food processing. PMID:24842740

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca; Department of Public Health Sciences, Queen's University, Kingston, Ontario; Department of Oncology, Queen's University, Kingston, Ontario

    Purpose: Palliative radiation therapy (PRT) benefits many patients with incurable cancer, but the overall need for PRT is unknown. Our primary objective was to estimate the appropriate rate of use of PRT in Ontario. Methods and Materials: The Ontario Cancer Registry identified patients who died of cancer in Ontario between 2006 and 2010. Comprehensive RT records were linked to the registry. Multivariate analysis identified social and health system-related factors affecting the use of PRT, enabling us to define a benchmark population of patients with unimpeded access to PRT. The proportion of cases treated at any time (PRT_lifetime), the proportion of cases treated in the last 2 years of life (PRT_2y), and the number of courses of PRT per thousand cancer deaths were measured in the benchmark population. These benchmarks were standardized to the characteristics of the overall population, and province-wide PRT rates were then compared to benchmarks. Results: Cases diagnosed at hospitals with no RT on-site, residents of poorer communities, and those who lived farther from an RT center were significantly less likely than others to receive PRT. However, availability of RT at the diagnosing hospital was the dominant factor. Neither socioeconomic status nor distance from home to the nearest RT center had a significant effect on the use of PRT in patients diagnosed at a hospital with RT facilities. The benchmark population therefore consisted of patients diagnosed at a hospital with RT facilities. The standardized benchmark for PRT_lifetime was 33.9%, and the corresponding province-wide rate was 28.5%. The standardized benchmark for PRT_2y was 32.4%, and the corresponding province-wide rate was 27.0%. The standardized benchmark for the number of courses of PRT per thousand cancer deaths was 652, and the corresponding province-wide rate was 542. Conclusions: Approximately one-third of patients who die of cancer in Ontario need PRT, but many of them are never treated.
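
    A minimal sketch of the standardization step described above: stratum-specific PRT rates observed in the benchmark population are weighted by the case mix of the overall population of cancer deaths; the strata, rates, and shares are illustrative assumptions, not the study's data.

```python
# Hypothetical strata (e.g. cancer-site groups) with benchmark-population PRT rates
# and each stratum's share of all cancer deaths in the overall population.
strata = {
    "lung":     {"benchmark_rate": 0.45, "population_share": 0.30},
    "breast":   {"benchmark_rate": 0.30, "population_share": 0.15},
    "prostate": {"benchmark_rate": 0.28, "population_share": 0.15},
    "other":    {"benchmark_rate": 0.28, "population_share": 0.40},
}

# Direct standardization: weighted average of stratum-specific benchmark rates.
standardized = sum(s["benchmark_rate"] * s["population_share"] for s in strata.values())
print(f"standardized benchmark PRT rate: {standardized:.1%}")
# A province-wide observed rate below this standardized benchmark would suggest unmet need.
```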

  16. SU-E-T-556: Monte Carlo Generated Dose Distributions for Orbital Irradiation Using a Single Anterior-Posterior Electron Beam and a Hanging Lens Shield

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duwel, D; Lamba, M; Elson, H

    Purpose: Various cancers of the eye are successfully treated with radiotherapy utilizing one anterior-posterior (A/P) beam that encompasses the entire content of the orbit. In such cases, a hanging lens shield can be used to spare dose to the radiosensitive lens of the eye to prevent cataracts. Methods: This research focused on Monte Carlo characterization of dose distributions resulting from a single A-P field to the orbit with a hanging shield in place. Monte Carlo codes were developed which calculated dose distributions for various electron radiation energies, hanging lens shield radii, shield heights above the eye, and beam spoiler configurations. Film dosimetry was used to benchmark the coding to ensure it was calculating relative dose accurately. Results: The Monte Carlo dose calculations indicated that lateral and depth dose profiles are insensitive to changes in shield height and electron beam energy. Dose deposition was sensitive to shield radius and beam spoiler composition and height above the eye. Conclusion: The use of a single A/P electron beam to treat cancers of the eye while maintaining adequate lens sparing is feasible. Shield radius should be customized to have the same radius as the patient's lens. A beam spoiler should be used if it is desired to substantially dose the eye tissues lying posterior to the lens in the shadow of the lens shield. The compromise between lens sparing and dose to diseased tissues surrounding the lens can be modulated by varying the beam spoiler thickness, spoiler material composition, and spoiler height above the eye. The sparing ratio is a metric that can be used to evaluate the compromise between lens sparing and dose to surrounding tissues. The higher the ratio, the more dose received by the tissues immediately posterior to the lens relative to the dose received by the lens.

  17. Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; J. Blair Briggs; David W. Nigg

    2009-11-01

    One of the challenges that today's new workforce of nuclear criticality safety engineers faces is the need to assess nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals with the opportunity to gain experience and enhance critical engineering skills.

  18. Track structure in radiation biology: theory and applications.

    PubMed

    Nikjoo, H; Uehara, S; Wilson, W E; Hoshi, M; Goodhead, D T

    1998-04-01

    A brief review is presented of the basic concepts in track structure, and the relative merits of various theoretical approaches adopted in Monte-Carlo track-structure codes are examined. In the second part of the paper, a formal cluster analysis is introduced to calculate cluster-distance distributions. Total experimental ionization cross-sections were least-squares fitted and compared with calculations by various theoretical methods. The Monte-Carlo track-structure code Kurbuc was used to examine and compare the spectrum of the secondary electrons generated by using functions given by Born-Bethe, Jain-Khare, Gryzinsky, Kim-Rudd, Mott and Vriens' theories. The cluster analysis in track structure was carried out using the k-means method and the Hartigan algorithm. Data are presented on experimental and calculated total ionization cross-sections: inverse mean free path (IMFP) as a function of electron energy used in Monte-Carlo track-structure codes; the spectrum of secondary electrons generated by different functions for 500 eV primary electrons; cluster analysis for 4 MeV and 20 MeV alpha-particles in terms of the frequency of total cluster energy to the root-mean-square (rms) radius of the cluster and differential distance distributions for a pair of clusters; and finally relative frequency distributions for energy deposited in DNA, single-strand breaks and double-strand breaks for 10 MeV/u protons, alpha-particles and carbon ions. There are a number of Monte-Carlo track-structure codes that have been developed independently, and the benchmarking presented in this paper allows a better choice of the theoretical method adopted in a track-structure code to be made. A systematic benchmarking of cross-sections and spectra of the secondary electrons shows differences between the codes at the atomic level, but such differences are not significant in biophysical modelling at the macromolecular level. Clustered-damage evaluation shows: that a substantial proportion of dose (~30%) is deposited by low-energy electrons; the majority of DNA damage lesions are of simple type; the complexity of damage increases with increased LET, while the total yield of strand breaks remains constant; and at high LET values nearly 70% of all double-strand breaks are of complex type.

  19. Marginal iodide deficiency and thyroid function: dose-response analysis for quantitative pharmacokinetic modeling.

    PubMed

    Gilbert, M E; McLanahan, E D; Hedge, J; Crofton, K M; Fisher, J W; Valentín-Blasini, L; Blount, B C

    2011-04-28

    Severe iodine deficiency (ID) results in adverse health outcomes and remains a benchmark for understanding the effects of developmental hypothyroidism. The implications of marginal ID, however, remain less well known. The current study examined the relationship between graded levels of ID in rats and serum thyroid hormones, thyroid iodine content, and urinary iodide excretion. The goals of this study were to provide parametric and dose-response information for development of a quantitative model of the thyroid axis. Female Long Evans rats were fed casein-based diets containing varying iodine (I) concentrations for 8 weeks. Diets were created by adding 975, 200, 125, 25, or 0 μg/kg I to the base diet (~25 μg I/kg chow) to produce 5 nominal I levels, ranging from excess (basal+added I, Treatment 1: 1000 μg I/kg chow) to deficient (Treatment 5: 25 μg I/kg chow). Food intake and body weight were monitored throughout, and on 2 consecutive days each week over the 8-week exposure period animals were placed in metabolism cages to capture urine. Food and water intake and body weight gain did not differ among treatment groups. Serum T4 was dose-dependently reduced relative to Treatment 1 with significant declines (19 and 48%) at the two lowest I groups, and no significant changes in serum T3 or TSH were detected. Increases in thyroid weight and decreases in thyroidal and urinary iodide content were observed as a function of decreasing I in the diet. Data were compared with predictions from a recently published biologically based dose-response (BBDR) model for ID. Relative to model predictions, female Long Evans rats under the conditions of this study appeared more resilient to low I intake. These results challenge existing models and provide essential information for development of quantitative BBDR models for ID during pregnancy and lactation. Published by Elsevier Ireland Ltd.

  20. Toxicity of Pb-contaminated soil to Japanese quail (Coturnix japonica) and the use of the blood-dietary Pb slope in risk assessment

    USGS Publications Warehouse

    Beyer, W. Nelson; Chen, Yu; Henry, Paula; May, Thomas; Mosby, David; Rattner, Barnett A.; Shearn-Bochsler, Valerie I.; Sprague, Daniel; Weber, John

    2014-01-01

    This study relates tissue concentrations and toxic effects of Pb in Japanese quail (Coturnix japonica) to the dietary exposure of soil-borne Pb associated with mining and smelting. From 0% to 12% contaminated soil, by weight, was added to 5 experimental diets (0.12 to 382 mg Pb/kg, dry wt) and fed to the quail for 6 weeks. Benchmark doses associated with a 50% reduction in delta-aminolevulinic acid dehydratase activity were 0.62 mg Pb/kg in the blood, dry wt, and 27 mg Pb/kg in the diet. Benchmark doses associated with a 20% increase in the concentration of erythrocyte protoporphyrin were 2.7 mg Pb/kg in the blood and 152 mg Pb/kg in the diet. The quail showed no other signs of toxicity (histopathological lesions, alterations in plasma–testosterone concentration, and body and organ weights). The relation of the blood Pb concentration to the soil Pb concentration was linear, with a slope of 0.013 mg Pb/kg of blood (dry wt) divided by mg Pb/kg of diet. We suggest that this slope is potentially useful in ecological risk assessments on birds in the same way that the intake slope factor is an important parameter in risk assessments of children exposed to Pb. The slope may also be used in a tissue-residue approach as an additional line of evidence in ecological risk assessment, supplementary to an estimate of hazard based on dietary toxicity reference values.
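
    A minimal sketch of estimating the blood-dietary Pb slope by least squares through the origin, one way to obtain a slope like the 0.013 reported above; the dietary and blood concentrations here are invented for illustration, not the study's measurements.

```python
import numpy as np

diet_pb  = np.array([0.12, 24.0, 96.0, 191.0, 382.0])  # mg Pb/kg diet (dry wt), illustrative
blood_pb = np.array([0.01, 0.30, 1.30, 2.40, 5.00])    # mg Pb/kg blood (dry wt), illustrative

# Least-squares slope for a line forced through the origin: slope = sum(x*y) / sum(x*x).
slope = np.sum(diet_pb * blood_pb) / np.sum(diet_pb ** 2)
print(f"blood:diet Pb slope ~ {slope:.3f} (mg/kg blood per mg/kg diet)")

# Tissue-residue use: predict blood Pb for a site-specific dietary exposure estimate.
print(f"predicted blood Pb at 150 mg/kg diet: {slope * 150:.2f} mg/kg")
```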

  1. WE-FG-BRA-03: Oxygen Interplay in Hypofractionated Radiotherapy: A Hidden Opportunity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kissick, M; Campos, D; Desai, V

    2016-06-15

    Purpose: Local oxygen during a radiotherapy fraction has been shown to change over a full range of the oxygen enhancement ratio (OER) during the same time scale as the treatment fraction. Interplay with local oxygen is then likely a concern, especially for hypofractionation. Our experiments show a strong role for metabolic dynamics, suggesting one could manipulate this interplay for more efficacious treatments. Methods: Two published experiments are presented with the same human head and neck cancer cell line (UM-SCC-22B). One is a cell-specific in vitro prompt response to a 10 Gy dose of orthovoltage radiation using fluorescence lifetime imaging (FLIM), benchmarked with a Seahorse assay. The other, in vivo, study uses autocorrelation analysis with blood oxygen level dependent magnetic resonance imaging (MRI-BOLD) on xenografts. In vivo results are verified with diffuse optics using spectral fitting and photoacoustic measurements. All these measurements are at high time resolution: sampling is one per minute. Results: Interplay happens when the radiosensitivity modulates at the same time scale as the radiation. These results show dynamics at these time scales. 1. The dominant time scale of the acute hypoxia in cell line xenografts is shown to be on the order of minutes to tens of minutes: similar to a metabolic oscillation known as the 'glycolytic oscillator.' 2. The radiation dose itself alters metabolism within minutes to tens of minutes also. Conclusion: These results vary with cell type. There is a possibility that special timing and dose levels could be used for radiation. Gating could be used for maximal oxygen during treatment. There is an analogy to the interplay discussions with tumor motion, except that an oxygen interplay could more likely be patient-specific at a more fundamental level.

  2. Development of biomonitoring equivalents for barium in urine and plasma for interpreting human biomonitoring data.

    PubMed

    Poddalgoda, Devika; Macey, Kristin; Assad, Henry; Krishnan, Kannan

    2017-06-01

    The objectives of the present work were: (1) to assemble population-level biomonitoring data to identify the concentrations of urinary and plasma barium across the general population; and (2) to derive biomonitoring equivalents (BEs) for barium in urine and plasma in order to facilitate the interpretation of barium concentrations in these biological matrices. In population-level biomonitoring studies, barium has been measured in urine in the U.S. (NHANES study), but no such data on plasma barium levels were identified. The BE values for plasma and urine were derived from U.S. EPA's reference dose (RfD) of 0.2 mg/kg bw/d, based on a lower confidence limit on the benchmark dose (BMDL05) of 63 mg/kg bw/d. The plasma BE (9 μg Ba/L) was derived by regression analysis of the near-steady-state plasma concentrations associated with the administered doses in animals exposed to barium chloride dihydrate in drinking water for 2 years in an NTP study. Using a human urinary excretion fraction of 0.023, a BE for urinary barium (0.19 mg/L or 0.25 mg/g creatinine) was derived for US EPA's RfD. The median and the 95th percentile barium urine concentrations of the general population in the U.S. are below the BE determined in this study, indicating that the population exposure to inorganic barium is expected to be below the exposure guidance value of 0.2 mg/kg bw/d. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
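
    A worked reconstruction of the urinary BE arithmetic, assuming a 70 kg adult body weight and a daily urine output of about 1.7 L; the RfD and excretion fraction are from the abstract, while the body weight and urine volume are assumptions that may differ from the values the authors used.

```python
rfd = 0.2           # mg/kg bw/day, U.S. EPA reference dose for barium
bw = 70.0           # kg, assumed adult body weight
fue = 0.023         # urinary excretion fraction (from the abstract)
urine_volume = 1.7  # L/day, assumed daily urine output

daily_urinary_excretion = rfd * bw * fue           # mg Ba excreted per day at the RfD
be_urine = daily_urinary_excretion / urine_volume  # mg/L
print(f"urinary BE ~ {be_urine:.2f} mg/L")         # ~0.19 mg/L, consistent with the reported BE
```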

  3. Dose-dependent transitions in Nrf2-mediated adaptive response and related stress responses to hypochlorous acid in mouse macrophages

    PubMed Central

    Woods, Courtney G.; Fu, Jingqi; Xue, Peng; Hou, Yongyong; Pluta, Linda J.; Yang, Longlong; Zhang, Qiang; Thomas, Russell S.; Andersen, Melvin E.; Pi, Jingbo

    2009-01-01

    Hypochlorous acid (HOCl) is potentially an important source of cellular oxidative stress. Human HOCl exposure can occur from chlorine gas inhalation or from endogenous sources of HOCl, such as respiratory burst by phagocytes. Transcription factor Nrf2 is a key regulator of cellular redox status and serves as a primary source of defense against oxidative stress. We recently demonstrated that HOCl activates Nrf2-mediated antioxidant response in cultured mouse macrophages in a biphasic manner. In an effort to determine whether Nrf2 pathways overlap with other stress pathways, gene expression profiling was performed in RAW 264.7 macrophages exposed to HOCl using whole genome mouse microarrays. Benchmark dose (BMD) analysis on gene expression data revealed that Nrf2-mediated antioxidant response and protein ubiquitination were the most sensitive biological pathways that were activated in response to low concentrations of HOCl (< 0.35 mM). Genes involved in chromatin architecture maintenance and DNA-dependent transcription were also sensitive to very low doses. Moderate concentrations of HOCl (0.35 to 1.4 mM) caused maximal activation of the Nrf2-pathway and innate immune response genes, such as IL-1β, IL-6, IL-10 and chemokines. At even higher concentrations of HOCl (2.8 to 3.5 mM) there was a loss of Nrf2-target gene expression with increased expression of numerous heat shock and histone cluster genes, AP-1-family genes, cFos and Fra1 and DNA damage-inducible Gadd45 genes. These findings confirm an Nrf2-centric mechanism of action of HOCl in mouse macrophages and provide evidence of interactions between Nrf2, inflammatory, and other stress pathways. PMID:19376150

  4. Risk-based criteria to support validation of detection methods for drinking water and air.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonell, M.; Bhattacharyya, M.; Finster, M.

    2009-02-18

    This report was prepared to support the validation of analytical methods for threat contaminants under the U.S. Environmental Protection Agency (EPA) National Homeland Security Research Center (NHSRC) program. It is designed to serve as a resource for certain applications of benchmark and fate information for homeland security threat contaminants. The report identifies risk-based criteria from existing health benchmarks for drinking water and air for potential use as validation targets. The focus is on benchmarks for chronic public exposures. The priority sources are standard EPA concentration limits for drinking water and air, along with oral and inhalation toxicity values. Many contaminants identified as homeland security threats to drinking water or air would convert to other chemicals within minutes to hours of being released. For this reason, a fate analysis has been performed to identify potential transformation products and removal half-lives in air and water so appropriate forms can be targeted for detection over time. The risk-based criteria presented in this report to frame method validation are expected to be lower than actual operational targets based on realistic exposures following a release. Note that many target criteria provided in this report are taken from available benchmarks without assessing the underlying toxicological details. That is, although the relevance of the chemical form and analogues are evaluated, the toxicological interpretations and extrapolations conducted by the authoring organizations are not. It is also important to emphasize that such targets in the current analysis are not health-based advisory levels to guide homeland security responses. This integrated evaluation of chronic public benchmarks and contaminant fate has identified more than 200 risk-based criteria as method validation targets across numerous contaminants and fate products in drinking water and air combined. The gap in directly applicable values is considerable across the full set of threat contaminants, so preliminary indicators were developed from other well-documented benchmarks to serve as a starting point for validation efforts. By this approach, at least preliminary context is available for water or air, and sometimes both, for all chemicals on the NHSRC list that was provided for this evaluation. This means that a number of concentrations presented in this report represent indirect measures derived from related benchmarks or surrogate chemicals, as described within the many results tables provided in this report.

  5. TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.

    2014-06-01

    The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted for TRIPOLI-4® assessment on fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In that previous ITER benchmark, however, only the neutron wall loading was analyzed; its main purpose was to present the MCAM (the FDS Team CAD import tool) extension for TRIPOLI-4®. Starting from this work, a more extended benchmark has been performed addressing the estimation of neutron flux, nuclear heating in the shielding blankets, and tritium production rate in the European TBMs (HCLL and HCPB), and it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show a good agreement between the two codes. Discrepancies are mainly within the Monte Carlo codes' statistical error.

  6. Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreements could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.

  7. A benchmarking tool to evaluate computer tomography perfusion infarct core predictions against a DWI standard.

    PubMed

    Cereda, Carlo W; Christensen, Søren; Campbell, Bruce Cv; Mishra, Nishant K; Mlynash, Michael; Levi, Christopher; Straka, Matus; Wintermark, Max; Bammer, Roland; Albers, Gregory W; Parsons, Mark W; Lansberg, Maarten G

    2016-10-01

    Differences in research methodology have hampered the optimization of Computer Tomography Perfusion (CTP) for identification of the ischemic core. We aim to optimize CTP core identification using a novel benchmarking tool. The benchmarking tool consists of an imaging library and a statistical analysis algorithm to evaluate the performance of CTP. The tool was used to optimize and evaluate an in-house developed CTP-software algorithm. Imaging data of 103 acute stroke patients were included in the benchmarking tool. Median time from stroke onset to CT was 185 min (IQR 180-238), and the median time between completion of CT and start of MRI was 36 min (IQR 25-79). Volumetric accuracy of the CTP-ROIs was optimal at an rCBF threshold of <38%; at this threshold, the mean difference was 0.3 ml (SD 19.8 ml), the mean absolute difference was 14.3 (SD 13.7) ml, and CTP was 67% sensitive and 87% specific for identification of DWI positive tissue voxels. The benchmarking tool can play an important role in optimizing CTP software as it provides investigators with a novel method to directly compare the performance of alternative CTP software packages. © The Author(s) 2015.
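
    A minimal sketch of the voxel-wise evaluation underlying such a benchmarking tool: an rCBF map is thresholded to predict the ischemic core and compared against a co-registered binary DWI lesion mask; the arrays, the 38% threshold handling, and the voxel volume are illustrative assumptions, not the tool's implementation.

```python
import numpy as np

def evaluate_core_prediction(rcbf, dwi_core, threshold=0.38, voxel_ml=0.008):
    """Compare a thresholded rCBF core prediction with a binary DWI core mask."""
    pred = rcbf < threshold                # predicted core: rCBF below 38% of normal
    tp = np.sum(pred & dwi_core)
    fp = np.sum(pred & ~dwi_core)
    fn = np.sum(~pred & dwi_core)
    tn = np.sum(~pred & ~dwi_core)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    volume_difference_ml = (pred.sum() - dwi_core.sum()) * voxel_ml
    return sensitivity, specificity, volume_difference_ml

# Toy example: random rCBF values and a DWI mask defined by a hidden "true" threshold.
rng = np.random.default_rng(2)
rcbf = rng.random((64, 64, 20))
dwi = rcbf < 0.35
print(evaluate_core_prediction(rcbf, dwi))
```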

  8. A general theory of effect size, and its consequences for defining the benchmark response (BMR) for continuous endpoints.

    PubMed

    Slob, Wout

    2017-04-01

    A general theory on effect size for continuous data predicts a relationship between maximum response and within-group variation of biological parameters, which is empirically confirmed by results from dose-response analyses of 27 different biological parameters. The theory shows how effect sizes observed in distinct biological parameters can be compared and provides a basis for a generic definition of small, intermediate and large effects. While the theory is useful for experimental science in general, it has specific consequences for risk assessment: it solves the current debate on the appropriate metric for the Benchmark response in continuous data. The theory shows that scaling the BMR expressed as a percent change in means to the maximum response (in the way specified) automatically takes "natural variability" into account. Thus, the theory supports the underlying rationale of the BMR 1 SD. For various reasons, it is, however, recommended to use a BMR in terms of a percent change that is scaled to maximum response and/or within group variation (averaged over studies), as a single harmonized approach.
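
    To make the two BMR metrics discussed above concrete, this sketch computes a benchmark dose from an assumed continuous dose-response curve under a percent-change BMR and under a 1 SD BMR; the exponential model, its parameters, and the within-group SD are illustrative assumptions, and the sketch does not implement the scaling to maximum response proposed in the paper.

```python
from math import log

# Illustrative fitted continuous dose-response model: mean(d) = a * exp(-b * d)
a, b = 100.0, 0.02  # background mean and assumed slope parameter
sd_within = 8.0     # assumed within-group standard deviation

def bmd_percent_change(bmr_frac):
    """Dose at which the mean decreases by a given fraction of the background mean."""
    return -log(1.0 - bmr_frac) / b

def bmd_one_sd():
    """Dose at which the mean shifts by one within-group SD from background."""
    return -log(1.0 - sd_within / a) / b

print(f"BMD at a 5% change in mean : {bmd_percent_change(0.05):.1f}")
print(f"BMD at a 1 SD change in mean: {bmd_one_sd():.1f}")
```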

  9. Whole-body to tissue concentration ratios for use in biota dose assessments for animals.

    PubMed

    Yankovich, Tamara L; Beresford, Nicholas A; Wood, Michael D; Aono, Tasuo; Andersson, Pål; Barnett, Catherine L; Bennett, Pamela; Brown, Justin E; Fesenko, Sergey; Fesenko, J; Hosseini, Ali; Howard, Brenda J; Johansen, Mathew P; Phaneuf, Marcel M; Tagami, Keiko; Takata, Hyoe; Twining, John R; Uchida, Shigeo

    2010-11-01

    Environmental monitoring programs often measure contaminant concentrations in animal tissues consumed by humans (e.g., muscle). By comparison, demonstration of the protection of biota from the potential effects of radionuclides involves a comparison of whole-body doses to radiological dose benchmarks. Consequently, methods for deriving whole-body concentration ratios based on tissue-specific data are required to make best use of the available information. This paper provides a series of look-up tables with whole-body:tissue-specific concentration ratios for non-human biota. Focus was placed on relatively broad animal categories (including molluscs, crustaceans, freshwater fishes, marine fishes, amphibians, reptiles, birds and mammals) and commonly measured tissues (specifically, bone, muscle, liver and kidney). Depending upon organism, whole-body to tissue concentration ratios were derived for between 12 and 47 elements. The whole-body to tissue concentration ratios can be used to estimate whole-body concentrations from tissue-specific measurements. However, we recommend that any given whole-body to tissue concentration ratio should not be used if the value falls between 0.75 and 1.5. Instead, a value of one should be assumed.
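
    A minimal sketch of applying the look-up ratios described above, including the stated rule that a tabulated whole-body:tissue ratio between 0.75 and 1.5 should be replaced by a value of one; the example concentration and ratio are invented placeholders, not values from the paper's tables.

```python
def whole_body_concentration(tissue_conc, ratio):
    """Estimate a whole-body activity concentration from a tissue measurement.

    If the whole-body:tissue concentration ratio falls between 0.75 and 1.5,
    a ratio of one is assumed instead of the tabulated value, per the paper's
    recommendation.
    """
    effective_ratio = 1.0 if 0.75 < ratio < 1.5 else ratio
    return tissue_conc * effective_ratio

# Hypothetical example: a radionuclide measured in fish muscle at 40 Bq/kg with an
# assumed whole-body:muscle ratio of 0.9 (replaced by 1.0 under the rule above).
print(whole_body_concentration(40.0, 0.9))  # -> 40.0 Bq/kg whole body
```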

  10. Estimating human-equivalent no observed adverse-effect levels for VOCs (volatile organic compounds) based on minimal knowledge of physiological parameters. Technical paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Overton, J.H.; Jarabek, A.M.

    1989-01-01

    The U.S. EPA advocates the assessment of health-effects data and calculation of inhaled reference doses as benchmark values for gauging systemic toxicity of inhaled gases. The assessment often requires an inter- or intra-species dose extrapolation from no-observed-adverse-effect-level (NOAEL) exposure concentrations in animals to human-equivalent NOAEL exposure concentrations. To achieve this, a dosimetric extrapolation procedure was developed based on the form or type of equations that describe the uptake and disposition of inhaled volatile organic compounds (VOCs) in physiologically based pharmacokinetic (PB-PK) models. The procedure assumes allometric scaling of most physiological parameters and that the value of the time-integrated human arterial-blood concentration must be limited to no more than that of the experimental animals. The scaling assumption replaces the need for most parameter values and allows the derivation of a simple formula for dose extrapolation of VOCs that gives equivalent or more conservative exposure concentration values than those that would be obtained using a PB-PK model in which scaling was assumed.
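
    The simple formula itself is not reproduced in this record, but one common form of this kind of dosimetric adjustment (a sketch under stated assumptions, not necessarily the authors' exact equation) duration-adjusts the animal NOAEL and scales it by the animal-to-human blood:air partition coefficient ratio, capped at one so the human-equivalent value is never less conservative than the animal value. All inputs below are hypothetical.

    def noael_hec(noael_ppm, hours_per_day, days_per_week, lambda_animal, lambda_human):
        """Human-equivalent NOAEL concentration for a systemically acting VOC (sketch)."""
        noael_adj = noael_ppm * (hours_per_day / 24.0) * (days_per_week / 7.0)
        ratio = min(lambda_animal / lambda_human, 1.0)   # blood:air partition coefficients
        return noael_adj * ratio

    # hypothetical inputs: 100 ppm rat NOAEL, 6 h/day, 5 days/week exposure
    print(noael_hec(100.0, 6, 5, lambda_animal=15.0, lambda_human=18.0))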

  11. RF transient analysis and stabilization of the phase and energy of the proposed PIP-II LINAC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, J. P.; Chase, B. E.

    This paper describes a recent effort to develop and benchmark a simulation tool for the analysis of RF transients and their compensation in an H- linear accelerator. Existing tools in this area either focus on electron LINACs or lack fundamental details about the LLRF system that are necessary to provide realistic performance estimates. In our paper we begin with a discussion of our computational models followed by benchmarking with existing beam-dynamics codes and measured data. We then analyze the effect of RF transients and their compensation in the PIP-II LINAC, followed by an analysis of calibration errors and how a Newton's Method based feedback scheme can be used to regulate the beam energy to within the specified limits.

  12. Analysis of b quark pair production signal from neutral 2HDM Higgs bosons at future linear colliders

    NASA Astrophysics Data System (ADS)

    Hashemi, Majid; MahdaviKhorrami, Mostafa

    2018-06-01

    In this paper, b quark pair production events are analyzed as a source of neutral Higgs bosons of the two Higgs doublet model type I at linear colliders. The production mechanism is e+e- → Z(*) → HA → b b̄ b b̄, assuming a fully hadronic final state. The aim of the analysis is to identify both CP-even and CP-odd Higgs bosons in different benchmark points accommodating moderate boson masses. Due to pair production of Higgs bosons, the analysis is most suitable for a linear collider operating at √s = 1 TeV. Results show that in the selected benchmark points, signal peaks are observable in the b-jet pair invariant mass distributions at an integrated luminosity of 500 fb^-1.

  13. Benthic algae of benchmark streams in agricultural areas of eastern Wisconsin

    USGS Publications Warehouse

    Scudder, Barbara C.; Stewart, Jana S.

    2001-01-01

    Multivariate analyses indicated that environmental factors at multiple scales affect algae. Although two-way indicator species analysis (TWINSPAN), detrended correspondence analysis (DCA), and canonical correspondence analysis (CCA) generally separated sites according to RHU, only DCA ordination indicated a separation of sites according to ecoregion. Environmental variables correlated with DCA axes 1 and 2, and therefore indicated as important explanatory factors for algal distribution and abundance, were factors related to stream size, basin land use/cover, geomorphology, hydrogeology, and riparian disturbance. CCA analyses with a more limited set of environmental variables indicated that pH, average width of natural riparian vegetation (segment scale), basin land use/cover and Q/Q2 were the most important variables affecting the distribution and relative abundance of benthic algae at the 20 benchmark streams.

  14. Analysis of 100Mb/s Ethernet for the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Fineberg, Samuel A.; Pedretti, Kevin T.; Kutler, Paul (Technical Monitor)

    1997-01-01

    We evaluate the performance of a Fast Ethernet network configured with a single large switch, a single hub, and a 4x4 2D torus topology in a testbed cluster of "commodity" Pentium Pro PCs. We also evaluated a mixed network composed of Ethernet hubs and switches. An MPI collective communication benchmark and the NAS Parallel Benchmarks version 2.2 (NPB2) show that the torus network performs best for all sizes that we were able to test (up to 16 nodes). For larger networks the Ethernet switch outperforms the hub, though its performance is far less than peak. The hub/switch combination tests indicate that the NAS Parallel Benchmarks are relatively insensitive to hub densities of less than 7 nodes per hub.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petasecca, M., E-mail: marcop@uow.edu.au; Newall, M. K.; Aldosari, A. H.

    Purpose: Spatial and temporal resolutions are two of the most important features for quality assurance instrumentation of motion adaptive radiotherapy modalities. The goal of this work is to characterize the performance of the 2D high spatial resolution monolithic silicon diode array named “MagicPlate-512” for quality assurance of stereotactic body radiation therapy (SBRT) and stereotactic radiosurgery (SRS) combined with a dynamic multileaf collimator (MLC) tracking technique for motion compensation. Methods: MagicPlate-512 is used in combination with the movable platform HexaMotion and a research version of the radiofrequency tracking system Calypso driving MLC tracking software. The authors reconstruct 2D dose distributions of small square field beams in three modalities: static conditions; mimicking the temporal movement pattern of a lung tumor; and tracking the moving target while the MLC compensates almost instantaneously for the tumor displacement. Use of Calypso in combination with MagicPlate-512 requires proper radiofrequency interference shielding. The impact of the shielding on dosimetry has been simulated by GEANT4 and verified experimentally. Temporal and spatial resolutions of the dosimetry system also allow for accurate verification of segments of complex stereotactic radiotherapy plans with identification of the instant and location where a certain dose is delivered. This feature allows for retrospective temporal reconstruction of the delivery process and easy identification of errors in the tracking or the multileaf collimator driving systems. A sliding MLC wedge combined with the lung motion pattern has been measured. The ability of MagicPlate-512 (MP512) to perform 2D dose mapping in all three modes of operation was benchmarked against EBT3 film. Results: Full width at half maximum and penumbra of the moving and stationary dose profiles measured by EBT3 film and MagicPlate-512 confirm that motion has a significant impact on the dose distribution. Motion, no motion, and motion with MLC tracking profiles agreed within 1 and 0.4 mm, respectively, for all field sizes tested. Use of the electromagnetic tracking system generates a fluctuation of the detector baseline of up to 10% of the full-scale signal, requiring a proper shielding strategy. MagicPlate-512 is also able to reconstruct the dose variation pulse-by-pulse in each pixel of the detector. An analysis of the dose transients with motion and motion with tracking shows that the tracking feedback algorithm used for this experiment can effectively compensate only for the slower transient components. The fast-changing components of the organ motion contribute only a discrepancy of the order of 15% in the penumbral region, while the slower components can change the dose profile by up to 75% of the expected dose. Conclusions: MagicPlate-512 is shown to be, potentially, a valid alternative to film or 2D ionization chambers for quality assurance dosimetry in SRS or SBRT. Its high spatial and temporal resolutions allow for accurate reconstruction of the profile in any conditions, with motion and with tracking of the motion. It shows excellent performance in reconstructing the dose deposition in real time or retrospectively as a function of time for detailed analysis of the effect of motion in a specific pixel or area of interest.

  16. SU-E-J-08: Comparison of Unintended Radiation Doses to Organs at Risk Resulting From the Out-Of-Field Therapeutic Beams and From Image-Guidance X-Ray Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, G; Wang, L

    Purpose: The unintended radiation dose to organs at risk (OAR) can be contributed by imaging guidance procedures as well as by leakage and scatter of therapeutic beams. This study compares the imaging dose with the unintended out-of-field therapeutic dose to patient sensitive organs. Methods: The Monte Carlo EGSnrc user codes, BEAMnrc and DOSXYZnrc, were used to simulate kV X-ray sources from imaging devices as well as the therapeutic IMRT/VMAT beams, and to calculate doses to the target and OARs on patient treatment planning CT images. The accuracy of the Monte Carlo simulations was benchmarked against measurements in phantoms. The dose-volume histogram was utilized in analyzing the patient organ doses. Results: The dose resulting from Standard Head kV-CBCT scans to bone and soft tissues ranges from 0.7 to 1.1 cGy and from 0.03 to 0.3 cGy, respectively. The dose resulting from Thorax scans on the chest to bone and soft tissues ranges from 1.1 to 1.8 cGy and from 0.3 to 0.6 cGy, respectively. The dose resulting from Pelvis scans on the abdomen to bone and soft tissues ranges from 3.2 to 4.2 cGy and from 1.2 to 2.2 cGy, respectively. The out-of-field doses to OAR are sensitive to the distance between the treated target and the OAR. For a typical Head-and-Neck IMRT/VMAT treatment the out-of-field doses to the eyes are 1–3% of the target dose, or 2–6 cGy per fraction. Conclusion: The imaging doses to OAR are predictable based on the imaging protocols used when OARs are within the imaged volume, and can be estimated and accounted for by using tabulated values. The unintended out-of-field doses are proportional to the target dose, depend strongly on the distance between the treated target and the OAR, and are generally higher compared with the imaging dose. This work was partially supported by Varian research grant VUMC40590.

  17. Simultaneous delivery time and aperture shape optimization for the volumetric-modulated arc therapy (VMAT) treatment planning problem

    NASA Astrophysics Data System (ADS)

    Mahnam, Mehdi; Gendreau, Michel; Lahrichi, Nadia; Rousseau, Louis-Martin

    2017-07-01

    In this paper, we propose a novel heuristic algorithm for the volumetric-modulated arc therapy treatment planning problem, optimizing the trade-off between delivery time and treatment quality. We present a new mixed integer programming model in which the multi-leaf collimator leaf positions, gantry speed, and dose rate are determined simultaneously. Our heuristic is based on column generation; the aperture configuration is modeled in the columns and the dose distribution and time restriction in the rows. To reduce the number of voxels and increase the efficiency of the master model, we aggregate similar voxels using a clustering technique. The efficiency of the algorithm and the treatment quality are evaluated on a benchmark clinical prostate cancer case. The computational results show that a high-quality treatment is achievable using a four-thread CPU. Finally, we analyze the effects of the various parameters and two leaf-motion strategies.

  18. Occurrence of 210Po in marine macroalgae inhabiting a coastal nuclear zone, southeast coast of India.

    PubMed

    Praveen Pole, R P; Feroz Khan, M; Godwin Wesley, S

    2017-04-01

    The activity concentration of 210Po in 26 species of marine macroalgae found along the coast near a nuclear installation on the southeast coast of India was studied. Phaeophytes were found to accumulate the maximum 210Po concentration and chlorophytes the minimum. The average 210Po activity concentrations in the three groups were 6.2 ± 2.5 Bq kg^-1 (Chlorophyta), 14.4 ± 5.2 Bq kg^-1 (Phaeophyta) and 11.3 ± 3.9 Bq kg^-1 (Rhodophyta). A statistically significant variation in accumulation was found between groups (p < 0.05). The unweighted dose rate to these algae due to 210Po was calculated to be well below the benchmark dose-rate limit of 10 μGy h^-1. Copyright © 2017 Elsevier Ltd. All rights reserved.
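
    A simple arithmetic sketch (Python) of the screening comparison described above; the dose conversion coefficient is an assumed placeholder, not a value from the paper.

    activity_bq_per_kg = 14.4               # mean Po-210 in Phaeophyta, from the abstract
    dcc_ugy_per_h_per_bq_per_kg = 3.0e-4    # assumed internal dose conversion coefficient
    benchmark_ugy_per_h = 10.0

    dose_rate = activity_bq_per_kg * dcc_ugy_per_h_per_bq_per_kg
    print(f"{dose_rate:.2e} uGy/h; below benchmark: {dose_rate < benchmark_ugy_per_h}")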

  19. Combining uncertainty factors in deriving human exposure levels of noncarcinogenic toxicants.

    PubMed

    Kodell, R L; Gaylor, D W

    1999-01-01

    Acceptable levels of human exposure to noncarcinogenic toxicants in environmental and occupational settings generally are derived by reducing experimental no-observed-adverse-effect levels (NOAELs) or benchmark doses (BDs) by a product of uncertainty factors (Barnes and Dourson, Ref. 1). These factors are presumed to ensure safety by accounting for uncertainty in dose extrapolation, uncertainty in duration extrapolation, differential sensitivity between humans and animals, and differential sensitivity among humans. The common default value for each uncertainty factor is 10. This paper shows how estimates of means and standard deviations of the approximately log-normal distributions of individual uncertainty factors can be used to estimate percentiles of the distribution of the product of uncertainty factors. An appropriately selected upper percentile, for example, 95th or 99th, of the distribution of the product can be used as a combined uncertainty factor to replace the conventional product of default factors.
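
    A sketch (Python, illustrative geometric means and standard deviations rather than the paper's fitted values) of the combination rule: the product of approximately log-normal uncertainty factors is log-normal with summed log-means and log-variances, and an upper percentile of that product can replace the default product of 10s.

    import math

    # (geometric mean, geometric standard deviation) for each uncertainty factor -- illustrative
    factors = [
        (3.0, 2.0),   # interspecies
        (3.0, 2.0),   # intraspecies
        (2.0, 1.8),   # subchronic-to-chronic
    ]
    mu = sum(math.log(gm) for gm, _ in factors)
    sigma = math.sqrt(sum(math.log(gsd) ** 2 for _, gsd in factors))
    z95 = 1.6449                                    # standard normal 95th percentile
    combined_uf_95 = math.exp(mu + z95 * sigma)
    print(f"95th percentile of the combined uncertainty factor: {combined_uf_95:.0f}")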

  20. A flexible Monte Carlo tool for patient or phantom specific calculations: comparison with preliminary validation measurements

    NASA Astrophysics Data System (ADS)

    Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.

    2008-02-01

    The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut-off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly-used functions are applied to represent the off-axis softening, increasing primary fluence with increasing angle ('the horn effect'), and electron contamination. The patient dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).

  1. Dose audit for patients undergoing two common radiography examinations with digital radiology systems

    PubMed Central

    İnal, Tolga; Ataç, Gökçe

    2014-01-01

    PURPOSE We aimed to determine the radiation doses delivered to patients undergoing general examinations using computed or digital radiography systems in Turkey. MATERIALS AND METHODS Radiographs of 20 patients undergoing posteroanterior chest X-ray and of 20 patients undergoing anteroposterior kidney-ureter-bladder radiography were evaluated in five X-ray rooms at four local hospitals in the Ankara region. Currently, almost all radiology departments in Turkey have switched from conventional radiography systems to computed radiography or digital radiography systems. Patient dose was measured for both systems. The results were compared with published diagnostic reference levels (DRLs) from the European Union and International Atomic Energy Agency. RESULTS The average entrance surface doses (ESDs) for chest examinations exceeded established international DRLs at two of the X-ray rooms in a hospital with computed radiography. All of the other ESD measurements were approximately equal to or below the DRLs for both examinations in all of the remaining hospitals. Improper adjustment of the exposure parameters, uncalibrated automatic exposure control systems, and failure of the technologists to choose exposure parameters properly were problems we noticed during the study. CONCLUSION This study is an initial attempt at establishing local DRL values for digital radiography systems, and will provide a benchmark so that the authorities can establish reference dose levels for diagnostic radiology in Turkey. PMID:24317331

  2. A modified method for measuring antibiotic use in healthcare settings: implications for antibiotic stewardship and benchmarking.

    PubMed

    Aldeyab, Mamoon A; McElnay, James C; Scott, Michael G; Lattyak, William J; Darwish Elhajji, Feras W; Aldiab, Motasem A; Magee, Fidelma A; Conlon, Geraldine; Kearney, Mary P

    2014-04-01

    To determine whether adjusting the denominator of the common hospital antibiotic use measurement unit (defined daily doses/100 bed-days) by including age-adjusted comorbidity score (100 bed-days/age-adjusted comorbidity score) would result in more accurate and meaningful assessment of hospital antibiotic use. The association between the monthly sum of age-adjusted comorbidity and monthly antibiotic use was measured using time-series analysis (January 2008 to June 2012). For the purposes of conducting internal benchmarking, two antibiotic usage datasets were constructed, i.e. 2004-07 (first study period) and 2008-11 (second study period). Monthly antibiotic use was normalized per 100 bed-days and per 100 bed-days/age-adjusted comorbidity score. Results showed that antibiotic use had significant positive relationships with the sum of age-adjusted comorbidity score (P = 0.0004). The results also showed that there was a negative relationship between antibiotic use and (i) alcohol-based hand rub use (P = 0.0370) and (ii) clinical pharmacist activity (P = 0.0031). Normalizing antibiotic use per 100 bed-days contributed to a comparative usage rate of 1.31, i.e. the average antibiotic use during the second period was 31% higher than during the first period. However, normalizing antibiotic use per 100 bed-days per age-adjusted comorbidity score resulted in a comparative usage rate of 0.98, i.e. the average antibiotic use was 2% lower in the second study period. Importantly, the latter comparative usage rate is independent of differences in patient density and case mix characteristics between the two studied populations. The proposed modified antibiotic measure provides an innovative approach to compare variations in antibiotic prescribing while taking account of patient case mix effects.
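
    An arithmetic sketch (Python, invented monthly totals chosen only to roughly reproduce the comparative rates quoted above) of the proposed denominator adjustment and the between-period comparison.

    def usage_rate(ddd, bed_days, comorbidity_score=None):
        """DDD per 100 bed-days, optionally further divided by the age-adjusted comorbidity score."""
        rate = ddd / bed_days * 100.0
        return rate / comorbidity_score if comorbidity_score else rate

    period1 = dict(ddd=42_000, bed_days=60_000, comorbidity=5_000)   # hypothetical totals
    period2 = dict(ddd=55_000, bed_days=60_000, comorbidity=6_700)

    conventional = usage_rate(period2["ddd"], period2["bed_days"]) / \
                   usage_rate(period1["ddd"], period1["bed_days"])
    adjusted = usage_rate(period2["ddd"], period2["bed_days"], period2["comorbidity"]) / \
               usage_rate(period1["ddd"], period1["bed_days"], period1["comorbidity"])
    print(f"comparative rate per 100 bed-days:         {conventional:.2f}")
    print(f"comparative rate adjusted for comorbidity: {adjusted:.2f}")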

  3. Simulation Studies for Inspection of the Benchmark Test with PATRASH

    NASA Astrophysics Data System (ADS)

    Shimosaki, Y.; Igarashi, S.; Machida, S.; Shirakata, M.; Takayama, K.; Noda, F.; Shigaki, K.

    2002-12-01

    In order to delineate the halo-formation mechanisms in a typical FODO lattice, a 2-D simulation code PATRASH (PArticle TRAcking in a Synchrotron for Halo analysis) has been developed. The electric field originating from the space charge is calculated by the Hybrid Tree code method. Benchmark tests utilizing the three simulation codes ACCSIM, PATRASH and SIMPSONS were carried out. The results were confirmed to be in fair agreement with each other. The details of the PATRASH simulation are discussed with some examples.

  4. Benchmarking Customer Service Practices of Air Cargo Carriers: A Case Study Approach

    DTIC Science & Technology

    1994-09-01

    customer toll-free hotlines, comment and complaint analysis, and consumer advisory panels (Zemke and Schaaf, 1989:31-34). The correct use of any or all of... customer service criteria. The research also provides a host of customer service criteria that the researchers find important to most consumers.

  5. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  6. Deepthi Vaidhynathan | NREL

    Science.gov Websites

    Works in the Complex Systems Simulation and Optimization Group on performance analysis and benchmarking. Research interests: high-performance computing, embedded systems, microprocessors and microcontrollers.

  7. Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy

    PubMed Central

    Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki

    2013-01-01

    We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787

  8. A frontier analysis approach for benchmarking hospital performance in the treatment of acute myocardial infarction.

    PubMed

    Stanford, Robert E

    2004-05-01

    This paper uses a non-parametric frontier model and adaptations of the concepts of cross-efficiency and peer-appraisal to develop a formal methodology for benchmarking provider performance in the treatment of Acute Myocardial Infarction (AMI). Parameters used in the benchmarking process are the rates of proper recognition of indications of six standard treatment processes for AMI; the decision making units (DMUs) to be compared are the Medicare eligible hospitals of a particular state; the analysis produces an ordinal ranking of individual hospital performance scores. The cross-efficiency/peer-appraisal calculation process is constructed to accommodate DMUs that experience no patients in some of the treatment categories. While continuing to rate highly the performances of DMUs which are efficient in the Pareto-optimal sense, our model produces individual DMU performance scores that correlate significantly with good overall performance, as determined by a comparison of the sums of the individual DMU recognition rates for the six standard treatment processes. The methodology is applied to data collected from 107 state Medicare hospitals.
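
    A stripped-down sketch (Python with SciPy, synthetic recognition rates) of a basic DEA efficiency score per hospital, using a single constant input and the six process recognition rates as outputs; the paper's cross-efficiency/peer-appraisal refinements and its handling of empty treatment categories are not reproduced here.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    rates = rng.uniform(0.5, 1.0, size=(10, 6))   # 10 hospitals x 6 AMI process recognition rates

    def ccr_efficiency(k):
        # maximise u . y_k  subject to  u . y_j <= 1 for every hospital j,  u >= 0
        res = linprog(c=-rates[k], A_ub=rates, b_ub=np.ones(len(rates)),
                      bounds=[(0, None)] * rates.shape[1], method="highs")
        return -res.fun

    print([round(ccr_efficiency(k), 3) for k in range(len(rates))])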

  9. Progression-free survival as primary endpoint in randomized clinical trials of targeted agents for advanced renal cell carcinoma. Correlation with overall survival, benchmarking and power analysis.

    PubMed

    Bria, Emilio; Massari, Francesco; Maines, Francesca; Pilotto, Sara; Bonomi, Maria; Porta, Camillo; Bracarda, Sergio; Heng, Daniel; Santini, Daniele; Sperduti, Isabella; Giannarelli, Diana; Cognetti, Francesco; Tortora, Giampaolo; Milella, Michele

    2015-01-01

    A correlation, power and benchmarking analysis between progression-free survival and overall survival (PFS, OS) in randomized trials of targeted agents or immunotherapy for advanced renal cell carcinoma (RCC) was performed to provide a practical tool for clinical trial design. For the 1st line of treatment, a significant correlation was observed between 6-month PFS and 12-month OS, between 3-month PFS and 9-month OS, and between the distributions of the cumulative PFS and OS estimates. According to the regression equation derived for 1st-line targeted agents, 7859, 2873, 712, and 190 patients would be required to detect a 3%, 5%, 10% and 20% PFS advantage at 6 months, corresponding to an absolute increase in 12-month OS rates of 2%, 3%, 6% and 11%, respectively. These data support PFS as a reliable endpoint for patients with advanced RCC receiving up-front therapies. Benchmarking and power analyses, on the basis of the updated survival expectations, may represent practical tools for future trial design. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
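
    A generic two-proportion sample-size sketch (Python, normal approximation, two-sided alpha = 0.05, power = 0.80) of the kind of power calculation reported above; the baseline 6-month PFS rate is assumed, so the output will not reproduce the paper's exact figures.

    import math

    def n_per_arm(p_control, delta, z_alpha=1.96, z_beta=0.8416):
        """Patients per arm to detect an absolute difference delta in a proportion."""
        p_exp = p_control + delta
        p_bar = (p_control + p_exp) / 2.0
        numerator = (z_alpha * math.sqrt(2.0 * p_bar * (1.0 - p_bar))
                     + z_beta * math.sqrt(p_control * (1.0 - p_control)
                                          + p_exp * (1.0 - p_exp))) ** 2
        return math.ceil(numerator / delta ** 2)

    for delta in (0.03, 0.05, 0.10, 0.20):
        print(f"6-month PFS advantage of {delta:.0%}: {n_per_arm(0.40, delta)} patients per arm")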

  10. The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures

    PubMed Central

    Eltzner, Benjamin; Wollnik, Carina; Gottschlich, Carsten; Huckemann, Stephan; Rehfeldt, Florian

    2015-01-01

    A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. PMID:25996921

  11. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, predominantly by applying traditional optimization theory; the cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. This technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation (genetic algorithms and differential evolution) to optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.

  12. Investigating the Transonic Flutter Boundary of the Benchmark Supercritical Wing

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Chwalowski, Pawel

    2017-01-01

    This paper builds on the computational aeroelastic results published previously and generated in support of the second Aeroelastic Prediction Workshop for the NASA Benchmark Supercritical Wing configuration. The computational results are obtained using FUN3D, an unstructured-grid Reynolds-Averaged Navier-Stokes solver developed at the NASA Langley Research Center. The analysis results focus on understanding the dip in the transonic flutter boundary at a single Mach number (0.74), exploring an angle-of-attack range of -1 to 8 degrees and dynamic pressures from wind-off to beyond flutter onset. The rigid analysis results are examined for insights into the behavior of the aeroelastic system. Both static and dynamic aeroelastic simulation results are also examined.

  13. Children's Lead Exposure: A Multimedia Modeling Analysis to Guide Public Health Decision-Making.

    PubMed

    Zartarian, Valerie; Xue, Jianping; Tornero-Velez, Rogelio; Brown, James

    2017-09-12

    Drinking water and other sources for lead are the subject of public health concerns around the Flint, Michigan, drinking water and East Chicago, Indiana, lead in soil crises. In 2015, the U.S. Environmental Protection Agency (EPA)'s National Drinking Water Advisory Council (NDWAC) recommended establishment of a "health-based, household action level" for lead in drinking water based on children's exposure. The primary objective was to develop a coupled exposure-dose modeling approach that can be used to determine what drinking water lead concentrations keep children's blood lead levels (BLLs) below specified values, considering exposures from water, soil, dust, food, and air. Related objectives were to evaluate the coupled model estimates using real-world blood lead data, to quantify relative contributions by the various media, and to identify key model inputs. A modeling approach using the EPA's Stochastic Human Exposure and Dose Simulation (SHEDS)-Multimedia and Integrated Exposure Uptake and Biokinetic (IEUBK) models was developed using available data. This analysis for the U.S. population of young children probabilistically simulated multimedia exposures and estimated relative contributions of media to BLLs across all population percentiles for several age groups. Modeled BLLs compared well with nationally representative BLLs (0-23% relative error). Analyses revealed relative importance of soil and dust ingestion exposure pathways and associated Pb intake rates; water ingestion was also a main pathway, especially for infants. This methodology advances scientific understanding of the relationship between lead concentrations in drinking water and BLLs in children. It can guide national health-based benchmarks for lead and related community public health decisions. https://doi.org/10.1289/EHP1605.

  14. Children’s Lead Exposure: A Multimedia Modeling Analysis to Guide Public Health Decision-Making

    PubMed Central

    Xue, Jianping; Tornero-Velez, Rogelio; Brown, James

    2017-01-01

    Background: Drinking water and other sources for lead are the subject of public health concerns around the Flint, Michigan, drinking water and East Chicago, Indiana, lead in soil crises. In 2015, the U.S. Environmental Protection Agency (EPA)’s National Drinking Water Advisory Council (NDWAC) recommended establishment of a “health-based, household action level” for lead in drinking water based on children’s exposure. Objectives: The primary objective was to develop a coupled exposure–dose modeling approach that can be used to determine what drinking water lead concentrations keep children’s blood lead levels (BLLs) below specified values, considering exposures from water, soil, dust, food, and air. Related objectives were to evaluate the coupled model estimates using real-world blood lead data, to quantify relative contributions by the various media, and to identify key model inputs. Methods: A modeling approach using the EPA’s Stochastic Human Exposure and Dose Simulation (SHEDS)-Multimedia and Integrated Exposure Uptake and Biokinetic (IEUBK) models was developed using available data. This analysis for the U.S. population of young children probabilistically simulated multimedia exposures and estimated relative contributions of media to BLLs across all population percentiles for several age groups. Results: Modeled BLLs compared well with nationally representative BLLs (0–23% relative error). Analyses revealed relative importance of soil and dust ingestion exposure pathways and associated Pb intake rates; water ingestion was also a main pathway, especially for infants. Conclusions: This methodology advances scientific understanding of the relationship between lead concentrations in drinking water and BLLs in children. It can guide national health-based benchmarks for lead and related community public health decisions. https://doi.org/10.1289/EHP1605 PMID:28934096

  15. A comprehensive assessment of somatic mutation detection in cancer using whole-genome sequencing

    PubMed Central

    Alioto, Tyler S.; Buchhalter, Ivo; Derdak, Sophia; Hutter, Barbara; Eldridge, Matthew D.; Hovig, Eivind; Heisler, Lawrence E.; Beck, Timothy A.; Simpson, Jared T.; Tonon, Laurie; Sertier, Anne-Sophie; Patch, Ann-Marie; Jäger, Natalie; Ginsbach, Philip; Drews, Ruben; Paramasivam, Nagarajan; Kabbe, Rolf; Chotewutmontri, Sasithorn; Diessl, Nicolle; Previti, Christopher; Schmidt, Sabine; Brors, Benedikt; Feuerbach, Lars; Heinold, Michael; Gröbner, Susanne; Korshunov, Andrey; Tarpey, Patrick S.; Butler, Adam P.; Hinton, Jonathan; Jones, David; Menzies, Andrew; Raine, Keiran; Shepherd, Rebecca; Stebbings, Lucy; Teague, Jon W.; Ribeca, Paolo; Giner, Francesc Castro; Beltran, Sergi; Raineri, Emanuele; Dabad, Marc; Heath, Simon C.; Gut, Marta; Denroche, Robert E.; Harding, Nicholas J.; Yamaguchi, Takafumi N.; Fujimoto, Akihiro; Nakagawa, Hidewaki; Quesada, Víctor; Valdés-Mas, Rafael; Nakken, Sigve; Vodák, Daniel; Bower, Lawrence; Lynch, Andrew G.; Anderson, Charlotte L.; Waddell, Nicola; Pearson, John V.; Grimmond, Sean M.; Peto, Myron; Spellman, Paul; He, Minghui; Kandoth, Cyriac; Lee, Semin; Zhang, John; Létourneau, Louis; Ma, Singer; Seth, Sahil; Torrents, David; Xi, Liu; Wheeler, David A.; López-Otín, Carlos; Campo, Elías; Campbell, Peter J.; Boutros, Paul C.; Puente, Xose S.; Gerhard, Daniela S.; Pfister, Stefan M.; McPherson, John D.; Hudson, Thomas J.; Schlesner, Matthias; Lichter, Peter; Eils, Roland; Jones, David T. W.; Gut, Ivo G.

    2015-01-01

    As whole-genome sequencing for cancer genome analysis becomes a clinical tool, a full understanding of the variables affecting sequencing analysis output is required. Here using tumour-normal sample pairs from two different types of cancer, chronic lymphocytic leukaemia and medulloblastoma, we conduct a benchmarking exercise within the context of the International Cancer Genome Consortium. We compare sequencing methods, analysis pipelines and validation methods. We show that using PCR-free methods and increasing sequencing depth to ∼100 × shows benefits, as long as the tumour:control coverage ratio remains balanced. We observe widely varying mutation call rates and low concordance among analysis pipelines, reflecting the artefact-prone nature of the raw data and lack of standards for dealing with the artefacts. However, we show that, using the benchmark mutation set we have created, many issues are in fact easy to remedy and have an immediate positive impact on mutation detection accuracy. PMID:26647970

  16. Full cost accounting in the analysis of separated waste collection efficiency: A methodological proposal.

    PubMed

    D'Onza, Giuseppe; Greco, Giulio; Allegrini, Marco

    2016-02-01

    Recycling implies additional costs for separated municipal solid waste (MSW) collection. The aim of the present study is to propose and implement a management tool - the full cost accounting (FCA) method - to calculate the full collection costs of different types of waste. Our analysis aims for a better understanding of the difficulties of putting FCA into practice in the MSW sector. We propose a FCA methodology that uses standard cost and actual quantities to calculate the collection costs of separate and undifferentiated waste. Our methodology allows cost efficiency analysis and benchmarking, overcoming problems related to firm-specific accounting choices, earnings management policies and purchase policies. Our methodology allows benchmarking and variance analysis that can be used to identify the causes of off-standards performance and guide managers to deploy resources more efficiently. Our methodology can be implemented by companies lacking a sophisticated management accounting system. Copyright © 2015 Elsevier Ltd. All rights reserved.
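
    A minimal sketch (Python, made-up figures) of the standard-cost-times-actual-quantity idea behind the proposed FCA method: a standard collection cost per tonne for each waste stream, and a simple variance against actual cost for benchmarking.

    standard_cost_per_tonne = {"separated": 95.0, "undifferentiated": 60.0}   # assumed EUR/t
    actual_tonnes = {"separated": 1_200, "undifferentiated": 3_400}
    actual_cost = {"separated": 123_500.0, "undifferentiated": 198_000.0}     # EUR

    for stream, tonnes in actual_tonnes.items():
        standard_cost = standard_cost_per_tonne[stream] * tonnes
        variance = actual_cost[stream] - standard_cost            # positive = over standard
        print(f"{stream}: standard {standard_cost:,.0f} EUR, variance {variance:+,.0f} EUR")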

  17. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    PubMed

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  18. A preliminary study of in-house Monte Carlo simulations: an integrated Monte Carlo verification system.

    PubMed

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hideki; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  19. Critical Assessment of Metagenome Interpretation – a benchmark of computational metagenomics software

    PubMed Central

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.

    2018-01-01

    In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performances, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888

  20. Optimization of a solid-state electron spin qubit using Gate Set Tomography

    DOE PAGES

    Dehollain, Juan P.; Muhonen, Juha T.; Blume-Kohout, Robin J.; ...

    2016-10-13

    Here, state-of-the-art qubit systems are reaching the gate fidelities required for scalable quantum computation architectures. Further improvement in the fidelity of quantum gates demands characterization and benchmarking protocols that are efficient, reliable and extremely accurate. Ideally, a benchmarking protocol should also provide information on how to rectify residual errors. Gate Set Tomography (GST) is one such protocol, designed to give a detailed characterization of as-built qubits. We implemented GST on a high-fidelity electron-spin qubit confined by a single 31P atom in 28Si. The results reveal systematic errors that a randomized benchmarking analysis could measure but not identify, whereas GST indicated the need for improved calibration of the length of the control pulses. After introducing this modification, we measured a new benchmark average gate fidelity of 99.942(8)%, an improvement on the previous value of 99.90(2)%. Furthermore, GST revealed high levels of non-Markovian noise in the system, which will need to be understood and addressed when the qubit is used within a fault-tolerant quantum computation scheme.

  1. SU-F-BRD-07: Fast Monte Carlo-Based Biological Optimization of Proton Therapy Treatment Plans for Thyroid Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan Chan Tseung, H; Ma, J; Ma, D

    2015-06-15

    Purpose: To demonstrate the feasibility of fast Monte Carlo (MC) based biological planning for the treatment of thyroid tumors in spot-scanning proton therapy. Methods: Recently, we developed a fast and accurate GPU-based MC simulation of proton transport that was benchmarked against Geant4.9.6 and used as the dose calculation engine in a clinically applicable GPU-accelerated IMPT optimizer. Besides dose, it can simultaneously score the dose-averaged LET (LETd), which makes fast biological dose (BD) estimates possible. To convert from LETd to BD, we used a linear relation based on cellular irradiation data. Given a thyroid patient with a 93 cc tumor volume, we created a 2-field IMPT plan in Eclipse (Varian Medical Systems). This plan was re-calculated with our MC to obtain the BD distribution. A second 5-field plan was made with our in-house optimizer, using pre-generated MC dose and LETd maps. Constraints were placed to maintain the target dose to within 25% of the prescription, while maximizing the BD. The plan optimization and calculation of dose and LETd maps were performed on a GPU cluster. The conventional IMPT and biologically optimized plans were compared. Results: The mean target physical and biological doses from our biologically optimized plan were, respectively, 5% and 14% higher than those from the MC re-calculation of the IMPT plan. Dose sparing to critical structures in our plan was also improved. The biological optimization, including the initial dose and LETd map calculations, can be completed in a clinically viable time (∼30 minutes) on a cluster of 25 GPUs. Conclusion: Taking advantage of GPU acceleration, we created an MC-based, biologically optimized treatment plan for a thyroid patient. Compared to a standard IMPT plan, a 5% increase in the target’s physical dose resulted in ∼3 times as much increase in the BD. Biological planning was thus effective in escalating the target BD.
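
    A hedged sketch (Python, placeholder slope) of a voxel-wise biological-dose estimate from physical dose and dose-averaged LET using a generic linear relation BD = D(1 + c*LETd); the abstract states that a linear relation was used, but the coefficient below is an assumption, not the paper's fitted value.

    import numpy as np

    c = 0.04                                   # assumed slope, (keV/um)^-1
    dose_gy = np.array([1.8, 2.0, 2.1])        # physical dose per voxel (illustrative)
    letd_kev_um = np.array([2.5, 4.0, 8.0])    # dose-averaged LET per voxel (illustrative)

    biological_dose = dose_gy * (1.0 + c * letd_kev_um)
    print(biological_dose)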

  2. Maximal Unbiased Benchmarking Data Sets for Human Chemokine Receptors and Comparative Analysis.

    PubMed

    Xia, Jie; Reid, Terry-Elinor; Wu, Song; Zhang, Liangren; Wang, Xiang Simon

    2018-05-29

    Chemokine receptors (CRs) have long been druggable targets for the treatment of inflammatory diseases and HIV-1 infection. As a powerful technique, virtual screening (VS) has been widely applied to identifying small-molecule leads for modern drug targets including CRs. For rational selection from a wide variety of VS approaches, ligand enrichment assessment based on a benchmarking data set has become an indispensable practice. However, the lack of versatile benchmarking sets for the whole CR family that are able to unbiasedly evaluate every single approach, including both structure- and ligand-based VS, somewhat hinders modern drug discovery efforts. To address this issue, we constructed Maximal Unbiased Benchmarking Data sets for human Chemokine Receptors (MUBD-hCRs) using our recently developed tool MUBD-DecoyMaker. The MUBD-hCRs encompass 13 subtypes out of 20 chemokine receptors, composed of 404 ligands and 15756 decoys so far, and are readily expandable in the future. It has been thoroughly validated that MUBD-hCRs ligands are chemically diverse while the decoys are maximally unbiased in terms of "artificial enrichment" and "analogue bias". In addition, we studied the performance of MUBD-hCRs, in particular the CXCR4 and CCR5 data sets, in ligand enrichment assessments of both structure- and ligand-based VS approaches in comparison with other benchmarking data sets available in the public domain, and demonstrated that MUBD-hCRs are very capable of designating the optimal VS approach. MUBD-hCRs is a unique and maximally unbiased benchmarking set that covers the major CR subtypes so far.

  3. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation

    NASA Astrophysics Data System (ADS)

    Magro, G.; Molinelli, S.; Mairani, A.; Mirandola, A.; Panizza, D.; Russo, S.; Ferrari, A.; Valvo, F.; Fossati, P.; Ciocca, M.

    2015-09-01

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different lateral spread modelling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases but very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.

  4. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation.

    PubMed

    Magro, G; Molinelli, S; Mairani, A; Mirandola, A; Panizza, D; Russo, S; Ferrari, A; Valvo, F; Fossati, P; Ciocca, M

    2015-09-07

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo(®) TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus(®) chamber. An EBT3(®) film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different lateral spread modelling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases but very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.

  5. Assessment of radiation shield integrity of DD/DT fusion neutron generator facilities by Monte Carlo and experimental methods

    NASA Astrophysics Data System (ADS)

    Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.

    2015-01-01

    DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators is essential for ensuring radiological protection of the personnel involved with their operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Center (BARC), Mumbai. Verification and validation of the shielding adequacy were carried out by measuring the neutron and gamma dose rates at various locations inside and outside the neutron generator hall during different operational conditions, both for 2.5-MeV and 14.1-MeV neutrons, and comparing them with theoretical simulations. The calculated and experimental dose rates were found to agree within a maximum deviation of 20% at certain locations. This study has served in benchmarking the Monte Carlo simulation methods adopted for the shield design of such facilities. It has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields of up to 1 × 10^10 n/s.

  6. Modeling antimicrobial tolerance and treatment of heterogeneous biofilms.

    PubMed

    Zhao, Jia; Seeluangsawat, Paisa; Wang, Qi

    2016-12-01

    A multiphasic, hydrodynamic model for spatially heterogeneous biofilms based on the phase-field formulation is developed and applied to analyze antimicrobial tolerance of biofilms, by acknowledging the existence of persistent and susceptible cells in the total population of bacteria. The model implements a new conversion rate between persistent and susceptible cells, and its homogeneous dynamics are benchmarked quantitatively against a known experiment. It is then discretized and solved on graphics processing units (GPUs) in 3-D space and time. With the model, biofilm development and antimicrobial treatment of biofilms in a flow cell are investigated numerically. Model predictions agree qualitatively well with available experimental observations. Specifically, numerical results demonstrate that: (i) in a flow cell, nutrient, diffused in solvent and transported by hydrodynamics, has an apparent impact on persister formation and thereby on the antimicrobial persistence of biofilms; (ii) dosing antimicrobial agents inside biofilms is more effective than dosing through diffusion in solvent; (iii) periodic dosing is less effective in antimicrobial treatment of biofilms in a nutrient-deficient environment than in a nutrient-sufficient environment. This model provides us with a simulation tool to analyze mechanisms of biofilm tolerance to antimicrobial agents and to derive potentially optimal dosing strategies for biofilm control and treatment. Copyright © 2016 Elsevier Inc. All rights reserved.
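
    A minimal well-mixed (homogeneous) sketch (Python) of susceptible/persister dynamics under antimicrobial dosing of the kind such models track; the rate constants are assumed placeholders, not the paper's calibrated values, and the full spatial phase-field model is not reproduced.

    def simulate(hours=24.0, dt=0.01, dose_on=True):
        S, P = 1.0, 1e-3              # normalised susceptible / persister fractions
        growth, kill = 0.6, 2.0       # per hour (assumed)
        k_sp, k_ps = 0.02, 0.05       # switching rates S->P and P->S (assumed)
        for _ in range(int(hours / dt)):
            a = 1.0 if dose_on else 0.0
            dS = growth * S * (1.0 - a) - kill * a * S - k_sp * S + k_ps * P
            dP = k_sp * S - k_ps * P  # persisters neither grow nor are killed here
            S, P = max(S + dS * dt, 0.0), max(P + dP * dt, 0.0)
        return S, P

    print("with dosing:   ", simulate(dose_on=True))    # persisters dominate the survivors
    print("without dosing:", simulate(dose_on=False))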

  7. Customized recommendations for production management clusters of North American automatic milking systems.

    PubMed

    Tremblay, Marlène; Hess, Justin P; Christenson, Brock M; McIntyre, Kolby K; Smink, Ben; van der Kamp, Arjen J; de Jong, Lisanne G; Döpfer, Dörte

    2016-07-01

    Automatic milking systems (AMS) are implemented in a variety of situations and environments. Consequently, there is a need to characterize individual farming practices and regional challenges to streamline management advice and objectives for producers. Benchmarking is often used in the dairy industry to compare farms by computing percentile ranks of the production values of groups of farms. Grouping for conventional benchmarking is commonly limited to the use of a few factors such as farms' geographic region or breed of cattle. We hypothesized that herds' production data and management information could be clustered in a meaningful way using cluster analysis and that this clustering approach would yield better peer groups of farms than benchmarking methods based on criteria such as country, region, breed, or breed and region. By applying mixed latent-class model-based cluster analysis to 529 North American AMS dairy farms with respect to 18 significant risk factors, 6 clusters were identified. Each cluster (i.e., peer group) represented unique management styles, challenges, and production patterns. When compared with peer groups based on criteria similar to the conventional benchmarking standards, the 6 clusters better predicted milk produced (kilograms) per robot per day. Each cluster represented a unique management and production pattern that requires specialized advice. For example, cluster 1 farms were those that recently installed AMS robots, whereas cluster 3 farms (the most northern farms) fed high amounts of concentrates through the robot to compensate for low-energy feed in the bunk. In addition to general recommendations for farms within a cluster, individual farms can generate their own specific goals by comparing themselves to farms within their cluster. This is very similar to conventional benchmarking but adds the specific characteristics of the peer group, resulting in better farm management advice. The improvement offered by cluster analysis lies in its multivariable approach and in the fact that production units can be compared either within a cluster or between clusters, as desired. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
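
    As a rough, hedged illustration of the model-based clustering idea, the sketch below groups hypothetical AMS farm records with a plain Gaussian mixture; the three feature columns and the use of BIC to select the number of peer groups are assumptions standing in for the paper's mixed latent-class analysis of 18 risk factors.

        # Illustrative model-based clustering of AMS farm records; a plain
        # Gaussian mixture stands in for the paper's mixed latent-class approach,
        # and the feature names/values are hypothetical.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        n_farms = 200
        X = np.column_stack([
            rng.normal(60, 10, n_farms),    # cows per robot
            rng.normal(2.6, 0.4, n_farms),  # milkings per cow per day
            rng.normal(5.0, 1.5, n_farms),  # kg concentrate fed through the robot
        ])

        # choose the number of clusters by BIC, then assign each farm to a peer group
        bics = {k: GaussianMixture(k, random_state=0).fit(X).bic(X) for k in range(2, 8)}
        best_k = min(bics, key=bics.get)
        labels = GaussianMixture(best_k, random_state=0).fit_predict(X)
        print("clusters:", best_k, "farms per cluster:", np.bincount(labels))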

  8. Understanding and benchmarking health service achievement of policy goals for chronic disease

    PubMed Central

    2012-01-01

    Background Key challenges in benchmarking health service achievement of policy goals in areas such as chronic disease are: 1) developing indicators and understanding how policy goals might work as indicators of service performance; 2) developing methods for economically collecting and reporting stakeholder perceptions; 3) combining and sharing data about the performance of organizations; 4) interpreting outcome measures; 5) obtaining actionable benchmarking information. This study aimed to explore how a new Boolean-based small-N method from the social sciences—Qualitative Comparative Analysis or QCA—could contribute to meeting these internationally shared challenges. Methods A ‘multi-value QCA’ (MVQCA) analysis was conducted of data from 24 senior staff at 17 randomly selected services for chronic disease, who provided perceptions of 1) whether government health services were improving their achievement of a set of statewide policy goals for chronic disease and 2) the efficacy of state health office actions in influencing this improvement. The analysis produced summaries of configurations of perceived service improvements. Results Most respondents observed improvements in most areas but uniformly good improvements across services were not perceived as happening (regardless of whether respondents identified a state health office contribution to that improvement). The sentinel policy goal of using evidence to develop service practice was not achieved at all in four services and appears to be reliant on other kinds of service improvements happening. Conclusions The QCA method suggested theoretically plausible findings and an approach that, with further development, could help meet the five benchmarking challenges. In particular, it suggests that achievement of one policy goal may be reliant on achievement of another goal in complex ways that the literature has not yet fully accommodated but which could help prioritize policy goals. The weaknesses of QCA lie wherever traditional big-N statistical methods are needed and feasible, and in its more complex findings, which are harder to validate empirically. It should be considered a potentially valuable adjunct method for benchmarking complex health policy goals such as those for chronic disease. PMID:23020943

  9. SU-C-BRC-01: A Monte Carlo Study of Out-Of-Field Doses From Cobalt-60 Teletherapy Units Intended for Historical Correlations of Dose to Normal Tissue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petroccia, H; Olguin, E; Culberson, W

    2016-06-15

    Purpose: Innovations in radiotherapy treatments, such as dynamic IMRT, VMAT, and SBRT/SRS, result in larger proportions of low-dose regions where normal tissues are exposed to low dose levels. Low doses of radiation have been linked to secondary cancers and cardiac toxicities. The AAPM TG Committee No. 158, entitled ‘Measurements and Calculations of Doses outside the Treatment Volume from External-Beam Radiation Therapy’, has been formed to review the dosimetry of non-target and out-of-field exposures using experimental and computational approaches. Studies on historical patients can provide comprehensive information about secondary effects from out-of-field doses when combined with long-term patient follow-up, thus providing significant insight into projecting future outcomes of patients undergoing modern-day treatments. Methods: We present a Monte Carlo model of a Theratron-1000 cobalt-60 teletherapy unit, which historically treated patients at the University of Florida, as a means of determining doses outside the primary beam. Experimental data for a similar Theratron-1000 were obtained at the University of Wisconsin’s ADCL to benchmark the model for out-of-field dosimetry. An Exradin A12 ion chamber and TLD100 chips were used to measure doses in an extended water phantom to 60 cm outside the primary field at 5 and 10 cm depths. Results: Comparison between simulated and experimental measurements of PDDs and lateral profiles shows good agreement for in-field and out-of-field doses. At 10 cm away from the edge of a 6×6, 10×10, and 20×20 cm2 field, relative out-of-field doses were measured in the range of 0.5% to 3% of the dose measured at 5 cm depth along the CAX. Conclusion: Out-of-field doses can be as high as 90 to 180 cGy assuming historical prescription doses of 30 to 60 Gy and should be considered when correlating late effects with normal tissue dose.

  10. Root cause analysis of laboratory turnaround times for patients in the emergency department.

    PubMed

    Fernandes, Christopher M B; Worster, Andrew; Hill, Stephen; McCallum, Catherine; Eva, Kevin

    2004-03-01

    Laboratory investigations are essential to patient care and are conducted routinely in emergency departments (EDs). This study reports the turnaround times at an academic, tertiary care ED, using root cause analysis to identify potential areas of improvement. Our objectives were to compare the laboratory turnaround times with established benchmarks and identify root causes for delays. Turnaround and process event times for a consecutive sample of hemoglobin and potassium measurements were recorded during an 8-day study period using synchronized time stamps. A log transformation (ln [minutes + 1]) was performed to normalize the time data, which were then compared with established benchmarks using one-sample t tests. The turnaround time for hemoglobin was significantly less than the established benchmark (n = 140, t = -5.69, p < 0.001) and that of potassium was significantly greater (n = 121, t = 12.65, p < 0.001). The hemolysis rate was 5.8%, with 0.017% of samples needing recollection. Causes of delays included order-processing time, a high proportion (43%) of tests performed on patients who had been admitted but were still in the ED waiting for a bed, and excessive laboratory process times for potassium. The turnaround time for hemoglobin (18 min) met the established benchmark, but that for potassium (49 min) did not. Root causes for delay were order-processing time, excessive queue and instrument times for potassium and volume of tests for admitted patients. Further study of these identified causes of delays is required to see whether laboratory TATs can be reduced.
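
    The ln(minutes + 1) transform followed by a one-sample t-test against a benchmark, as described above, can be sketched as follows; the sample turnaround times and the 45-minute benchmark are invented for illustration.

        # One-sample t-test of log-transformed turnaround times against a
        # benchmark, mirroring the ln(minutes + 1) transform described above.
        # The sample values and the 45-minute benchmark are made up.
        import numpy as np
        from scipy import stats

        tat_minutes = np.array([32, 41, 55, 60, 38, 47, 52, 66, 44, 58], dtype=float)
        benchmark_minutes = 45.0

        log_tat = np.log(tat_minutes + 1.0)
        t_stat, p_value = stats.ttest_1samp(log_tat, np.log(benchmark_minutes + 1.0))
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}")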

  11. 2015/2016 Quality Risk Management Benchmarking Survey.

    PubMed

    Waldron, Kelly; Ramnarine, Emma; Hartman, Jeffrey

    2017-01-01

    This paper investigates the concept of quality risk management (QRM) maturity as it applies to the pharmaceutical and biopharmaceutical industries, using the results and analysis from a QRM benchmarking survey conducted in 2015 and 2016. QRM maturity can be defined as the effectiveness and efficiency of a quality risk management program, moving beyond "check-the-box" compliance with guidelines such as ICH Q9 Quality Risk Management, to explore the value QRM brings to business and quality operations. While significant progress has been made towards full adoption of QRM principles and practices across industry, the benefits of QRM have not yet been fully realized. The results of the QRM Benchmarking Survey indicate that the pharmaceutical and biopharmaceutical industries are approximately halfway along the journey towards full QRM maturity. LAY ABSTRACT: The management of risks associated with medicinal product quality and patient safety is an important focus for the pharmaceutical and biopharmaceutical industries. These risks are identified, analyzed, and controlled through a defined process called quality risk management (QRM), which seeks to protect the patient from potential quality-related risks. This paper summarizes the outcomes of a comprehensive survey of industry practitioners performed in 2015 and 2016 that aimed to benchmark the level of maturity with regard to the application of QRM. The survey results and subsequent analysis revealed that the pharmaceutical and biopharmaceutical industries have made significant progress in the management of quality risks over the last ten years, and they are roughly halfway towards reaching full maturity of QRM. © PDA, Inc. 2017.

  12. Scale-4 Analysis of Pressurized Water Reactor Critical Configurations: Volume 2-Sequoyah Unit 2 Cycle 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, S.M.

    1995-01-01

    The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations reported herein is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (k{sub eff}) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of three reactor critical configurations for Sequoyah Unit 2 Cycle 3. This unit and cycle were chosen because of their relevance to spent fuel benchmark applications: (1) the unit had a significantly long downtime of 2.7 years during the middle of cycle (MOC) 3, and (2) the core consisted entirely of burned fuel at the MOC restart. The first benchmark critical calculation was the MOC restart at hot, full-power (HFP) critical conditions. The other two benchmark critical calculations were the beginning-of-cycle (BOC) startup at both hot, zero-power (HZP) and HFP critical conditions. These latter calculations were used to check for consistency in the calculated results for different burnups and downtimes. The k{sub eff} results were in the range of 1.00014 to 1.00259 with a standard deviation of less than 0.001.

  13. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  14. Analysis of dosimetry from the H.B. Robinson unit 2 pressure vessel benchmark using RAPTOR-M3G and ALPAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, G.A.

    2011-07-01

    Document available in abstract form only, full text of document follows: The dosimetry from the H. B. Robinson Unit 2 Pressure Vessel Benchmark is analyzed with a suite of Westinghouse-developed codes and data libraries. The radiation transport from the reactor core to the surveillance capsule and ex-vessel locations is performed by RAPTOR-M3G, a parallel deterministic radiation transport code that calculates high-resolution neutron flux information in three dimensions. The cross-section library used in this analysis is the ALPAN library, an Evaluated Nuclear Data File (ENDF)/B-VII.0-based library designed for reactor dosimetry and fluence analysis applications. Dosimetry is evaluated with the industry-standard SNLRML reactor dosimetry cross-section data library. (authors)

  15. EnergyIQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MILLS, EVAN; MATTHE, PAUL; STOUFER, MARTIN

    2016-10-06

    EnergyIQ, the first "action-oriented" benchmarking tool for non-residential buildings, provides a standardized opportunity assessment based on benchmarking results, along with decision-support information to help refine action plans. EnergyIQ offers a wide array of benchmark metrics, with visual as well as tabular display. These include energy, costs, greenhouse-gas emissions, and a large array of characteristics (e.g. building components or operational strategies). The tool supports cross-sectional benchmarking for comparing the user's building to its peers at one point in time, as well as longitudinal benchmarking for tracking the performance of an individual building or enterprise portfolio over time. Based on user inputs, the tool generates a list of opportunities and recommended actions. Users can then explore the "Decision Support" module for helpful information on how to refine action plans, create design-intent documentation, and implement improvements. This includes information on best practices, links to other energy analysis tools and more. A variety of databases are available within EnergyIQ from which users can specify peer groups for comparison. Using the tool, these data can be visually browsed and used as a backdrop against which to view a variety of energy benchmarking metrics for the user's own building. Users can save their project information and return at a later date to continue their exploration. The initial database is the CA Commercial End-Use Survey (CEUS), which provides details on energy use and characteristics for about 2800 buildings (and 62 building types). CEUS is likely the most thorough survey of its kind ever conducted. The tool is built as a web service. The EnergyIQ web application is written in JSP with pervasive use of JavaScript and CSS2. EnergyIQ also supports a SOAP-based web service to allow the flow of queries and data to occur with non-browser implementations. Data are stored in an Oracle 10g database. References: Mills, Mathew, Brook and Piette. 2008. "Action Oriented Benchmarking: Concepts and Tools." Energy Engineering, Vol. 105, No. 4, pp 21-40. LBNL-358E; Mathew, Mills, Bourassa, Brook. 2008. "Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California." Energy Engineering, Vol. 105, No. 5, pp 6-18. LBNL-502E.

  16. Does Global Progress on Sanitation Really Lag behind Water? An Analysis of Global Progress on Community- and Household-Level Access to Safe Water and Sanitation

    PubMed Central

    Cumming, Oliver; Elliott, Mark; Overbo, Alycia; Bartram, Jamie

    2014-01-01

    Safe drinking water and sanitation are important determinants of human health and wellbeing and have recently been declared human rights by the international community. Increased access to both were included in the Millennium Development Goals under a single dedicated target for 2015. This target was reached in 2010 for water but sanitation will fall short; however, there is an important difference in the benchmarks used for assessing global access. For drinking water the benchmark is community-level access whilst for sanitation it is household-level access, so a pit latrine shared between households does not count toward the Millennium Development Goal (MDG) target. We estimated global progress for water and sanitation under two scenarios: with equivalent household- and community-level benchmarks. Our results demonstrate that the “sanitation deficit” is apparent only when household-level sanitation access is contrasted with community-level water access. When equivalent benchmarks are used for water and sanitation, the global deficit is as great for water as it is for sanitation, and sanitation progress in the MDG-period (1990–2015) outstrips that in water. As both drinking water and sanitation access yield greater benefits at the household-level than at the community-level, we conclude that any post–2015 goals should consider a household-level benchmark for both. PMID:25502659

  17. Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms: VISCERAL Anatomy Benchmarks.

    PubMed

    Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan

    2016-11-01

    Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help automate parts of this otherwise manual assessment process. A cloud-based evaluation framework is presented in this paper, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants can access only the training data; the virtual machines are then run privately by the benchmark administrators to compare algorithm performance objectively on an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus generated with the fusion of the participant algorithms on a larger set of non-manually-annotated medical images are available to the research community.
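
    Segmentation benchmarks of this kind typically score automatic results against the Gold Corpus with overlap metrics; the sketch below computes the Dice coefficient, used here only as one illustrative metric rather than the benchmark's full metric set.

        # Dice overlap between an automatic and a reference (gold) segmentation
        # mask; one of the standard overlap metrics used when benchmarking
        # anatomical structure segmentation (shown here purely as an illustration).
        import numpy as np

        def dice(auto_mask: np.ndarray, gold_mask: np.ndarray) -> float:
            auto_mask = auto_mask.astype(bool)
            gold_mask = gold_mask.astype(bool)
            intersection = np.logical_and(auto_mask, gold_mask).sum()
            total = auto_mask.sum() + gold_mask.sum()
            return 2.0 * intersection / total if total > 0 else 1.0

        a = np.zeros((50, 50), dtype=bool); a[10:30, 10:30] = True
        g = np.zeros((50, 50), dtype=bool); g[12:32, 12:32] = True
        print(f"Dice = {dice(a, g):.3f}")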

  18. Does global progress on sanitation really lag behind water? An analysis of global progress on community- and household-level access to safe water and sanitation.

    PubMed

    Cumming, Oliver; Elliott, Mark; Overbo, Alycia; Bartram, Jamie

    2014-01-01

    Safe drinking water and sanitation are important determinants of human health and wellbeing and have recently been declared human rights by the international community. Increased access to both were included in the Millennium Development Goals under a single dedicated target for 2015. This target was reached in 2010 for water but sanitation will fall short; however, there is an important difference in the benchmarks used for assessing global access. For drinking water the benchmark is community-level access whilst for sanitation it is household-level access, so a pit latrine shared between households does not count toward the Millennium Development Goal (MDG) target. We estimated global progress for water and sanitation under two scenarios: with equivalent household- and community-level benchmarks. Our results demonstrate that the "sanitation deficit" is apparent only when household-level sanitation access is contrasted with community-level water access. When equivalent benchmarks are used for water and sanitation, the global deficit is as great for water as it is for sanitation, and sanitation progress in the MDG-period (1990-2015) outstrips that in water. As both drinking water and sanitation access yield greater benefits at the household-level than at the community-level, we conclude that any post-2015 goals should consider a household-level benchmark for both.

  19. ARCHER{sub RT} – A GPU-based and photon-electron coupled Monte Carlo dose computing engine for radiation therapy: Software development and application to helical tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Lin; Du, Xining; Liu, Tianyu

    Purpose: Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHER{sub RT} is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head and neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHER{sub RT}. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHER{sub RT} and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHER{sub RT} agree well with DOSXYZnrc. For clinical cases, results from ARCHER{sub RT} are compared with those from GEANT4 and good agreement is observed. The gamma index test is performed for voxels whose dose is greater than 10% of the maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung, and head and neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to the specific architecture of the GPU, the modified Woodcock tracking algorithm performed worse than the original one. ARCHER{sub RT} achieves a fast speed for PSF-based dose calculations. With a single M2090 card, the simulations cost about 60, 50, and 80 s for the three cases, respectively, with a 1% statistical error in the PTV. Using the latest K40 card, the simulations are 1.7–1.8 times faster. More impressively, six M2090 cards could finish the simulations in 8.9–13.4 s. For comparison, the same simulations on an Intel E5-2620 (12 hyperthreads) cost about 500–800 s. Conclusions: ARCHER{sub RT} was developed successfully to perform fast and accurate MC dose calculation for radiotherapy using PSFs and patient CT phantoms.
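
    Woodcock (delta) tracking, mentioned above, samples path lengths against a majorant cross section and accepts collisions with probability proportional to the local cross section; the 1-D sketch below uses made-up cross sections and geometry to show only the sampling logic, not ARCHER's GPU implementation.

        # Minimal 1-D Woodcock (delta) tracking sketch through a two-region slab.
        # Cross sections and geometry are made up; only the sampling logic is the point:
        # path lengths are drawn from the majorant cross section, and collisions are
        # accepted with probability sigma_local / sigma_majorant (otherwise "virtual").
        import numpy as np

        rng = np.random.default_rng(1)

        def sigma_t(x):
            return 0.3 if x < 5.0 else 1.2      # total cross section (1/cm) per region

        sigma_max = 1.2                          # majorant over the whole slab
        slab_length = 10.0

        def distance_to_first_real_collision():
            x = 0.0
            while True:
                x += -np.log(rng.random()) / sigma_max   # sample against the majorant
                if x >= slab_length:
                    return None                          # particle escaped the slab
                if rng.random() < sigma_t(x) / sigma_max:
                    return x                             # real collision accepted

        samples = [distance_to_first_real_collision() for _ in range(20000)]
        collided = [d for d in samples if d is not None]
        print("escape fraction:", 1 - len(collided) / len(samples))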

  20. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience.

    PubMed

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael

    2007-08-21

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm(3) ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 +/- 1.2% and 0.5 +/- 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 +/- 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach. The physical effects modelled in the dose calculation software MUV allow accurate dose calculations in individual verification points. Independent calculations may be used to replace experimental dose verification once the IMRT programme is mature.
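
    The proposed action level combines a relative and an absolute tolerance (3% of the prescribed dose or 6 cGy); a small sketch of such a pass/fail rule is given below, with all dose values invented for illustration.

        # Pass/fail check of an independently calculated point dose against the TPS
        # value using a "percent of prescribed dose OR absolute cGy" tolerance,
        # mirroring the 3%/6 cGy limit proposed above. All dose values are made up.
        def within_tolerance(d_tps_cgy, d_indep_cgy, prescribed_cgy,
                             percent_limit=3.0, absolute_limit_cgy=6.0):
            diff = abs(d_indep_cgy - d_tps_cgy)
            return diff <= percent_limit / 100.0 * prescribed_cgy or diff <= absolute_limit_cgy

        prescribed = 200.0   # cGy per fraction
        points = [(185.0, 189.5), (42.0, 49.5), (120.0, 128.0)]   # (TPS, independent) in cGy
        for tps, indep in points:
            print(tps, indep, "PASS" if within_tolerance(tps, indep, prescribed) else "FAIL")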

  1. A clinical study of lung cancer dose calculation accuracy with Monte Carlo simulation.

    PubMed

    Zhao, Yanqun; Qi, Guohai; Yin, Gang; Wang, Xianliang; Wang, Pei; Li, Jian; Xiao, Mingyong; Li, Jie; Kang, Shengwei; Liao, Xiongfei

    2014-12-16

    The accuracy of dose calculation is crucial to the quality of treatment planning and, consequently, to the dose delivered to patients undergoing radiation therapy. Current general calculation algorithms such as Pencil Beam Convolution (PBC) and Collapsed Cone Convolution (CCC) have shortcomings in regard to severe inhomogeneities, particularly in those regions where charged particle equilibrium does not hold. The aim of this study was to evaluate the accuracy of the PBC and CCC algorithms in lung cancer radiotherapy using Monte Carlo (MC) technology. Four treatment plans were designed using the Oncentra Masterplan TPS for each patient. Two intensity-modulated radiation therapy (IMRT) plans were developed using the PBC and CCC algorithms, and two three-dimensional conformal therapy (3DCRT) plans were developed using the PBC and CCC algorithms. The DICOM-RT files of the treatment plans were exported to the Monte Carlo system for recalculation. The dose distributions of GTV, PTV and ipsilateral lung calculated by the TPS and MC were compared. For 3DCRT and IMRT plans, the mean dose differences for the GTV between the CCC and MC increased with decreasing GTV volume. For IMRT, the mean dose differences were found to be higher than those for 3DCRT. The CCC algorithm overestimated the GTV mean dose by approximately 3% for IMRT. For 3DCRT plans, when the volume of the GTV was greater than 100 cm(3), the mean doses calculated by CCC and MC showed almost no difference. PBC shows large deviations from the MC algorithm. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose to the entire lung, and the PBC algorithm overestimated V20 but underestimated V5; the difference in V10 was not statistically significant. PBC substantially overestimates the dose to the tumour, but the CCC is similar to the MC simulation. It is recommended that treatment plans for lung cancer be developed using an advanced dose calculation algorithm other than PBC. MC can accurately calculate the dose distribution in lung cancer and provides a notably effective tool for benchmarking the performance of other dose calculation algorithms within patients.
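
    The lung metrics quoted above (V5, V10, V20) are the percentage of the structure volume receiving at least the stated dose; for equal-volume voxels this reduces to a simple fraction, as in the sketch below with synthetic voxel doses.

        # Vx dose-volume metric (percentage of a structure receiving >= x Gy),
        # computed from per-voxel doses assuming equal voxel volumes.
        # The synthetic dose array below is illustrative only.
        import numpy as np

        def v_x(voxel_doses_gy: np.ndarray, threshold_gy: float) -> float:
            return 100.0 * np.mean(voxel_doses_gy >= threshold_gy)

        rng = np.random.default_rng(0)
        lung_doses = rng.gamma(shape=2.0, scale=5.0, size=100_000)   # synthetic lung voxel doses

        for x in (5, 10, 20):
            print(f"V{x} = {v_x(lung_doses, x):.1f}%")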

  2. Maximum unbiased validation (MUV) data sets for virtual screening based on PubChem bioactivity data.

    PubMed

    Rohrer, Sebastian G; Baumann, Knut

    2009-02-01

    Refined nearest neighbor analysis was recently introduced for the analysis of virtual screening benchmark data sets. It constitutes a technique from the field of spatial statistics and provides a mathematical framework for the nonparametric analysis of mapped point patterns. Here, refined nearest neighbor analysis is used to design benchmark data sets for virtual screening based on PubChem bioactivity data. A workflow is devised that purges data sets of compounds active against pharmaceutically relevant targets from unselective hits. Topological optimization using experimental design strategies monitored by refined nearest neighbor analysis functions is applied to generate corresponding data sets of actives and decoys that are unbiased with regard to analogue bias and artificial enrichment. These data sets provide a tool for Maximum Unbiased Validation (MUV) of virtual screening methods. The data sets and a software package implementing the MUV design workflow are freely available at http://www.pharmchem.tu-bs.de/lehre/baumann/MUV.html.
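
    The spatial statistics behind this approach start from nearest-neighbor distances in descriptor space; the sketch below computes active-active and active-decoy nearest-neighbor distances on random placeholder descriptors and is only the first ingredient of refined nearest neighbor analysis, not the full MUV design workflow.

        # Basic nearest-neighbor distances in a descriptor space, the quantity
        # underlying the spatial-statistics functions mentioned above. The random
        # "descriptor" vectors are placeholders for real compound descriptors.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        actives = rng.normal(0.0, 1.0, size=(30, 6))    # hypothetical active descriptors
        decoys = rng.normal(0.0, 1.0, size=(300, 6))    # hypothetical decoy descriptors

        # distance from each active to its nearest other active (analogue bias)
        d_aa, _ = cKDTree(actives).query(actives, k=2)
        nn_active_active = d_aa[:, 1]

        # distance from each active to its nearest decoy (artificial enrichment)
        nn_active_decoy, _ = cKDTree(decoys).query(actives, k=1)

        print("median active-active NN distance:", np.median(nn_active_active).round(3))
        print("median active-decoy NN distance:", np.median(nn_active_decoy).round(3))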

  3. Indicators of AEI applied to the Delaware Estuary.

    PubMed

    Barnthouse, Lawrence W; Heimbuch, Douglas G; Anthony, Vaughn C; Hilborn, Ray W; Myers, Ransom A

    2002-05-18

    We evaluated the impacts of entrainment and impingement at the Salem Generating Station on fish populations and communities in the Delaware Estuary. In the absence of an agreed-upon regulatory definition of "adverse environmental impact" (AEI), we developed three independent benchmarks of AEI based on observed or predicted changes that could threaten the sustainability of a population or the integrity of a community. Our benchmarks of AEI included: (1) disruption of the balanced indigenous community of fish in the vicinity of Salem (the "BIC" analysis); (2) a continued downward trend in the abundance of one or more susceptible fish species (the "Trends" analysis); and (3) occurrence of entrainment/impingement mortality sufficient, in combination with fishing mortality, to jeopardize the future sustainability of one or more populations (the "Stock Jeopardy" analysis). The BIC analysis utilized nearly 30 years of species presence/absence data collected in the immediate vicinity of Salem. The Trends analysis examined three independent data sets that document trends in the abundance of juvenile fish throughout the estuary over the past 20 years. The Stock Jeopardy analysis used two different assessment models to quantify potential long-term impacts of entrainment and impingement on susceptible fish populations. For one of these models, the compensatory capacities of the modeled species were quantified through meta-analysis of spawner-recruit data available for several hundred fish stocks. All three analyses indicated that the fish populations and communities of the Delaware Estuary are healthy and show no evidence of an adverse impact due to Salem. Although the specific models and analyses used at Salem are not applicable to every facility, we believe that a weight of evidence approach that evaluates multiple benchmarks of AEI using both retrospective and predictive methods is the best approach for assessing entrainment and impingement impacts at existing facilities.

  4. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    PubMed Central

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2018-01-01

    The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology. PMID:28079526
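
    The spectrum derivation described above can be caricatured as fitting nonnegative weights of a few basis depth-dose curves to a measured PDD; in the sketch below the exponential basis curves and target weights are invented, and a bounded trust-region least-squares fit stands in for the Levenberg-Marquardt algorithm used in the paper.

        # Simplified version of deriving spectral weights from a measured PDD:
        # fit nonnegative weights of a few basis depth-dose curves so their sum
        # matches the measurement. The exponential "basis PDDs" and the target
        # weights are made up; a real implementation would use mono-energetic
        # Monte Carlo PDDs and the Levenberg-Marquardt algorithm.
        import numpy as np
        from scipy.optimize import least_squares

        depth_cm = np.linspace(0.0, 15.0, 31)
        mu_per_energy = np.array([0.25, 0.15, 0.08])          # attenuation per energy bin (1/cm)
        basis = np.exp(-np.outer(mu_per_energy, depth_cm))    # shape (n_energies, n_depths)

        true_w = np.array([0.2, 0.5, 0.3])
        measured_pdd = true_w @ basis + np.random.default_rng(0).normal(0, 0.002, depth_cm.size)

        def residuals(w):
            return w @ basis - measured_pdd

        fit = least_squares(residuals, x0=np.ones(3) / 3, bounds=(0.0, np.inf))
        weights = fit.x / fit.x.sum()
        print("recovered spectral weights:", weights.round(3))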

  5. Evaluation of radiochromic gel dosimetry and polymer gel dosimetry in a clinical dose verification

    NASA Astrophysics Data System (ADS)

    Vandecasteele, Jan; De Deene, Yves

    2013-09-01

    A quantitative comparison of two full three-dimensional (3D) gel dosimetry techniques was performed in a clinical setting: radiochromic gel dosimetry with an in-house developed optical laser CT scanner, and polymer gel dosimetry with magnetic resonance imaging (MRI). To benchmark both gel dosimeters, they were exposed to a 6 MV photon beam and the depth dose was compared against a diamond detector measurement that served as the gold standard. Both gel dosimeters were found to be accurate to within 4%. In the 3D dose matrix of the radiochromic gel, hotspot dose deviations of up to 8% were observed, which are attributed to the fabrication procedure. The polymer gel readout was shown to be sensitive to B0 field and B1 field non-uniformities as well as temperature variations during scanning. The performance of the two gel dosimeters was also evaluated for a brain tumour IMRT treatment. Both gel-measured dose distributions were compared against treatment planning system predicted dose maps, which were validated independently with ion chamber measurements and portal dosimetry. In the radiochromic gel measurement, two sources of deviations could be identified. Firstly, the dose in a cluster of voxels near the edge of the phantom deviated from the planned dose. Secondly, the presence of dose hotspots in the order of 10%, related to inhomogeneities in the gel, limits the clinical acceptance of this dosimetry technique. Based on the results of the micelle gel dosimeter prototype presented here, chemical optimization will be the subject of future work. Polymer gel dosimetry is capable of measuring the absolute dose in the whole 3D volume within 5% accuracy. A temperature stabilization technique is incorporated to increase the accuracy during short measurements; however, keeping the temperature stable during long measurement times in both the calibration phantoms and the volumetric phantom is more challenging. The sensitivity of the MRI readout to minimal temperature fluctuations is demonstrated, which proves the need for adequate compensation strategies.

  6. Toxicological profile of ultrapure 2,2',3,4,4',5,5'-heptachlorbiphenyl (PCB 180) in adult rats.

    PubMed

    Viluksela, Matti; Heikkinen, Päivi; van der Ven, Leo T M; Rendel, Filip; Roos, Robert; Esteban, Javier; Korkalainen, Merja; Lensu, Sanna; Miettinen, Hanna M; Savolainen, Kari; Sankari, Satu; Lilienthal, Hellmuth; Adamsson, Annika; Toppari, Jorma; Herlin, Maria; Finnilä, Mikko; Tuukkanen, Juha; Leslie, Heather A; Hamers, Timo; Hamscher, Gerd; Al-Anati, Lauy; Stenius, Ulla; Dervola, Kine-Susann; Bogen, Inger-Lise; Fonnum, Frode; Andersson, Patrik L; Schrenk, Dieter; Halldin, Krister; Håkansson, Helen

    2014-01-01

    PCB 180 is a persistent non-dioxin-like polychlorinated biphenyl (NDL-PCB) abundantly present in food and the environment. Risk characterization of NDL-PCBs is confounded by the presence of highly potent dioxin-like impurities. We used ultrapure PCB 180 to characterize its toxicity profile in a 28-day repeat dose toxicity study in young adult rats extended to cover endocrine and behavioral effects. Using a loading dose/maintenance dose regimen, groups of 5 males and 5 females were given total doses of 0, 3, 10, 30, 100, 300, 1000 or 1700 mg PCB 180/kg body weight by gavage. Dose-responses were analyzed using benchmark dose modeling based on dose and adipose tissue PCB concentrations. Body weight gain was retarded at 1700 mg/kg during loading dosing, but recovered thereafter. The most sensitive endpoint of toxicity that was used for risk characterization was altered open field behavior in females; i.e. increased activity and distance moved in the inner zone of an open field suggesting altered emotional responses to unfamiliar environment and impaired behavioral inhibition. Other dose-dependent changes included decreased serum thyroid hormones with associated histopathological changes, altered tissue retinoid levels, decreased hematocrit and hemoglobin, decreased follicle stimulating hormone and luteinizing hormone levels in males and increased expression of DNA damage markers in liver of females. Dose-dependent hypertrophy of zona fasciculata cells was observed in adrenals suggesting activation of cortex. There were gender differences in sensitivity and toxicity profiles were partly different in males and females. PCB 180 adipose tissue concentrations were clearly above the general human population levels, but close to the levels in highly exposed populations. The results demonstrate a distinct toxicological profile of PCB 180 with lack of dioxin-like properties required for assignment of WHO toxic equivalency factor. However, PCB 180 shares several toxicological targets with dioxin-like compounds emphasizing the potential for interactions.
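
    The benchmark dose idea used in this analysis can be sketched, for a continuous and decreasing endpoint, as fitting a dose-response model and solving for the dose that produces a chosen benchmark response; the doses, responses, exponential model and 10% benchmark response below are illustrative, and a real analysis (e.g., with PROAST or BMDS) would use richer model families and also report the BMDL.

        # Minimal benchmark dose (BMD) sketch for a continuous, decreasing endpoint:
        # fit an exponential dose-response and solve for the dose giving a 10%
        # change from the fitted control mean. Doses and responses are made up.
        import numpy as np
        from scipy.optimize import curve_fit

        dose = np.array([0, 3, 10, 30, 100, 300, 1000, 1700], dtype=float)   # mg/kg bw
        response = np.array([102, 100, 98, 95, 88, 76, 60, 52], dtype=float) # e.g. hormone level

        def model(d, a, b):
            return a * np.exp(-b * d)

        (a_hat, b_hat), _ = curve_fit(model, dose, response, p0=(100.0, 1e-3))

        bmr = 0.10                                  # 10% decrease from control
        bmd = -np.log(1.0 - bmr) / b_hat
        print(f"fitted control mean = {a_hat:.1f}, BMD10 = {bmd:.1f} mg/kg bw")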

  7. Toxicological Profile of Ultrapure 2,2′,3,4,4′,5,5′-Heptachlorbiphenyl (PCB 180) in Adult Rats

    PubMed Central

    Viluksela, Matti; Heikkinen, Päivi; van der Ven, Leo T. M.; Rendel, Filip; Roos, Robert; Esteban, Javier; Korkalainen, Merja; Lensu, Sanna; Miettinen, Hanna M.; Savolainen, Kari; Sankari, Satu; Lilienthal, Hellmuth; Adamsson, Annika; Toppari, Jorma; Herlin, Maria; Finnilä, Mikko; Tuukkanen, Juha; Leslie, Heather A.; Hamers, Timo; Hamscher, Gerd; Al-Anati, Lauy; Stenius, Ulla; Dervola, Kine-Susann; Bogen, Inger-Lise; Fonnum, Frode; Andersson, Patrik L.; Schrenk, Dieter; Halldin, Krister; Håkansson, Helen

    2014-01-01

    PCB 180 is a persistent non-dioxin-like polychlorinated biphenyl (NDL-PCB) abundantly present in food and the environment. Risk characterization of NDL-PCBs is confounded by the presence of highly potent dioxin-like impurities. We used ultrapure PCB 180 to characterize its toxicity profile in a 28-day repeat dose toxicity study in young adult rats extended to cover endocrine and behavioral effects. Using a loading dose/maintenance dose regimen, groups of 5 males and 5 females were given total doses of 0, 3, 10, 30, 100, 300, 1000 or 1700 mg PCB 180/kg body weight by gavage. Dose-responses were analyzed using benchmark dose modeling based on dose and adipose tissue PCB concentrations. Body weight gain was retarded at 1700 mg/kg during loading dosing, but recovered thereafter. The most sensitive endpoint of toxicity that was used for risk characterization was altered open field behavior in females; i.e. increased activity and distance moved in the inner zone of an open field suggesting altered emotional responses to unfamiliar environment and impaired behavioral inhibition. Other dose-dependent changes included decreased serum thyroid hormones with associated histopathological changes, altered tissue retinoid levels, decreased hematocrit and hemoglobin, decreased follicle stimulating hormone and luteinizing hormone levels in males and increased expression of DNA damage markers in liver of females. Dose-dependent hypertrophy of zona fasciculata cells was observed in adrenals suggesting activation of cortex. There were gender differences in sensitivity and toxicity profiles were partly different in males and females. PCB 180 adipose tissue concentrations were clearly above the general human population levels, but close to the levels in highly exposed populations. The results demonstrate a distinct toxicological profile of PCB 180 with lack of dioxin-like properties required for assignment of WHO toxic equivalency factor. However, PCB 180 shares several toxicological targets with dioxin-like compounds emphasizing the potential for interactions. PMID:25137063

  8. Development of a Monte Carlo multiple source model for inclusion in a dose calculation auditing tool.

    PubMed

    Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm 2 field size and dose profiles for a 40 × 40 cm 2 field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm 2 to 30 × 30 cm 2 . The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
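
    The profile and phantom comparisons above rely on dose-difference/distance-to-agreement (gamma) criteria; a brute-force 1-D gamma sketch on synthetic profiles is shown below, without the interpolation and search optimizations of clinical gamma tools.

        # Simple 1-D global gamma index sketch for comparing a calculated dose
        # profile to a measured one with a dose-difference / distance-to-agreement
        # criterion (e.g., 3%/2 mm). Brute-force search; profiles are synthetic.
        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd_percent=3.0, dta_mm=2.0):
            dd = dd_percent / 100.0 * d_ref.max()        # global dose-difference criterion
            gammas = np.empty_like(d_ref)
            for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
                term = ((x_eval - xr) / dta_mm) ** 2 + ((d_eval - dr) / dd) ** 2
                gammas[i] = np.sqrt(term.min())
            return gammas

        x = np.linspace(-50, 50, 201)                                    # mm
        measured = 100.0 / (1.0 + np.exp((np.abs(x) - 30.0) / 2.0))      # synthetic profile
        calculated = 100.0 / (1.0 + np.exp((np.abs(x + 0.5) - 30.0) / 2.1))

        g = gamma_1d(x, measured, x, calculated)
        print(f"gamma pass rate (3%/2 mm): {100.0 * np.mean(g <= 1.0):.1f}%")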

  9. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    NASA Astrophysics Data System (ADS)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.

  10. Translating an AI application from Lisp to Ada: A case study

    NASA Technical Reports Server (NTRS)

    Davis, Gloria J.

    1991-01-01

    A set of benchmarks was developed to test the performance of a newly designed computer executing both Lisp and Ada. Among these was AutoClassII -- a large Artificial Intelligence (AI) application written in Common Lisp. The extraction of a representative subset of this complex application was aided by a Lisp Code Analyzer (LCA). The LCA enabled rapid analysis of the code, putting it in a concise and functionally readable form. An equivalent benchmark was created in Ada through manual translation of the Lisp version. A comparison of the execution results of both programs across a variety of compiler-machine combinations indicate that line-by-line translation coupled with analysis of the initial code can produce relatively efficient and reusable target code.

  11. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    NASA Technical Reports Server (NTRS)

    Hall, Laverne

    1995-01-01

    Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.

  12. Benchmarking specialty hospitals, a scoping review on theory and practice.

    PubMed

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category; or those dealing with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or if quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design, and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  13. Engine dynamic analysis with general nonlinear finite element codes. Part 2: Bearing element implementation, overall numerical characteristics and benchmarking

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Adams, M.; Fertis, J.; Zeid, I.; Lam, P.

    1982-01-01

    Finite element codes are used in modelling the rotor-bearing-stator structures common to the turbine industry. Engine dynamic simulation is achieved by developing strategies which enable the use of available finite element codes. The bearing elements developed are benchmarked by incorporation into a general purpose code (ADINA), and the numerical characteristics of finite element type rotor-bearing-stator simulations are evaluated through the use of various types of explicit/implicit numerical integration operators. The overall numerical efficiency of the procedure is also improved.

  14. High exposure to inorganic arsenic by food: the need for risk reduction.

    PubMed

    Gundert-Remy, Ursula; Damm, Georg; Foth, Heidi; Freyberger, Alexius; Gebel, Thomas; Golka, Klaus; Röhl, Claudia; Schupp, Thomas; Wollin, Klaus-Michael; Hengstler, Jan Georg

    2015-12-01

    Arsenic is a human carcinogen that occurs ubiquitously in soil and water. Based on epidemiological studies, a benchmark dose (lower/higher bound estimate) between 0.3 and 8 μg/kg bw/day was estimated to cause a 1 % increased risk of lung, skin and bladder cancer. A recently published study by EFSA on dietary exposure to inorganic arsenic in the European population reported 95th percentiles (lower bound min to upper bound max) for different age groups in the same range as the benchmark dose. For toddlers, a highly exposed group, the highest values ranged between 0.61 and 2.09 µg arsenic/kg bw/day. For all other age classes, the margin of exposure is also small. This scenario calls for regulatory action to reduce arsenic exposure. One priority measure should be to reduce arsenic in food categories that contribute most to exposure. In the EFSA study the food categories 'milk and dairy products,' 'drinking water' and 'food for infants' represent major sources of inorganic arsenic for infants and also rice is an important source. Long-term strategies are required to reduce inorganic arsenic in these food groups. The reduced consumption of rice and rice products which has been recommended may be helpful for a minority of individuals consuming unusually high amounts of rice. However, it is only of limited value for the general European population, because the food categories 'grain-based processed products (non rice-based)' or 'milk and dairy products' contribute more to the exposure with inorganic arsenic than the food category 'rice.' A balanced regulatory activity focusing on the most relevant food categories is required. In conclusion, exposure to inorganic arsenic represents a risk to the health of the European population, particularly to young children. Regulatory measures to reduce exposure are urgently required.
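
    The comparison drawn above between the benchmark dose range and estimated exposures amounts to a margin of exposure calculation; the sketch below uses the figures quoted in the abstract, with the worst-case/best-case pairing added purely for illustration.

        # Margin of exposure (MOE) = benchmark dose / estimated exposure.
        # Values are taken from the ranges quoted in the abstract; pairing the
        # lowest BMDL with the highest toddler exposure is just a worst-case illustration.
        bmdl_low, bmdl_high = 0.3, 8.0         # µg inorganic As / kg bw / day (benchmark dose range)
        toddler_exposure_low, toddler_exposure_high = 0.61, 2.09   # µg/kg bw/day (95th percentile)

        moe_worst = bmdl_low / toddler_exposure_high
        moe_best = bmdl_high / toddler_exposure_low
        print(f"margin of exposure: {moe_worst:.2f} (worst case) to {moe_best:.1f} (best case)")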

  15. Developing chemical criteria for wildlife: The benchmark dose versus NOAEL approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linder, G.

    1995-12-31

    Wildlife may be exposed to a wide variety of chemicals in their environment, and various strategies for evaluating wildlife risk for these chemicals have been developed. One, a "no-observable-adverse-effects-level" or NOAEL-approach, has increasingly been applied to develop chemical criteria for wildlife. In this approach, the NOAEL represents the highest experimental concentration at which there is no statistically significant change in some toxicity endpoint relative to a control. Another, the "benchmark dose" or BMD-approach, relies on the lower confidence limit for a concentration that corresponds to a small, but statistically significant, change in effect over some reference condition. Rather than corresponding to a single experimental concentration as does the NOAEL, the BMD-approach considers the full concentration-response curve for derivation of the BMD. Here, using a variety of vertebrates and an assortment of chemicals (including carbofuran, paraquat, methylmercury, cadmium, zinc, and copper), the NOAEL-approach will be critically evaluated relative to the BMD-approach. Statistical models used in the BMD-approach suggest these methods are potentially suitable for eliminating safety factors in risk calculations. A reluctance to recommend this, however, stems from the uncertainty associated with the shape of concentration-response curves at low concentrations. Also, with existing data the derivation of BMDs has shortcomings when sample size is small (10 or fewer animals per treatment). The success of BMD models clearly depends upon the continued collection of wildlife data in the field and laboratory, the design of toxicity studies sufficient for BMD calculations, and complete reporting of these results in the literature. Overall, the BMD-approach for developing chemical criteria for wildlife should be given further consideration, since it more fully evaluates concentration-response data.
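
    The contrast drawn above can be made concrete on the NOAEL side: the NOAEL is simply the highest tested concentration whose response does not differ significantly from control. The sketch below uses made-up data and plain t-tests in place of a proper multiple-comparison procedure such as Dunnett's test.

        # NOAEL-style screening: the highest tested concentration at which the
        # endpoint does not differ significantly from control. Data are made up,
        # and plain t-tests stand in for a proper Dunnett-type comparison.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        control = rng.normal(100, 8, size=10)
        groups = {                                  # concentration -> simulated endpoint values
            1.0: rng.normal(99, 8, size=10),
            3.0: rng.normal(97, 8, size=10),
            10.0: rng.normal(90, 8, size=10),
            30.0: rng.normal(78, 8, size=10),
        }

        noael = None
        for conc in sorted(groups):
            _, p = stats.ttest_ind(control, groups[conc])
            if p >= 0.05:
                noael = conc          # no significant change: candidate NOAEL
            else:
                break                 # first significant concentration ends the search
        print("NOAEL (illustrative):", noael)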

  16. Towards accurate modeling of noncovalent interactions for protein rigidity analysis.

    PubMed

    Fox, Naomi; Streinu, Ileana

    2013-01-01

    Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the pressing need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data set. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions. We showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu.
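
    For readers unfamiliar with the B-cubed score mentioned above, the sketch below computes item-level B-cubed precision, recall and F1 for two cluster decompositions. The input format (a mapping from atom or residue identifiers to rigid-cluster labels) and the toy data are assumptions for illustration; this is not KINARI's own implementation.

        # A minimal sketch of a B-cubed comparison of two cluster decompositions.
        # Both arguments map item id -> cluster label; only items present in both
        # mappings are scored. This is an illustrative re-implementation of the
        # general information-retrieval measure, not the KINARI code.

        def b_cubed(predicted, reference):
            """Return (precision, recall, F1) of `predicted` against `reference`.

            For each item, precision is the fraction of its predicted cluster that
            shares its reference cluster, and recall is the fraction of its
            reference cluster that shares its predicted cluster; both are averaged
            over items.
            """
            items = predicted.keys() & reference.keys()
            precisions, recalls = [], []
            for i in items:
                pred_mates = {j for j in items if predicted[j] == predicted[i]}
                ref_mates  = {j for j in items if reference[j] == reference[i]}
                overlap = len(pred_mates & ref_mates)
                precisions.append(overlap / len(pred_mates))
                recalls.append(overlap / len(ref_mates))
            p = sum(precisions) / len(precisions)
            r = sum(recalls) / len(recalls)
            return p, r, 2 * p * r / (p + r)

        # Toy example: two decompositions of six atoms into rigid clusters.
        kinari_clusters    = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "C"}
        reference_clusters = {1: "A", 2: "A", 3: "B", 4: "B", 5: "B", 6: "B"}
        print(b_cubed(kinari_clusters, reference_clusters))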

  17. Towards accurate modeling of noncovalent interactions for protein rigidity analysis

    PubMed Central

    2013-01-01

    Background Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the pressing need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. Results To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data set. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. Conclusion To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions. We showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu. PMID:24564209

  18. All inclusive benchmarking.

    PubMed

    Ellis, Judith

    2006-07-01

    The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. The Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being effectively used by some frontline staff. However, use is inconsistent: the value of the tool kit, and the support that clinical practice benchmarking requires to be effective, are not always recognized or provided by National Health Service managers, who are preoccupied with quantitative benchmarking approaches and the measurability of comparative performance data. This review of published benchmarking literature was based on an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature, moving to benchmarking activity in health services, and drawing not only on published examples of benchmarking approaches and models but also on web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used while remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also largely descriptive in its support of the effectiveness of benchmarking activity; although this does not seem to have restricted the popularity of quantitative benchmarking, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.

  19. History of dose specification in Brachytherapy: From Threshold Erythema Dose to Computational Dosimetry

    NASA Astrophysics Data System (ADS)

    Williamson, Jeffrey F.

    2006-09-01

    This paper briefly reviews the evolution of brachytherapy dosimetry from 1900 to the present. Dosimetric practices in brachytherapy fall into three distinct eras. During the era of biological dosimetry (1900-1938), radium pioneers could only specify Ra-226 and Rn-222 implants in terms of the mass of radium encapsulated within the implanted sources. Due to the high energy of its emitted gamma rays and the long range of its secondary electrons in air, free-air chambers could not be used to quantify the output of Ra-226 sources in terms of exposure. Biological dosimetry, most prominently the threshold erythema dose, gained currency as a means of intercomparing radium treatments with exposure-calibrated orthovoltage x-ray units. The classical dosimetry era (1940-1980) began with successful exposure standardization of Ra-226 sources by Bragg-Gray cavity chambers. Classical dose-computation algorithms, based upon 1-D buildup factor measurements and point-source superposition computational algorithms, were able to accommodate artificial radionuclides such as Co-60, Ir-192, and Cs-137. The quantitative dosimetry era (1980-present) arose in response to the increasing utilization of low-energy K-capture radionuclides such as I-125 and Pd-103, for which classical approaches could not be expected to estimate doses accurately. This led to intensive development of both experimental (largely TLD-100 dosimetry) and Monte Carlo dosimetry techniques, along with more accurate air-kerma strength standards. As a result of extensive benchmarking and intercomparison of these different methods, single-seed low-energy radionuclide dose distributions are now known with a total uncertainty of 3%-5%.
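
    The "point-source superposition" calculations of the classical era mentioned above can be summarized in a few lines: each source is treated as a point whose contribution falls off as the inverse square of distance, modified by a one-dimensional tissue attenuation/buildup factor. The sketch below illustrates that structure only; the polynomial coefficients and source strengths are placeholders, not measured data or any published algorithm.

        # A minimal sketch of a classical point-source superposition calculation:
        # each seed contributes strength * T(r) / r^2, where T(r) is a 1-D
        # tissue attenuation/buildup polynomial. All numbers are placeholders.
        import numpy as np

        def tissue_factor(r_cm, coeffs=(1.0, -0.005, -0.001)):
            """Hypothetical polynomial tissue factor T(r) = a + b*r + c*r**2."""
            a, b, c = coeffs
            return a + b * r_cm + c * r_cm ** 2

        def dose_rate(point, sources, strengths):
            """Sum point-source contributions: strength * T(r) / r^2 per seed."""
            total = 0.0
            for src, s in zip(sources, strengths):
                r = np.linalg.norm(np.asarray(point) - np.asarray(src))
                total += s * tissue_factor(r) / r ** 2
            return total

        # Two seeds of equal (arbitrary-unit) strength; dose rate 2 cm away.
        seeds = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
        print(dose_rate((0.0, 2.0, 0.0), seeds, strengths=[1.0, 1.0]))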

  20. Poster – 13: Evaluation of an in-house CCD camera film dosimetry imaging system for small field deliveries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, Michel; Alexander, Kevin; Olding, Tim

    Purpose: Radiochromic film dosimetry is a standard technique used in clinics to verify modern conformal radiation therapy delivery, and sometimes in research to validate other dosimeters. We are using film as a standard for comparison as we improve high-resolution three-dimensional gel systems for small field dosimetry; however, precise film dosimetry can be technically challenging. We report here measurements for fractionated stereotactic radiation therapy (FSRT) delivered using volumetric modulated arc therapy (VMAT) to investigate the accuracy and reproducibility of film measurements with a novel in-house readout system. We show that radiochromic film can accurately and reproducibly validate FSRT deliveries and also benchmark our gel dosimetry work. Methods: VMAT FSRT plans for metastases alone (PTV_MET) and whole brain plus metastases (WB+PTV_MET) were delivered onto a multi-configurational phantom with a sheet of EBT3 Gafchromic film inserted mid-plane. A dose of 400 cGy was prescribed to 4 small PTV_MET structures in the phantom, while a WB structure was prescribed a dose of 200 cGy in the WB+PTV_MET iterations. Doses generated from film readout with our in-house system were compared to treatment planned doses. Each delivery was repeated multiple times to assess reproducibility. Results and Conclusions: The reproducibility of film optical density readout was excellent throughout all experiments. Doses measured from the film agreed well with plans for the WB+PTV_MET delivery. However, film doses for the PTV_MET-only deliveries were significantly below the planned doses. This discrepancy is due to stray/scattered light perturbations in our system during readout. Correction schemes will be presented.
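
    The film readout described above ultimately relies on a calibration curve relating net optical density to dose, which is then inverted for measured films. The sketch below shows that generic workflow; the calibration points, the saturating curve form and the function names are illustrative assumptions, not the authors' in-house procedure.

        # A minimal sketch of a generic radiochromic-film calibration: fit net
        # optical density (OD) vs. known dose, then invert the fitted curve for a
        # measured film. Calibration data and the curve form are hypothetical.
        import numpy as np
        from scipy.optimize import curve_fit, brentq

        def net_od_model(dose_cgy, a, b, c):
            """Simple saturating form for net OD vs dose: a*D + b*D/(D + c)."""
            return a * dose_cgy + b * dose_cgy / (dose_cgy + c)

        # Hypothetical calibration: known doses (cGy) and measured net OD.
        cal_dose = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0])
        cal_od   = np.array([0.0, 0.079, 0.143, 0.240, 0.377, 0.567])

        params, _ = curve_fit(net_od_model, cal_dose, cal_od, p0=[1e-4, 0.5, 300.0])

        def od_to_dose(net_od, lo=0.0, hi=2000.0):
            """Invert the fitted curve numerically to recover dose from net OD."""
            return brentq(lambda d: net_od_model(d, *params) - net_od, lo, hi)

        print(f"Net OD 0.30 corresponds to about {od_to_dose(0.30):.0f} cGy")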
