Sample records for benchmark dose method

  1. EPA's Benchmark Dose Modeling Software

    EPA Science Inventory

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  2. Application of Benchmark Dose Methodology to a Variety of Endpoints and Exposures

    EPA Science Inventory

    This latest beta version (1.1b) of the U.S. Environmental Protection Agency (EPA) Benchmark Dose Software (BMDS) is being distributed for public comment. The BMDS system is being developed as a tool to facilitate the application of benchmark dose (BMD) methods to EPA hazardous p...

  3. Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, J. Allen, E-mail: davis.allen@epa.gov; Gift, Jeffrey S.; Zhao, Q. Jay

    2011-07-15

    Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial limitations that have been well defined, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, the NOAEL approach fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addresses many of the limitations of the NOAEL method. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of a BMD 95% lower bound confidence limit (BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations world-wide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.
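
    A minimal sketch of the BMD/BMDL machinery described here, assuming a two-parameter log-logistic dose-response model, a 10% extra-risk benchmark response, and hypothetical dose-group data; BMDS itself offers many more models and uses profile-likelihood rather than bootstrap confidence limits:

      # Hypothetical quantal data; doses, group sizes, and responder counts are illustrative only.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit, logit

      doses = np.array([0.0, 10.0, 30.0, 100.0])
      n     = np.array([50, 50, 50, 50])
      cases = np.array([2, 6, 14, 34])
      BMR   = 0.10                                # benchmark response (extra risk)

      def prob(d, bg, a, b):
          # background + log-logistic increase above background
          return bg + (1.0 - bg) * expit(a + b * np.log(np.maximum(d, 1e-12)))

      def fit(k):
          def nll(theta):
              bg, a, b = expit(theta[0]), theta[1], np.exp(theta[2])
              p = np.clip(prob(doses, bg, a, b), 1e-9, 1 - 1e-9)
              return -np.sum(k * np.log(p) + (n - k) * np.log1p(-p))
          res = minimize(nll, x0=[-3.0, -5.0, 0.0], method="Nelder-Mead")
          return expit(res.x[0]), res.x[1], np.exp(res.x[2])

      def bmd(a, b):
          # extra risk R(d) = expit(a + b*ln d) = BMR  =>  d = exp((logit(BMR) - a) / b)
          return np.exp((logit(BMR) - a) / b)

      bg, a, b = fit(cases)
      print("BMD :", bmd(a, b))

      # Parametric bootstrap for the BMDL (one-sided 95% lower limit)
      rng = np.random.default_rng(0)
      p_hat = prob(doses, bg, a, b)
      boot = [bmd(*fit(rng.binomial(n, p_hat))[1:]) for _ in range(500)]
      print("BMDL:", np.percentile(boot, 5))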

  4. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
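
    A minimal sketch of the isotonic idea, assuming hypothetical quantal data: pool-adjacent-violators gives a monotone estimate of the response probabilities, which is then inverted at the benchmark response (the paper's estimator and its bootstrap limits are more refined):

      import numpy as np
      from sklearn.isotonic import IsotonicRegression

      doses = np.array([0.0, 5.0, 20.0, 50.0, 100.0])   # hypothetical
      n     = np.array([40, 40, 40, 40, 40])
      cases = np.array([1, 3, 4, 13, 22])

      iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
      p_iso = iso.fit_transform(doses, cases / n, sample_weight=n)

      bg, bmr = p_iso[0], 0.10                        # background risk, extra-risk BMR
      target = bg + (1.0 - bg) * bmr                  # absolute risk at the BMD
      xp, idx = np.unique(p_iso, return_index=True)   # drop flat-segment ties
      print("nonparametric BMD:", np.interp(target, xp, doses[idx]))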

  5. Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)

    EPA Science Inventory

    EPA benchmark dose software (BMDS) is used to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...

  6. Properties of model-averaged BMDLs: a study of model averaging in dichotomous response risk estimation.

    PubMed

    Wheeler, Matthew W; Bailer, A John

    2007-06-01

    Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO₂ dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
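
    A minimal sketch of the "average the model, then invert it" strategy, assuming a hypothetical two-model space (logistic and quantal-linear) weighted by AIC; the paper bootstraps this whole procedure to obtain the MA-BMDL:

      import numpy as np
      from scipy.optimize import minimize, brentq
      from scipy.special import expit

      doses = np.array([0.0, 12.5, 25.0, 50.0, 100.0])   # hypothetical data
      n     = np.array([50] * 5)
      cases = np.array([1, 4, 8, 19, 33])
      BMR   = 0.10

      def p_logistic(d, th):                 # logistic in dose
          return expit(th[0] + np.exp(th[1]) * d)

      def p_qlinear(d, th):                  # bg + (1-bg)(1 - exp(-b*d))
          bg = expit(th[0])
          return bg + (1 - bg) * (1 - np.exp(-np.exp(th[1]) * d))

      def fit(pfun, x0):
          def nll(th):
              p = np.clip(pfun(doses, th), 1e-9, 1 - 1e-9)
              return -np.sum(cases * np.log(p) + (n - cases) * np.log1p(-p))
          res = minimize(nll, x0, method="Nelder-Mead")
          return res.x, res.fun

      models = [(p_logistic, [-3.0, -3.0]), (p_qlinear, [-3.0, -4.0])]
      fits   = [fit(pf, x0) for pf, x0 in models]
      aic    = np.array([2 * len(x) + 2 * f for x, f in fits])
      w      = np.exp(-0.5 * (aic - aic.min()))
      w     /= w.sum()                       # AIC model weights

      def p_avg(d):
          return sum(wi * pf(d, x) for wi, (pf, _), (x, _) in zip(w, models, fits))

      bg = p_avg(0.0)
      extra = lambda d: (p_avg(d) - bg) / (1 - bg) - BMR
      print("model-averaged BMD:", brentq(extra, 1e-6, doses.max()))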

  7. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

    Provides the quantal response models also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are derived.

  8. EPA and EFSA approaches for Benchmark Dose modeling

    EPA Science Inventory

    Benchmark dose (BMD) modeling has become the preferred approach in the analysis of toxicological dose-response data for the purpose of deriving human health toxicity values. The software packages most often used are Benchmark Dose Software (BMDS, developed by EPA) and PROAST (de...

  9. Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.

    2007-03-01

    In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.

  10. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The purpose of this document is to provide guidance for the Agency on the application of the benchmark dose approach in determining the point of departure (POD) for health effects data, whether a linear or nonlinear low dose extrapolation is used. The guidance includes discussion on computation of benchmark doses and benchmark concentrations (BMDs and BMCs) and their lower confidence limits, data requirements, dose-response analysis, and reporting requirements. This guidance is based on today's knowledge and understanding, and on experience gained in using this approach.

  11. [Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy].

    PubMed

    Renner, Franziska

    2016-09-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.
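
    A minimal sketch of the consistency criterion implied here: measurement and simulation agree if their difference lies within the combined standard uncertainty times a coverage factor. The values below are illustrative, not the paper's:

      import math

      def consistent(x_meas, u_meas, x_sim, u_sim, k=2.0):
          # combined standard uncertainty of the difference, expanded by k
          return abs(x_meas - x_sim) <= k * math.hypot(u_meas, u_sim)

      # e.g. a normalized dose with 0.7% experimental and 1.0% MC uncertainty
      print(consistent(1.000, 0.007, 1.010, 0.010))   # True at k = 2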

  12. 77 FR 36533 - Notice of Availability of the Benchmark Dose Technical Guidance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-19

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9688-7] Notice of Availability of the Benchmark Dose Technical Guidance AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of Availability. SUMMARY: The U.S. Environmental Protection Agency is announcing the availability of Benchmark Dose Technical...

  13. A Consumer's Guide to Benchmark Dose Models: Results of U.S. EPA Testing of 14 Dichotomous, 8 Continuous, and 6 Developmental Models (Presentation)

    EPA Science Inventory

    Benchmark dose risk assessment software (BMDS) was designed by EPA to generate dose-response curves and facilitate the analysis, interpretation and synthesis of toxicological data. Partial results of QA/QC testing of the EPA benchmark dose software (BMDS) are presented. BMDS pr...

  14. Benchmark dose analysis via nonparametric regression modeling

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057

  15. BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.

    EPA Science Inventory

    Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...

  16. Benchmark studies of induced radioactivity produced in LHC materials, Part II: Remanent dose rates.

    PubMed

    Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H

    2005-01-01

    A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment that was performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials were placed downstream of, and laterally to, a copper target, intercepting a positively charged mixed hadron beam with a momentum of 120 GeV c(-1). Emphasis was put on the reduction of uncertainties by taking measures such as careful monitoring of the irradiation parameters, using different instruments to measure dose rates, adopting detailed elemental analyses of the irradiated materials and making detailed simulations of the irradiation experiment. The measured and calculated dose rates are in good agreement.

  17. Latent uncertainties of the precalculated track Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, Marc-André; Seuntjens, Jan; Roberge, David

    Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
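
    A minimal sketch of the reported scaling: if the latent component is Poisson-like in the number of unique tracks per energy, it shrinks as 1/sqrt(N) while the bank's memory cost grows linearly. The anchor point (about 1% at 60,000 tracks per energy, 2.4 GB for 12 energies) comes from the abstract; the other rows are extrapolations under that assumption:

      import numpy as np

      tracks = np.array([5_000, 15_000, 60_000, 240_000])   # unique tracks per energy
      ref_tracks, ref_sigma, ref_gb = 60_000, 0.01, 2.4     # anchor from the abstract

      sigma = ref_sigma * np.sqrt(ref_tracks / tracks)      # latent uncertainty
      gb    = ref_gb * tracks / ref_tracks                  # 12-energy bank size

      for t, s, g in zip(tracks, sigma, gb):
          print(f"{t:>7d} tracks/energy: latent ~ {100*s:.2f}%, bank ~ {g:.1f} GB")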

  18. Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria

    ERIC Educational Resources Information Center

    Titze, Ingo R.; Hunter, Eric J.

    2015-01-01

    Purpose: School-teachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method: Vibration dosimetry is reformulated with the inclusion of collision stress.…

  19. TH-A-19A-04: Latent Uncertainties and Performance of a GPU-Implemented Pre-Calculated Track Monte Carlo Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, M; Seuntjens, J; Roberge, D

    Purpose: Assessing the performance and uncertainty of a pre-calculated Monte Carlo (PMC) algorithm for proton and electron transport running on graphics processing units (GPU). While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from recycling a limited number of tracks in the pre-generated track bank is missing from the literature. With a proper uncertainty analysis, an optimal pre-generated track bank size can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pre-generated for electrons and protons using EGSnrc and GEANT4, respectively. The PMC algorithm for track transport was implemented on the CUDA programming framework. GPU-PMC dose distributions were compared to benchmark dose distributions simulated using general-purpose MC codes in the same conditions. A latent uncertainty analysis was performed by comparing GPU-PMC dose values to a “ground truth” benchmark while varying the track bank size and primary particle histories. Results: GPU-PMC dose distributions and benchmark doses were within 1% of each other in voxels with dose greater than 50% of Dmax. In proton calculations, a submillimeter distance-to-agreement error was observed at the Bragg peak. Latent uncertainty followed a Poisson distribution with the number of tracks per energy (TPE) and a track bank of 20,000 TPE produced a latent uncertainty of approximately 1%. Efficiency analysis showed a 937× and 508× gain over a single processor core running DOSXYZnrc for 16 MeV electrons in water and bone, respectively. Conclusion: The GPU-PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty below 1%. The track bank size necessary to achieve an optimal efficiency can be tuned based on the desired uncertainty. Coupled with a model to calculate dose contributions from uncharged particles, GPU-PMC is a candidate for inverse planning of modulated electron radiotherapy and scanned proton beams. This work was supported in part by FRSQ-MSSS (Grant No. 22090), NSERC RG (Grant No. 432290) and CIHR MOP (Grant No. MOP-211360).

  20. Using the benchmark dose (BMD) methodology to determine an appropriate reduction of certain ingredients in food products.

    PubMed

    Bi, Jian

    2010-01-01

    As the desire to promote health increases, reductions of certain ingredients, for example, sodium, sugar, and fat, in food products are widely requested. However, the reduction is not risk free in sensory and marketing terms. Over-reduction may change the taste and flavor of a product and lead to a decrease in consumers' overall liking or purchase intent. This article uses the benchmark dose (BMD) methodology to determine an appropriate reduction. Calculations of BMD and its one-sided lower confidence limit (BMDL) are illustrated. The article also discusses how to calculate BMD and BMDL for overdispersed binary data in replicated testing, based on a corrected beta-binomial model. USEPA Benchmark Dose Software (BMDS) was used and S-Plus programs were developed. The method discussed in the article was originally used to determine an appropriate reduction of such ingredients, considering both health reasons and sensory or marketing risk.
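
    A minimal sketch of the overdispersion ingredient, assuming hypothetical replicated consumer judgments: a beta-binomial log-likelihood whose precision parameter captures the extra-binomial variation; the paper couples such a model with a dose-response curve in the ingredient reduction to obtain the BMD and BMDL:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import betaln, gammaln

      k = np.array([0, 1, 1, 2, 3, 3, 4, 4])   # "dislike" counts per panelist (hypothetical)
      n = 4                                     # replicate tastings per panelist

      def log_comb(n, k):
          return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

      def nll(theta):
          mu  = 1.0 / (1.0 + np.exp(-theta[0]))   # mean response probability
          phi = np.exp(theta[1])                  # precision; small = overdispersed
          a, b = mu * phi, (1 - mu) * phi
          ll = log_comb(n, k) + betaln(k + a, n - k + b) - betaln(a, b)
          return -ll.sum()

      res = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
      print("fitted mean response rate:", 1.0 / (1.0 + np.exp(-res.x[0])))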

  1. A MULTIMODEL APPROACH FOR CALCULATING BENCHMARK DOSE

    EPA Science Inventory


    A Multimodel Approach for Calculating Benchmark Dose
    Ramon I. Garcia and R. Woodrow Setzer

    In the assessment of dose response, a number of plausible dose-response models may give fits that are consistent with the data. If no dose-response formulation had been speci...

  2. Translational benchmark risk analysis

    PubMed Central

    Piegorsch, Walter W.

    2010-01-01

    Translational development – in the sense of translating a mature methodology from one area of application to another, evolving area – is discussed for the use of benchmark doses in quantitative risk assessment. Illustrations are presented with traditional applications of the benchmark paradigm in biology and toxicology, and also with risk endpoints that differ from traditional toxicological archetypes. It is seen that the benchmark approach can apply to a diverse spectrum of risk management settings. This suggests a promising future for this important risk-analytic tool. Extensions of the method to a wider variety of applications represent a significant opportunity for enhancing environmental, biomedical, industrial, and socio-economic risk assessments. PMID:20953283

  3. Latent uncertainties of the precalculated track Monte Carlo method.

    PubMed

    Renaud, Marc-André; Roberge, David; Seuntjens, Jan

    2015-01-01

    While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤ 1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.

  4. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
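
    A minimal numeric sketch of the point, assuming normally distributed values, a linear mean shift with dose, and the "hybrid" risk definition used here (abnormal = above the control 99th percentile). All numbers are illustrative:

      from scipy.stats import norm

      s_a, s_m = 1.0, 0.6           # among-animal and measurement-error SDs
      beta = 0.5                    # hypothetical mean shift per unit dose
      risk0, risk_bmd = 0.01, 0.11  # background 1%, plus 10% added risk

      def bmd(sd):
          # the mean shift that raises P(X > control 99th percentile) to 0.11
          shift = (norm.ppf(1 - risk0) - norm.ppf(1 - risk_bmd)) * sd
          return shift / beta

      print("BMD using s_a (correct)      :", bmd(s_a))
      print("BMD using total SD (inflated):", bmd((s_a**2 + s_m**2) ** 0.5))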

  5. Methods for Derivation of Inhalation Reference Concentrations and Application of Inhalation Dosimetry

    EPA Pesticide Factsheets

    EPA's methodology for estimation of inhalation reference concentrations (RfCs) as benchmark estimates of the quantitative dose-response assessment of chronic noncancer toxicity for individual inhaled chemicals.

  6. SU-E-T-466: Implementation of An Extension Module for Dose Response Models in the TOPAS Monte Carlo Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Mendez, J; Faddegon, B; Perl, J

    2015-06-15

    Purpose: To develop and verify an extension to TOPAS for calculation of dose response models (TCP/NTCP). TOPAS wraps and extends Geant4. Methods: The TOPAS DICOM interface was extended to include structure contours, for subsequent calculation of DVHs and TCP/NTCP. The following dose response models were implemented: Lyman-Kutcher-Burman (LKB), critical element (CE), population based critical volume (CV), parallel-serial, a sigmoid-based model of Niemierko for NTCP and TCP, and a Poisson-based model for TCP. For verification, results for the parallel-serial and Poisson models, with 6 MV x-ray dose distributions calculated with TOPAS and Pinnacle v9.2, were compared to data from the benchmark configuration of the AAPM Task Group 166 (TG166). We provide a benchmark configuration suitable for proton therapy along with results for the implementation of the Niemierko, CV and CE models. Results: The maximum difference in DVH calculated with Pinnacle and TOPAS was 2%. Differences between TG166 data and Monte Carlo calculations of up to 4.2%±6.1% were found for the parallel-serial model and up to 1.0%±0.7% for the Poisson model (including the uncertainty due to lack of knowledge of the point spacing in TG166). For CE, CV and Niemierko models, the discrepancies between the Pinnacle and TOPAS results are 74.5%, 34.8% and 52.1% when using 29.7 cGy point spacing, the differences being highly sensitive to dose spacing. On the other hand, with our proposed benchmark configuration, the largest differences were 12.05%±0.38%, 3.74%±1.6%, 1.57%±4.9% and 1.97%±4.6% for the CE, CV, Niemierko and LKB models, respectively. Conclusion: Several dose response models were successfully implemented with the extension module. Reference data was calculated for future benchmarking. Dose response calculated for the different models varied much more widely for the TG166 benchmark than for the proposed benchmark, which had much lower sensitivity to the choice of DVH dose points. This work was supported by National Cancer Institute Grant R01CA140735.
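
    A minimal sketch of one of the listed models, the Lyman-Kutcher-Burman NTCP: reduce a differential DVH to a generalized EUD, then map it through a probit. The parameter values are placeholders, not TG-166 or TOPAS values:

      import numpy as np
      from scipy.stats import norm

      def lkb_ntcp(dose_gy, frac_volume, n=0.25, m=0.15, td50=55.0):
          # dose_gy: DVH bin-centre doses; frac_volume: fractional volume per bin
          geud = np.sum(frac_volume * dose_gy ** (1.0 / n)) ** n
          return norm.cdf((geud - td50) / (m * td50))

      # toy two-bin DVH: half the organ at 20 Gy, half at 60 Gy
      print(lkb_ntcp(np.array([20.0, 60.0]), np.array([0.5, 0.5])))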

  7. Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data

    EPA Science Inventory

    The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...

  8. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Hallaq, Hania A., E-mail: halhallaq@radonc.uchicago.edu; Chmura, Steven J.; Salama, Joseph K.

    Purpose: The NRG-BR001 trial is the first National Cancer Institute–sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. Methods and Materials: The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Results: Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Conclusions: Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements.

  9. RESULTS OF QA/QC TESTING OF EPA BENCHMARK DOSE SOFTWARE VERSION 1.2

    EPA Science Inventory

    EPA is developing benchmark dose software (BMDS) to support cancer and non-cancer dose-response assessments. Following the recent public review of BMDS version 1.1b, EPA developed a Hill model for evaluating continuous data, and improved the user interface and Multistage, Polyno...

  10. SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowenstein, J; Nguyen, H; Roll, J

    Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site's ability to develop a treatment that meets a specific protocol's treatment guidelines prior to placing their first patient on the protocol. A pre-treatment review is an actual patient placed on the protocol in which the dosimetry and contour volumes are evaluated to be per protocol guidelines prior to allowing the beginning of the treatment. A key component of these QA mechanisms is that sites are provided timely feedback to educate them on how to plan per the protocol and prevent protocol deviations on patients accrued to a protocol. For both benchmarks and pre-treatment reviews a dose volume analysis (DVA) was performed using MIM™ software. For pre-treatment reviews a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the cases met protocol requirements. The 35 percent of sites failing the benchmark subsequently modified their planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, in 30 percent of the submitted cases the pre-treatment review failed, where the majority (71 percent) failed the DVA. 20 percent of sites submitting patients failed to correct their dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocol's requirements. Support: U24-CA-180803.

  11. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Finlay, J; Mesina, C

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6-15 MV includes the percentage depth dose (PDD) measured at SSD = 90 cm and output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. Off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data includes dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with large errors (up to 13%) observed in the buildup regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of the deterministic algorithm is better than that of a convolution algorithm in heterogeneous medium.

  12. Modification and benchmarking of SKYSHINE-III for use with ISFSI cask arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertel, N.E.; Napolitano, D.G.

    1997-12-01

    Dry cask storage arrays are becoming more and more common at nuclear power plants in the United States. Title 10 of the Code of Federal Regulations, Part 72, limits doses at the controlled area boundary of these independent spent-fuel storage installations (ISFSI) to 0.25 mSv (25 mrem)/yr. The minimum controlled area boundaries of such a facility are determined by cask array dose calculations, which include direct radiation and radiation scattered by the atmosphere, also known as skyshine. NAC International (NAC) uses SKYSHINE-III to calculate the gamma-ray and neutron dose rates as a function of distance from ISFSI arrays. In this paper, we present modifications to SKYSHINE-III that more explicitly model cask arrays. In addition, we have benchmarked the radiation transport methods used in SKYSHINE-III against ⁶⁰Co gamma-ray experiments and MCNP neutron calculations.

  13. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.

  14. An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm

    NASA Astrophysics Data System (ADS)

    Jacques, Robert; McNutt, Todd

    2014-03-01

    Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to the faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position and direction sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte-Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm³) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density scaled heterogeneity correction with a position and direction sensitive density filter. This method significantly improved the accuracy of the GPU based algorithm, reaching the accuracy levels of Monte Carlo based methods with performance in a few tenths of seconds per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
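
    A minimal sketch of the per-voxel %|mm metric defined here, using a brute-force 1D distance-to-agreement for clarity (real implementations search in 3D). The profiles are toy data:

      import numpy as np

      def percent_or_mm(test, ref, voxel_mm):
          pct = 100.0 * np.abs(test - ref) / ref.max()   # % of max reference dose
          x = np.arange(len(ref)) * voxel_mm
          dta = np.array([abs(x[np.argmin(np.abs(ref - t))] - x[i])
                          for i, t in enumerate(test)])  # nearest agreeing position
          return np.minimum(pct, dta)                    # per-voxel %|mm

      ref  = np.linspace(0.2, 1.0, 50)   # toy reference (Monte Carlo) profile
      test = 1.02 * ref                  # a test profile that is 2% hot
      print("mean %|mm error:", percent_or_mm(test, ref, voxel_mm=2.0).mean())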

  15. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

    Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.

  16. SU-E-J-30: Benchmark Image-Based TCP Calculation for Evaluation of PTV Margins for Lung SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, M; Chetty, I; Zhong, H

    2014-06-01

    Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans with 3 and 5 mm margins were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3mm margin plans, and between 0.29% and 6.3% for 5mm-margin plans. As the PTV margin reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP errors decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.

  17. Correlation of Noncancer Benchmark Doses in Short- and Long-Term Rodent Bioassays.

    PubMed

    Kratchman, Jessica; Wang, Bing; Fox, John; Gray, George

    2018-05-01

    This study investigated whether, in the absence of chronic noncancer toxicity data, short-term noncancer toxicity data can be used to predict chronic toxicity effect levels by focusing on the dose-response relationship instead of a critical effect. Data from National Toxicology Program (NTP) technical reports have been extracted and modeled using the Environmental Protection Agency's Benchmark Dose Software. Best-fit, minimum benchmark dose (BMD), and benchmark dose lower limits (BMDLs) have been modeled for all NTP pathologist identified significant nonneoplastic lesions, final mean body weight, and mean organ weight of 41 chemicals tested by NTP between 2000 and 2012. Models were then developed at the chemical level using orthogonal regression techniques to predict chronic (two years) noncancer health effect levels using the results of the short-term (three months) toxicity data. The findings indicate that short-term animal studies may reasonably provide a quantitative estimate of a chronic BMD or BMDL. This can allow for faster development of human health toxicity values for risk assessment for chemicals that lack chronic toxicity data.
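
    A minimal sketch of the regression step, assuming hypothetical subchronic/chronic BMD pairs on a log10 scale: orthogonal-distance (Deming-type) regression treats both variables as error-prone, unlike ordinary least squares:

      import numpy as np
      from scipy import odr

      sub_bmd     = np.array([3.2, 11.0, 45.0, 120.0, 300.0])  # 3-month BMDs (hypothetical)
      chronic_bmd = np.array([1.1, 4.0, 16.0, 35.0, 110.0])    # 2-year BMDs (hypothetical)

      x, y = np.log10(sub_bmd), np.log10(chronic_bmd)
      model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)  # straight line
      fit = odr.ODR(odr.RealData(x, y), model, beta0=[0.0, 1.0]).run()

      b0, b1 = fit.beta
      print(f"log10(chronic BMD) ~ {b0:.2f} + {b1:.2f} * log10(subchronic BMD)")
      print("predicted chronic BMD at subchronic 50:", 10 ** (b0 + b1 * np.log10(50)))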

  18. APPLICATION OF BENCHMARK DOSE METHODOLOGY TO DATA FROM PRENATAL DEVELOPMENTAL TOXICITY STUDIES

    EPA Science Inventory

    The benchmark dose (BMD) concept was applied to 246 conventional developmental toxicity datasets from government, industry and commercial laboratories. Five modeling approaches were used, two generic and three specific to developmental toxicity (DT models). BMDs for both quantal ...

  19. Categorical Regression and Benchmark Dose Software 3.0

    EPA Science Inventory

    The objective of this full-day course is to provide participants with interactive training on the use of the U.S. Environmental Protection Agency’s (EPA) Benchmark Dose software (BMDS, version 3.0, released fall 2018) and Categorical Regression software (CatReg, version 3.1...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Grace L.; Department of Health Services Research, The University of Texas MD Anderson Cancer Center, Houston, Texas; Jiang, Jing

    Purpose: High-quality treatment for intact cervical cancer requires external radiation therapy, brachytherapy, and chemotherapy, carefully sequenced and completed without delays. We sought to determine how frequently current treatment meets quality benchmarks and whether new technologies have influenced patterns of care. Methods and Materials: By searching diagnosis and procedure claims in MarketScan, an employment-based health care claims database, we identified 1508 patients with nonmetastatic, intact cervical cancer treated from 1999 to 2011, who were <65 years of age and received >10 fractions of radiation. Treatments received were identified using procedure codes and compared with 3 quality benchmarks: receipt of brachytherapy, receipt of chemotherapy, and radiation treatment duration not exceeding 63 days. The Cochran-Armitage test was used to evaluate temporal trends. Results: Seventy-eight percent of patients (n=1182) received brachytherapy, with brachytherapy receipt stable over time (Cochran-Armitage P_trend=.15). Among patients who received brachytherapy, 66% had high–dose rate and 34% had low–dose rate treatment, although use of high–dose rate brachytherapy steadily increased to 75% by 2011 (P_trend<.001). Eighteen percent of patients (n=278) received intensity modulated radiation therapy (IMRT), and IMRT receipt increased to 37% by 2011 (P_trend<.001). Only 2.5% of patients (n=38) received IMRT in the setting of brachytherapy omission. Overall, 79% of patients (n=1185) received chemotherapy, and chemotherapy receipt increased to 84% by 2011 (P_trend<.001). Median radiation treatment duration was 56 days (interquartile range, 47-65 days); however, duration exceeded 63 days in 36% of patients (n=543). Although 98% of patients received at least 1 benchmark treatment, only 44% received treatment that met all 3 benchmarks. With more stringent indicators (brachytherapy, ≥4 chemotherapy cycles, and duration not exceeding 56 days), only 25% of patients received treatment that met all benchmarks. Conclusion: In this cohort, most cervical cancer patients received treatment that did not comply with all 3 benchmarks for quality treatment. In contrast to increasing receipt of newer radiation technologies, there was little improvement in receipt of essential treatment benchmarks.

  1. A Matter of Timing: Identifying Significant Multi-Dose Radiotherapy Improvements by Numerical Simulation and Genetic Algorithm Search

    PubMed Central

    Angus, Simon D.; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high-fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) on tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effective means of significantly improving clinical efficacy. PMID:25460164
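
    A minimal sketch of the search component, with a toy analytic fitness standing in for the expensive tumour simulation the paper actually evaluates: chromosomes are inter-fraction gaps in hours, and the surrogate rewards the 17-18 h periodicity the study found to perform best:

      import random

      GAPS = list(range(10, 24))            # candidate gaps, 10-23 h
      N_GAPS, POP, GENS = 9, 30, 40         # 10 fractions -> 9 gaps

      def fitness(proto):
          # toy surrogate for the spheroid simulation's cell-count outcome
          return -sum((g - 17.5) ** 2 for g in proto)

      def mutate(p):
          i = random.randrange(len(p))
          return p[:i] + [random.choice(GAPS)] + p[i + 1:]

      def crossover(a, b):
          cut = random.randrange(1, len(a))
          return a[:cut] + b[cut:]

      pop = [[random.choice(GAPS) for _ in range(N_GAPS)] for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=fitness, reverse=True)
          elite = pop[: POP // 3]
          pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                         for _ in range(POP - len(elite))]

      print("best gap schedule (h):", max(pop, key=fitness))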

  2. A matter of timing: identifying significant multi-dose radiotherapy improvements by numerical simulation and genetic algorithm search.

    PubMed

    Angus, Simon D; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high-fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) on tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effective means of significantly improving clinical efficacy.

  3. Benchmark solutions for the galactic heavy-ion transport equations with energy and spatial coupling

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry D.; Townsend, Lawrence W.; Lamkin, Stanley L.; Wilson, John W.

    1991-01-01

    Nontrivial benchmark solutions are developed for the galactic heavy ion transport equations in the straight-ahead approximation with energy and spatial coupling. Analytical representations of the ion fluxes are obtained for a variety of sources with the assumption that the nuclear interaction parameters are energy independent. The method utilizes an analytical Laplace transform inversion to yield a closed form representation that is computationally efficient. The flux profiles are then used to predict ion dose profiles, which are important for shield design studies.

  4. Neutron skyshine calculations with the integral line-beam method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gui, A.A.; Shultis, J.K.; Faw, R.E.

    1997-10-01

    Recently developed line- and conical-beam response functions are used to calculate neutron skyshine doses for four idealized source geometries. These calculations, which can serve as benchmarks, are compared with MCNP calculations, and the excellent agreement indicates that the integral conical- and line-beam method is an effective alternative to more computationally expensive transport calculations.

  5. Introduction of risk size in the determination of uncertainty factor UFL in risk assessment

    NASA Astrophysics Data System (ADS)

    Xue, Jinling; Lu, Yun; Velasquez, Natalia; Yu, Ruozhen; Hu, Hongying; Liu, Zhengtao; Meng, Wei

    2012-09-01

    The methodology for using uncertainty factors in health risk assessment has been developed over several decades. A default value is usually applied for the uncertainty factor UFL, which is used to extrapolate from the LOAEL (lowest observed adverse effect level) to the NAEL (no adverse effect level). Here, we have developed a new method that establishes a linear relationship between UFL and the additional risk level at the LOAEL, based on dose-response information that a fixed default value ignores. This linear formula makes it possible to select UFL properly over the additional risk range from 5.3% to 16.2%. The results also indicate that the default value of 10 may not be conservative enough when the additional risk level at the LOAEL exceeds 16.2%. Furthermore, this novel method not only provides a flexible UFL instead of the traditional default value, but also ensures a conservative estimate of the UFL with fewer errors, and avoids the benchmark response selection involved in the benchmark dose method. These advantages can improve the estimation of the extrapolation starting point in risk assessment.
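
    To make the rule concrete, the sketch below implements a linear UFL selection over the stated validity range. The anchor values (UFL = 3 at 5.3% additional risk, UFL = 10 at 16.2%) are illustrative assumptions, not the published coefficients.

      # Hypothetical linear UFL rule; the anchor points are assumptions chosen
      # only so that the default UFL of 10 is reached at 16.2% additional risk.
      def ufl_linear(additional_risk, r_lo=0.053, ufl_lo=3.0,
                     r_hi=0.162, ufl_hi=10.0):
          if not (r_lo <= additional_risk <= r_hi):
              raise ValueError("outside the range where the formula applies")
          slope = (ufl_hi - ufl_lo) / (r_hi - r_lo)
          return ufl_lo + slope * (additional_risk - r_lo)

      print(round(ufl_linear(0.10), 2))  # UFL for 10% additional risk at LOAEL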

  6. Avoiding Pitfalls in the Use of the Benchmark Dose Approach to Chemical Risk Assessments; Some Illustrative Case Studies (Presentation)

    EPA Science Inventory

    The USEPA's benchmark dose software (BMDS) version 1.2 has been available over the Internet since April, 2000 (epa.gov/ncea/bmds.htm), and has already been used in risk assessments of some significant environmental pollutants (e.g., diesel exhaust, dichloropropene, hexachlorocycl...

  7. Benchmark solutions for the galactic ion transport equations: Energy and spatially dependent problems

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry D.; Townsend, Lawrence W.; Wilson, John W.

    1989-01-01

    Nontrivial benchmark solutions are developed for the galactic ion transport (GIT) equations in the straight-ahead approximation. These equations are used to predict potential radiation hazards in the upper atmosphere and in space. Two levels of difficulty are considered: (1) energy independent, and (2) spatially independent. The analysis emphasizes analytical methods never before applied to the GIT equations. Most of the representations derived have been numerically implemented and compared to more approximate calculations. Accurate ion fluxes are obtained (3 to 5 digits) for nontrivial sources. For monoenergetic beams, both accurate doses and fluxes are found. The benchmarks presented are useful in assessing the accuracy of transport algorithms designed to accommodate more complex radiation protection problems. In addition, these solutions can provide fast and accurate assessments of relatively simple shield configurations.

  8. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study.

    PubMed

    De Bondt, Timo; Mulkens, Tom; Zanca, Federica; Pyfferoen, Lotte; Casselman, Jan W; Parizel, Paul M

    2017-02-01

    To benchmark regional standard practice for paediatric cranial CT procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT data were retrospectively collected during a 1-year period in 3 different hospitals of the same country. A dose tracking system was used to gather information automatically. Dose (CTDI and DLP), scan length, number of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference levels (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with the lowest dose levels showed the smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization are possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. • Significant differences were observed in the delivered dose between age groups and hospitals. • Using age-adapted scanning protocols gives a nearly linear dose increase. • Sharing dose data can be a trigger for hospitals to reduce dose levels.

  9. Dose specification for hippocampal sparing whole brain radiotherapy (HS WBRT): considerations from the UK HIPPO trial QA programme.

    PubMed

    Megias, Daniel; Phillips, Mark; Clifton-Hadley, Laura; Harron, Elizabeth; Eaton, David J; Sanghera, Paul; Whitfield, Gillian

    2017-03-01

    The HIPPO trial is a UK randomized Phase II trial of hippocampal sparing (HS) versus conventional whole-brain radiotherapy after surgical resection or radiosurgery in patients with a favourable prognosis and 1-4 brain metastases. Each participating centre completed a planning benchmark case as part of the dedicated radiotherapy trials quality assurance programme (RTQA), promoting the safe and effective delivery of HS intensity-modulated radiotherapy (IMRT) in a multicentre trial setting. Submitted planning benchmark cases were reviewed using visualization for radiotherapy software (VODCA), evaluating plan quality and compliance in relation to the HIPPO radiotherapy planning and delivery guidelines. Comparison of the planning benchmark data highlighted a plan specified using dose to medium as an outlier relative to those specified using dose to water. Further evaluation showed that the plan statistics reported for dose to medium were lower because the calculated dose in PTV regions that include bony cranium is lower than in brain. Specification of dose to water or medium remains a source of potential ambiguity, and it is essential that, as part of a multicentre trial, consideration is given to reported differences, particularly in the presence of bone. Evaluation of planning benchmark data as part of an RTQA programme has highlighted an important feature of HS IMRT dosimetry dependent on dose being specified to water or medium, informing the development and undertaking of HS IMRT as part of the HIPPO trial. Advances in knowledge: The potential clinical impact of differences between dose to medium and dose to water is demonstrated for the first time, in the setting of HS whole-brain radiotherapy.

  10. Meeting The Joint Commission's Dose Incident Identification and External Benchmarking Requirements Using the ACR's Dose Index Registry.

    PubMed

    Bohl, Michael A; Goswami, Roopa; Strassner, Brett; Stanger, Paula

    2016-08-01

    The purpose of this investigation was to evaluate the potential of using the ACR's Dose Index Registry® to meet The Joint Commission's requirements to identify incidents in which the radiation dose index from diagnostic CT examinations exceeded the protocol's expected dose index range. In total, 10,970 records in the Dose Index Registry were statistically analyzed to establish both an upper and a lower expected dose index for each protocol. All 2015 studies to date were then retrospectively reviewed to identify examinations whose total examination dose index exceeded the protocol's defined upper threshold. Each dose incident was then logged and reviewed per the new Joint Commission requirements. Facilities may leverage their participation in the ACR's Dose Index Registry to fully meet The Joint Commission's dose incident identification review and external benchmarking requirements. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
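
    As a sketch of such a registry-based workflow, the snippet below derives per-protocol expected dose-index ranges and flags exceedances. The percentile rule is an assumption; the abstract does not state which statistic defined the thresholds.

      # Assumed threshold rule: take the 2.5th/97.5th percentiles of each
      # protocol's historical dose indices as its expected range.
      import numpy as np

      def expected_range(dose_indices, lo=2.5, hi=97.5):
          return tuple(np.percentile(dose_indices, [lo, hi]))

      def flag_incidents(exams, thresholds):
          """exams: (protocol, dose_index) pairs; thresholds: protocol -> (lo, hi)."""
          return [(p, d) for p, d in exams
                  if p in thresholds and d > thresholds[p][1]]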

  11. The use of National Weather Service Data to Compute the Dose to the MEOI.

    PubMed

    Vickers, Linda

    2018-05-01

    The Turner method is the "benchmark method" for computing the stability class that is used to compute the X/Q (s m⁻³). The Turner method should be used to ascertain the validity of X/Q results determined by other methods. This paper used site-specific meteorological data obtained from the National Weather Service. The Turner method described herein is simple, quick, accurate, and transparent because all of the data, calculations, and results are visible for verification and validation against published literature.

  12. ORANGE: a Monte Carlo dose engine for radiotherapy.

    PubMed

    van der Zee, W; Hogenbirk, A; van der Marck, S C

    2005-02-21

    This study presents data for the verification of ORANGE, a fast MCNP-based dose engine for radiotherapy treatment planning. In order to verify the new algorithm, it has been benchmarked against DOSXYZ and against measurements. For the benchmarking, calculations were first done using the ICCR-XIII benchmark. Next, calculations were done with DOSXYZ and ORANGE in five different phantoms (one homogeneous, two with bone-equivalent inserts and two with lung-equivalent inserts). The calculations were done with two mono-energetic photon beams (2 MeV and 6 MeV) and two mono-energetic electron beams (10 MeV and 20 MeV). Comparison of the calculated data (from DOSXYZ and ORANGE) against measurements was possible only for a realistic 10 MV photon beam and a realistic 15 MeV electron beam in a homogeneous phantom. To compare calculated dose distributions with one another and against measurements, the concept of the confidence limit (CL) was used. This concept reduces the difference between two data sets to a single number, which gives the deviation for 90% of the dose distributions. Using this concept, it was found that ORANGE was always within the statistical bandwidth of both DOSXYZ and the measurements. The ICCR-XIII benchmark showed that ORANGE is seven times faster than DOSXYZ, a result comparable with other accelerated Monte Carlo dose systems when no variance reduction is used. As shown for XVMC, using variance reduction techniques has the potential for further acceleration. Using modern computer hardware, this brings the total calculation time for a dose distribution with 1.5% (statistical) accuracy within the clinical range (less than 10 min). This means that ORANGE can be a candidate dose engine for radiotherapy treatment planning.
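
    One plausible reading of the CL concept can be sketched as follows: the point-wise differences between two dose distributions are reduced to the single number bounding 90% of them. (A commonly cited alternative definition is |mean deviation| + 1.5 SD; the abstract does not specify which form was used.)

      # Reduce two dose distributions to one number: the 90th percentile of
      # the absolute point-wise differences, in percent of the maximum dose.
      import numpy as np

      def confidence_limit(dose_a, dose_b, coverage=90.0):
          a, b = np.asarray(dose_a, float), np.asarray(dose_b, float)
          diff_pct = 100.0 * np.abs(a - b) / b.max()
          return np.percentile(diff_pct, coverage)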

  13. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujii, K; UCLA School of Medicine, Los Angeles, CA; Bostani, M

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31st, 2014. Four of these scanners were used for inpatients, the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, the Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and for establishing useful benchmarks.
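
    The pooled-versus-protocol-specific contrast can be reproduced in a few lines of pandas; the records below are hypothetical stand-ins for a dose management export.

      import pandas as pd

      # Hypothetical extract from the dose management software.
      df = pd.DataFrame({
          "protocol": ["Routine Brain", "Routine Brain", "Sinus",
                       "Sinus", "CTA Brain"],
          "ctdi_vol": [51.0, 48.5, 24.1, 22.8, 38.9],  # mGy
      })

      print(df["ctdi_vol"].agg(["mean", "std", "min", "max"]))  # pooled: wide spread
      print(df.groupby("protocol")["ctdi_vol"]
              .agg(["mean", "std", "min", "max"]))              # per protocol: tight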

  14. Benchmark Dose for Urinary Cadmium based on a Marker of Renal Dysfunction: A Meta-Analysis

    PubMed Central

    Woo, Hae Dong; Chiu, Weihsueh A.; Jo, Seongil; Kim, Jeongseon

    2015-01-01

    Background Low doses of cadmium can cause adverse health effects. The benchmark dose (BMD) and the one-sided 95% lower confidence limit of the BMD (BMDL) used to derive points of departure for urinary cadmium exposure have been estimated in several previous studies, but the methods used to derive BMD and the estimated BMDs differ. Objectives We aimed to find the factors that affect BMD calculation in the general population, and to estimate the summary BMD for urinary cadmium using reported BMDs. Methods A meta-regression was performed and the pooled BMD/BMDL was estimated using studies reporting a BMD and BMDL, weighted by sample size, that were calculated from individual data based on markers of renal dysfunction. Results BMDs were highly heterogeneous across studies. Meta-regression analysis showed that a significant predictor of BMD was the cut-off point denoting an abnormal level. Using the 95th percentile as a cut-off, BMD5/BMDL5 estimates for a 5% benchmark response (BMR) of β2-microglobulinuria (β2-MG) were 6.18/4.88 μg/g creatinine in conventional quantal analysis and 3.56/3.13 μg/g creatinine in the hybrid approach, and BMD5/BMDL5 estimates for a 5% BMR of N-acetyl-β-d-glucosaminidase (NAG) were 10.31/7.61 μg/g creatinine in quantal analysis and 3.21/2.24 μg/g creatinine in the hybrid approach. The meta-regression thus showed that BMD and BMDL were significantly associated with the cut-off point, whereas the BMD calculation method did not significantly affect the results. The urinary cadmium BMDL5 of β2-MG was 1.9 μg/g creatinine in the lowest cut-off point group. Conclusion The BMD was significantly associated with the cut-off point defining the abnormal level of renal dysfunction markers. PMID:25970611

  15. Patient radiation doses in interventional cardiology in the U.S.: Advisory data sets and possible initial values for U.S. reference levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Donald L.; Hilohi, C. Michael; Spelic, David C.

    2012-10-15

    Purpose: To determine patient radiation doses from interventional cardiology procedures in the U.S. and to suggest possible initial values for U.S. benchmarks for patient radiation dose from selected interventional cardiology procedures [fluoroscopically guided diagnostic cardiac catheterization and percutaneous coronary intervention (PCI)]. Methods: Patient radiation dose metrics were derived from analysis of data from the 2008 to 2009 Nationwide Evaluation of X-ray Trends (NEXT) survey of cardiac catheterization. This analysis used deidentified data and did not require review by an IRB. Data from 171 facilities in 30 states were analyzed. The distributions (percentiles) of radiation dose metrics were determined for diagnostic cardiac catheterizations, PCI, and combined diagnostic and PCI procedures. Confidence intervals for these dose distributions were determined using bootstrap resampling. Results: Percentile distributions (advisory data sets) and possible preliminary U.S. reference levels (based on the 75th percentile of the dose distributions) are provided for cumulative air kerma at the reference point (Ka,r), cumulative air kerma-area product (PKA), fluoroscopy time, and number of cine runs. Dose distributions are sufficiently detailed to permit dose audits as described in National Council on Radiation Protection and Measurements Report No. 168. Fluoroscopy times are consistent with those observed in European studies, but PKA is higher in the U.S. Conclusions: Sufficient data exist to suggest possible initial benchmarks for patient radiation dose for certain interventional cardiology procedures in the U.S. Our data suggest that patient radiation dose in these procedures is not optimized in U.S. practice.
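
    The bootstrap step can be sketched directly; the 75th percentile is the statistic of interest because the proposed reference levels are based on it.

      # Percentile-bootstrap confidence interval for the 75th percentile of a
      # dose metric (e.g., PKA); mirrors the resampling described above.
      import numpy as np

      rng = np.random.default_rng(0)

      def bootstrap_ci_p75(doses, n_boot=5000, alpha=0.05):
          doses = np.asarray(doses, float)
          stats = [np.percentile(rng.choice(doses, doses.size, replace=True), 75)
                   for _ in range(n_boot)]
          return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])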

  16. The current state of knowledge on the use of the benchmark dose concept in risk assessment.

    PubMed

    Sand, Salomon; Victorin, Katarina; Filipsson, Agneta Falk

    2008-05-01

    This review deals with the current state of knowledge on the use of the benchmark dose (BMD) concept in health risk assessment of chemicals. The BMD method is an alternative to the traditional no-observed-adverse-effect level (NOAEL) and has been presented as a methodological improvement in the field of risk assessment. The BMD method has mostly been employed in the USA but is now receiving greater attention in Europe as well. The review presents a number of arguments in favor of the BMD relative to the NOAEL. In addition, it gives a detailed overview of the several procedures that have been suggested and applied for BMD analysis, for quantal as well as continuous data. For quantal data the BMD is generally defined as corresponding to an additional or extra risk of 5% or 10%. For continuous endpoints it is suggested that the BMD be defined as corresponding to a percentage change in response relative to background or relative to the dynamic range of response. Under such definitions, a 5% or 10% change can be considered the default. Besides how to define the BMD and its lower bound, the BMDL, the question of how to select the dose-response model used in BMD and BMDL determination is highlighted. Issues of study design and comparison of dose-response curves and BMDs are also covered. Copyright © 2007 John Wiley & Sons, Ltd.
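
    For quantal data, the extra-risk definition pins the BMD down as the dose at which extra risk over background reaches the benchmark response (BMR). A minimal sketch, using a log-logistic curve and parameter values chosen purely for illustration:

      # BMD from the extra-risk definition ER(d) = (P(d) - P(0)) / (1 - P(0)).
      from scipy.optimize import brentq

      def p_response(d, background=0.05, beta=1.2, d50=50.0):
          return background + (1 - background) / (1 + (d50 / max(d, 1e-12)) ** beta)

      def extra_risk(d, background=0.05):
          return (p_response(d, background) - background) / (1 - background)

      def bmd(bmr=0.10):
          return brentq(lambda d: extra_risk(d) - bmr, 1e-6, 1e4)

      print(round(bmd(), 1))  # dose giving 10% extra risk over background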

  17. High-energy neutron depth-dose distribution experiment.

    PubMed

    Ferenci, M S; Hertel, N E

    2003-01-01

    A unique set of high-energy neutron depth-dose benchmark experiments was performed at the Los Alamos Neutron Science Center/Weapons Neutron Research (LANSCE/WNR) complex. The experiments consisted of filtered neutron beams with energies up to 800 MeV impinging on a 30 × 30 × 30 cm³ liquid, tissue-equivalent phantom. The absorbed dose was measured in the phantom at various depths with tissue-equivalent ion chambers. This experiment is intended to serve as a benchmark for testing high-energy radiation transport codes for the international radiation protection community.

  18. Hand rub dose needed for a single disinfection varies according to product: a bias in benchmarking using indirect hand hygiene indicator.

    PubMed

    Girard, Raphaële; Aupee, Martine; Erb, Martine; Bettinger, Anne; Jouve, Alice

    2012-12-01

    The 3 ml volume currently used as the hand hygiene (HH) measure has been explored as the pertinent dose for an indirect indicator of HH compliance. A multicenter study was conducted to ascertain the required dose for different products. The average contact duration before drying was measured and compared with references. Effective hand coverage had to include the whole hand and the wrist. Two durations were chosen as points of reference: 30 s, as given by guidelines, and the duration validated by the European standard EN 1500. Each product was to be tested, using standardized procedures, by three nosocomial infection prevention teams, for three different doses (3, 2 and 1.5 ml). Data from 27 products and 1706 tests were analyzed. Depending on the product, the dose needed to ensure a 30-s contact duration in 75% of tests ranged from 2 ml to more than 3 ml, and the dose needed to ensure a contact duration exceeding the EN 1500 times in 75% of tests ranged from 1.5 ml to more than 3 ml. The interpretation is as follows: if different products are used, the volume utilized does not give an unbiased estimate of HH compliance. Other compliance evaluation methods remain necessary for efficient benchmarking. Copyright © 2012 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.
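
    The dose-determination rule reduces to a simple computation per product: the smallest tested volume for which at least 75% of tests reach the reference duration. A sketch with hypothetical data:

      # Smallest tested dose meeting the 30-s reference in >= 75% of tests.
      def required_dose(tests, reference_s=30.0, fraction=0.75):
          """tests: dict mapping dose (ml) -> list of contact durations (s)."""
          for dose in sorted(tests):
              durations = tests[dose]
              if sum(t >= reference_s for t in durations) / len(durations) >= fraction:
                  return dose
          return None  # even the largest tested dose was insufficient

      example = {1.5: [22, 25, 31, 24], 2.0: [28, 33, 35, 29], 3.0: [31, 36, 38, 33]}
      print(required_dose(example))  # -> 3.0 for this hypothetical product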

  19. What is a food and what is a medicinal product in the European Union? Use of the benchmark dose (BMD) methodology to define a threshold for "pharmacological action".

    PubMed

    Lachenmeier, Dirk W; Steffen, Christian; el-Atma, Oliver; Maixner, Sibylle; Löbell-Behrends, Sigrid; Kohl-Himmelseher, Matthias

    2012-11-01

    The decision criterion for the demarcation between foods and medicinal products in the EU is significant "pharmacological action". Based on six examples of substances with ambivalent status, the benchmark dose (BMD) method is evaluated as a way to provide a threshold for pharmacological action. Using significant dose-response models from literature clinical trial data or epidemiology, the BMD values were 63 mg/day for caffeine, 5 g/day for alcohol, 6 mg/day for lovastatin, 769 mg/day for glucosamine sulfate, 151 mg/day for Ginkgo biloba extract, and 0.4 mg/day for melatonin. The examples of caffeine and alcohol validate the approach, because intake above the BMD clearly exhibits pharmacological action. Nevertheless, due to uncertainties in dose-response modelling, as well as the need for additional uncertainty factors to account for differences in sensitivity within the human population, a "borderline range" on the dose-response curve remains. "Pharmacological action" has proven to be not very well suited as a binary decision criterion between foods and medicinal products. The European legislator should rethink the definition of medicinal products, as the current situation, based on complicated case-by-case decisions on pharmacological action, leads to an unregulated market flooded with potentially illegal food supplements. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Optimizing Radiation Doses for Computed Tomography Across Institutions: Dose Auditing and Best Practices.

    PubMed

    Demb, Joshua; Chu, Philip; Nelson, Thomas; Hall, David; Seibert, Anthony; Lamba, Ramit; Boone, John; Krishnam, Mayil; Cagnon, Christopher; Bostani, Maryam; Gould, Robert; Miglioretti, Diana; Smith-Bindman, Rebecca

    2017-06-01

    Radiation doses for computed tomography (CT) vary substantially across institutions. To assess the impact of institutional-level audit and collaborative efforts to share best practices on CT radiation doses across 5 University of California (UC) medical centers. In this before/after interventional study, we prospectively collected radiation dose metrics on all diagnostic CT examinations performed between October 1, 2013, and December 31, 2014, at 5 medical centers. Using data from January to March (baseline), we created audit reports detailing the distribution of radiation dose metrics for chest, abdomen, and head CT scans. In April, we shared reports with the medical centers and invited radiology professionals from the centers to a 1.5-day in-person meeting to review reports and share best practices. We calculated changes in mean effective dose 12 weeks before and after the audits and meeting, excluding a 12-week implementation period when medical centers could make changes. We compared the proportions of examinations exceeding previously published benchmarks at baseline and following the audit and meeting, and calculated changes in the proportion of examinations exceeding benchmarks. Of 158 274 diagnostic CT scans performed in the study period, 29 594 CT scans were performed in the 3 months before and 32 839 CT scans were performed 12 to 24 weeks after the audit and meeting. Reductions in mean effective dose were considerable for chest and abdomen. Mean effective dose for chest CT decreased from 13.2 to 10.7 mSv (18.9% reduction; 95% CI, 18.0%-19.8%). Reductions at individual medical centers ranged from 3.8% to 23.5%. The mean effective dose for abdominal CT decreased from 20.0 to 15.0 mSv (25.0% reduction; 95% CI, 24.3%-25.8%). The number of CT scans whose effective dose exceeded benchmarks fell considerably, by 48% for chest and 54% for abdomen. After the audit and meeting, head CT doses varied less, although some institutions increased and some decreased mean head CT doses and the proportion above benchmarks. Reviewing institutional doses and sharing dose-optimization best practices resulted in lower radiation doses for chest and abdominal CT and more consistent doses for head CT.

  1. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Paganetti, H

    2015-06-15

    Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving the BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-the-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on a structured grid that is maximally parallelizable, with discretization in energy, angle and space, and its cross section coefficients are derived from or directly imported from the Geant4 database. The physical processes taken into account are Compton scattering, the photoelectric effect and pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization combines the finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaked scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables analytical integration in the energy variable of the delta scattering kernel for elastic scattering, with reduced truncation error relative to the numerical integration of the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM alone, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation and benchmarked against Geant4. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).

  2. Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria

    PubMed Central

    Hunter, Eric J.

    2015-01-01

    Purpose Schoolteachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method Vibration dosimetry is reformulated with the inclusion of collision stress. Two methods of estimating amplitude of vocal-fold vibration are compared to capture variations in vocal intensity. Energy loss from collision is added to the energy-dissipation dose. An equal-energy-dissipation criterion is defined and used on the teacher corpus as a potential-damage risk criterion. Results Comparison of time-, cycle-, distance-, and energy-dose calculations for 57 teachers reveals a progression in information content in the ability to capture variations in duration, speaking pitch, and vocal intensity. The energy-dissipation dose carries the greatest promise in capturing excessive tissue stress and collision but also the greatest liability, due to uncertainty in parameters. Cycle dose is least correlated with the other doses. Conclusion As a first guide to damage risk in excessive voice use, the equal-energy-dissipation dose criterion can be used to structure trade-off relations between loudness, adduction, and duration of speech. PMID:26172434
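
    For orientation, the classical vibration-dose accumulators from the vocal dosimetry literature (time, cycle and distance doses) can be sketched as below; the energy-dissipation dose is omitted because its tissue constants, and the collision-stress term this paper adds, are study-specific. Frame values are assumed inputs from a dosimeter.

      # Classical vocal dose accumulators over voiced frames of length dt:
      #   time dose      D_t = sum(dt)               phonation time (s)
      #   cycle dose     D_c = sum(F0 * dt)          oscillation cycles
      #   distance dose  D_d = sum(4 * A * F0 * dt)  vocal-fold path length (m)
      def vocal_doses(frames, dt=0.03):
          """frames: iterable of (voiced: bool, f0_hz: float, amplitude_m: float)."""
          d_time = d_cycle = d_dist = 0.0
          for voiced, f0, amp in frames:
              if voiced:
                  d_time += dt
                  d_cycle += f0 * dt
                  d_dist += 4.0 * amp * f0 * dt
          return d_time, d_cycle, d_dist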

  3. Benchmarking B-Cell Epitope Prediction with Quantitative Dose-Response Data on Antipeptide Antibodies: Towards Novel Pharmaceutical Product Development

    PubMed Central

    Caoili, Salvador Eugenio C.

    2014-01-01

    B-cell epitope prediction can enable novel pharmaceutical product development. However, a mechanistically framed consensus has yet to emerge on benchmarking such prediction, thus presenting an opportunity to establish standards of practice that circumvent epistemic inconsistencies of casting the epitope prediction task as a binary-classification problem. As an alternative to conventional dichotomous qualitative benchmark data, quantitative dose-response data on antibody-mediated biological effects are more meaningful from an information-theoretic perspective in the sense that such effects may be expressed as probabilities (e.g., of functional inhibition by antibody) for which the Shannon information entropy (SIE) can be evaluated as a measure of informativeness. Accordingly, half-maximal biological effects (e.g., at median inhibitory concentrations of antibody) correspond to maximally informative data while undetectable and maximal biological effects correspond to minimally informative data. This applies to benchmarking B-cell epitope prediction for the design of peptide-based immunogens that elicit antipeptide antibodies with functionally relevant cross-reactivity. Presently, the Immune Epitope Database (IEDB) contains relatively few quantitative dose-response data on such cross-reactivity. Only a small fraction of these IEDB data is maximally informative, and many more of them are minimally informative (i.e., with zero SIE). Nevertheless, the numerous qualitative data in IEDB suggest how to overcome the paucity of informative benchmark data. PMID:24949474
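
    The informativeness argument can be stated in one formula: an antibody-mediated effect expressed as a probability p has Shannon information entropy H(p), which is maximal at half-maximal effects and zero for undetectable or saturated ones.

      # H(p) = -p*log2(p) - (1-p)*log2(1-p), in bits.
      import math

      def shannon_entropy(p):
          if p in (0.0, 1.0):
              return 0.0  # minimally informative: no effect, or maximal effect
          return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

      print(shannon_entropy(0.5))   # 1.0 bit: maximally informative (e.g., at IC50)
      print(shannon_entropy(0.99))  # ~0.08 bit: near-saturated effect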

  4. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum.

  5. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases.

    PubMed

    Al-Hallaq, Hania A; Chmura, Steven J; Salama, Joseph K; Lowenstein, Jessica R; McNulty, Susan; Galvin, James M; Followill, David S; Robinson, Clifford G; Pisansky, Thomas M; Winter, Kathryn A; White, Julia R; Xiao, Ying; Matuszak, Martha M

    2017-01-01

    The NRG-BR001 trial is the first National Cancer Institute-sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that the toxicity outcomes in the trial could be affected. Several benchmarks met the dose-volume histogram metrics but produced unacceptable plans owing to low conformity. Dissemination of a frequently-asked-questions document improved the approval rate at the first attempt. Benchmark credentialing was found to be a valuable tool for educating institutions about the protocol requirements. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Potential uncertainty reduction in model-averaged benchmark dose estimates informed by an additional dose study.

    PubMed

    Shao, Kan; Small, Mitchell J

    2011-10-01

    A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
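
    A compact sketch of the model-averaging step follows. The study fits models by MCMC and averages full posteriors; the BIC-weight shortcut below is a common approximation, not the paper's exact procedure, and the numbers are made up.

      # BMA via BIC-approximated posterior model weights:
      #   w_i proportional to exp(-BIC_i / 2), with BIC = -2*logL + k*ln(n).
      import numpy as np

      def bma_bmd(fits):
          """fits: list of (bmd_estimate, log_likelihood, n_params, n_obs)."""
          bic = np.array([-2 * ll + k * np.log(n) for _, ll, k, n in fits])
          w = np.exp(-0.5 * (bic - bic.min()))
          w /= w.sum()
          bmds = np.array([b for b, _, _, _ in fits])
          return float(w @ bmds), w

      # e.g., logistic vs quantal-linear fits of the same bioassay
      print(bma_bmd([(12.3, -54.2, 2, 50), (9.8, -53.1, 2, 50)]))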

  7. Using physiologically based pharmacokinetic modeling and benchmark dose methods to derive an occupational exposure limit for N-methylpyrrolidone.

    PubMed

    Poet, T S; Schlosser, P M; Rodriguez, C E; Parod, R J; Rodwell, D E; Kirman, C R

    2016-04-01

    The developmental effects of NMP are well studied in Sprague-Dawley rats following oral, inhalation, and dermal routes of exposure. Short-term and chronic occupational exposure limit (OEL) values were derived using an updated physiologically based pharmacokinetic (PBPK) model for NMP, along with benchmark dose modeling. Two suitable developmental endpoints were evaluated for human health risk assessment: (1) for acute exposures, the increased incidence of skeletal malformations, an effect noted only at oral doses that were toxic to the dam and fetus; and (2) for repeated exposures to NMP, changes in fetal/pup body weight. Where possible, data from multiple studies were pooled to increase the predictive power of the dose-response data sets. For the purposes of internal dose estimation, the window of susceptibility was estimated for each endpoint, and was used in the dose-response modeling. A point of departure value of 390 mg/L (in terms of peak NMP in blood) was calculated for skeletal malformations based on pooled data from oral and inhalation studies. Acceptable dose-response model fits were not obtained using the pooled data for fetal/pup body weight changes. These data sets were also assessed individually, from which the geometric mean value obtained from the inhalation studies (470 mg·hr/L) was used to derive the chronic OEL. A PBPK model for NMP in humans was used to calculate human equivalent concentrations corresponding to the internal dose point of departure values. Application of a net uncertainty factor of 20-21, which incorporates data-derived extrapolation factors, to the point of departure values yields short-term and chronic occupational exposure limit values of 86 and 24 ppm, respectively. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Comparative Benchmark Dose Modeling as a Tool to Make the First Estimate of Safe Human Exposure Levels to Lunar Dust

    NASA Technical Reports Server (NTRS)

    James, John T.; Lam, Chiu-wing; Scully, Robert R.

    2013-01-01

    Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. Habitats for exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. We have used a new technique we call Comparative Benchmark Dose Modeling to estimate safe exposure limits for lunar dust collected during the Apollo 14 mission.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous use of MC or deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform uses a Geant4-based "localized Monte Carlo" (LMC) method that isolates MC dose calculations to only those volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations "downstream" of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.

  10. SU-F-R-11: Designing Quality and Safety Informatics Through Implementation of a CT Radiation Dose Monitoring Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, JM; Samei, E; Departments of Physics, Electrical and Computer Engineering, and Biomedical Engineering, and Medical Physics Graduate Program, Duke University, Durham, NC

    2016-06-15

    Purpose: Recent legislative and accreditation requirements have driven rapid development and implementation of CT radiation dose monitoring solutions. Institutions must determine how to improve the quality, safety, and consistency of their clinical performance. The purpose of this work was to design a strategy and meaningful characterization of results from an in-house, clinically deployed dose monitoring solution. Methods: A dose monitoring platform was designed by our imaging physics group that focused on extracting protocol parameters, dose metrics, and patient demographics and size. Compared to most commercial solutions, which focus on individual exam alerts and global thresholds, the program sought to characterize overall consistency and targeted thresholds based on eight analytic interrogations. Those were based on explicit questions related to protocol application, national benchmarks, protocol- and size-specific dose targets, operational consistency, outliers, temporal trends, intra-system variability, and consistent use of electronic protocols. Using historical data since the start of 2013, 95% and 99% intervals were used to establish yellow and amber parameterized dose alert thresholds, respectively, as a function of protocol, scanner, and size. Results: Quarterly reports have been generated for three hospitals for 3 quarters of 2015, totaling 27880, 28502, and 30631 exams, respectively. Four adult and two pediatric protocols were higher than external institutional benchmarks. Four protocol dose levels were being inconsistently applied as a function of patient size. For the three hospitals, the minimum and maximum amber outlier percentages were [1.53%, 2.28%], [0.76%, 1.8%], and [0.94%, 1.17%], respectively. Compared with the electronic protocols, 10 protocols were found to be used with some inconsistency. Conclusion: Dose monitoring can satisfy requirements with global alert thresholds and patient dose records, but the real value is in optimizing patient-specific protocols, balancing the image quality trade-offs that dose-reduction strategies promise, and improving the performance and consistency of a clinical operation. Data plots that capture patient demographics and scanner performance demonstrate that value.

  11. Benchmarking of MCNP for calculating dose rates at an interim storage facility for nuclear waste.

    PubMed

    Heuel-Fabianek, Burkhard; Hille, Ralf

    2005-01-01

    During the operation of research facilities at Research Centre Jülich, Germany, nuclear waste is stored in drums and other vessels in an on-site interim storage building, which has concrete shielding at the side walls. Owing to the lack of a well-defined source, measured gamma spectra were unfolded to determine the photon flux on the surface of the containers. The dose rate simulation, including the effects of skyshine, using the Monte Carlo transport code MCNP is compared with the measured dosimetric data at some locations in the vicinity of the interim storage building. The MCNP data for direct radiation confirm the data calculated using a point-kernel method. However, a comparison of the modelled dose rates for direct radiation and skyshine with the measured data demonstrates the need for a more precise definition of the source. Both the measured and the modelled dose rates confirmed that the legal limits (<1 mSv a⁻¹) are met in the area outside the perimeter fence of the storage building to which members of the public have access. Using container surface data (gamma spectra) to define the source may be a useful tool for practical calculations, and additionally for benchmarking of computer codes, if the discussed critical aspects with respect to the source can be addressed adequately.

  12. Benchmarking and validation of a Geant4-SHADOW Monte Carlo simulation for dose calculations in microbeam radiation therapy.

    PubMed

    Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael

    2014-05-01

    Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.

  13. Multiscale benchmarking of drug delivery vectors.

    PubMed

    Summers, Huw D; Ware, Matthew J; Majithia, Ravish; Meissner, Kenith E; Godin, Biana; Rees, Paul

    2016-10-01

    Cross-system comparisons of drug delivery vectors are essential to ensure optimal design. An in-vitro experimental protocol is presented that separates the role of the delivery vector from that of its cargo in determining the cell response, thus allowing quantitative comparison of different systems. The technique is validated through benchmarking of the dose-response of human fibroblast cells exposed to the cationic molecule polyethylene imine (PEI), delivered as a free molecule and as a cargo on the surface of CdSe nanoparticles and silica microparticles. The exposure metrics are converted to a delivered dose, with the transport properties of the different-scale systems characterized by a delivery time, τ. The benchmarking highlights an agglomeration of the free PEI molecules into micron-sized clusters and identifies the metric determining cell death as the total number of PEI molecules presented to the cells, determined by the delivery vector dose and the surface density of the cargo. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Diagnostic reference levels of paediatric computed tomography examinations performed at a dedicated Australian paediatric hospital.

    PubMed

    Bibbo, Giovanni; Brown, Scott; Linke, Rebecca

    2016-08-01

    Diagnostic reference levels (DRL) for procedures involving ionizing radiation are important tools for optimizing radiation doses delivered to patients and for identifying cases where dose levels are unusually high. This is particularly important for paediatric patients undergoing computed tomography (CT) examinations, as these examinations are associated with relatively high doses. Paediatric CT studies performed at our institution from January 2010 to March 2014 were retrospectively analysed to determine the 75th and 95th percentiles of both the volume computed tomography dose index (CTDIvol) and dose-length product (DLP) for the most commonly performed studies, in order to: establish local diagnostic reference levels for paediatric CT examinations performed at our institution, benchmark our DRL against national and international published paediatric values, and determine the compliance of CT radiographers with established protocols. The derived local 75th percentile DRL were found to be acceptable when compared with those published by the Australian National Radiation Dose Register and two national children's hospitals, and at the international level with the National Reference Doses for the UK. The 95th percentiles of CTDIvol for the various CT examinations were found to be acceptable values for the CT scanner Dose-Check Notification. Benchmarking CT radiographers showed that they follow the set protocols for the various examinations without significant variations in the machine setting factors. The derivation of DRL has given us the tool to evaluate and improve the performance of our CT service through improved compliance and a reduction in radiation dose to our paediatric patients. We have also been able to benchmark our performance against similar national and international institutions. © 2016 The Royal Australian and New Zealand College of Radiologists.

  15. Absolute dose calibration of an X-ray system and dead time investigations of photon-counting techniques

    NASA Astrophysics Data System (ADS)

    Carpentieri, C.; Schwarz, C.; Ludwig, J.; Ashfaq, A.; Fiederle, M.

    2002-07-01

    High precision in the dose calibration of X-ray sources is required when counting and integrating methods are compared. The dose calibration for a dental X-ray tube was performed with dedicated dose calibration equipment (a dosimeter) as a function of exposure time and rate. Results were compared with a benchmark spectrum and agree within ±1.5%. Dead time investigations with the Medipix1 photon-counting chip (PCC) have been performed by rate variations. Two different types of dead time, paralysable and non-paralysable, are discussed. The dead time depends on the settings of the front-end electronics and is a function of signal height, which can lead to systematic errors. Dead time losses in excess of 30% have been found for the PCC at 200 kHz absorbed photons per pixel.
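
    The two dead-time models named above have standard closed forms, sketched below; the 1 μs dead time is illustrative, not a measured Medipix1 value.

      # Observed rate m for true rate n and per-event dead time tau:
      #   paralysable:      m = n * exp(-n * tau)
      #   non-paralysable:  m = n / (1 + n * tau)
      import math

      def observed_rate(n, tau, paralysable=True):
          return n * math.exp(-n * tau) if paralysable else n / (1 + n * tau)

      n, tau = 200e3, 1e-6  # 200 kHz per pixel, 1 us dead time (illustrative)
      for model in (True, False):
          m = observed_rate(n, tau, paralysable=model)
          print(f"{'paralysable' if model else 'non-paralysable'} loss: "
                f"{100 * (1 - m / n):.1f}%")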

  16. Experimental verification of a CT-based Monte Carlo dose-calculation method in heterogeneous phantoms.

    PubMed

    Wang, L; Lovelock, M; Chui, C S

    1999-12-01

    To further validate the Monte Carlo dose-calculation method [Med. Phys. 25, 867-878 (1998)] developed at the Memorial Sloan-Kettering Cancer Center, we have performed experimental verification in various inhomogeneous phantoms. The phantom geometries included simple layered slabs, a simulated bone column, a simulated missing-tissue hemisphere, and an anthropomorphic head geometry (Alderson Rando Phantom). The densities of the inhomogeneity range from 0.14 to 1.86 g/cm3, simulating both clinically relevant lunglike and bonelike materials. The data are reported as central axis depth doses, dose profiles, dose values at points of interest, such as points at the interface of two different media and in the "nasopharynx" region of the Rando head. The dosimeters used in the measurement included dosimetry film, TLD chips, and rods. The measured data were compared to that of Monte Carlo calculations for the same geometrical configurations. In the case of the Rando head phantom, a CT scan of the phantom was used to define the calculation geometry and to locate the points of interest. The agreement between the calculation and measurement is generally within 2.5%. This work validates the accuracy of the Monte Carlo method. While Monte Carlo, at present, is still too slow for routine treatment planning, it can be used as a benchmark against which other dose calculation methods can be compared.

  17. Concordance of transcriptional and apical benchmark dose levels for conazole-induced liver effects in mice.

    PubMed

    Bhat, Virunya S; Hester, Susan D; Nesnow, Stephen; Eastmond, David A

    2013-11-01

    The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time informs mode-of-action determinations and improves quantitative risk assessments. Previous global expression profiling identified a 330-probe cluster differentially expressed and commonly responsive to 3 hepatotumorigenic conazoles (cyproconazole, epoxiconazole, and propiconazole) at 30 days. Extended to 2 more conazoles (triadimefon and myclobutanil), the present assessment encompasses 4 tumorigenic and 1 nontumorigenic conazole. Transcriptional benchmark dose levels (BMDL(T)) were estimated for a subset of the cluster with dose-responsive behavior and a ≥ 5-fold increase or decrease in signal intensity at the highest dose. These genes primarily encompassed CAR/RXR activation, P450 metabolism, liver hypertrophy-glutathione depletion, LPS/IL-1-mediated inhibition of RXR, and NRF2-mediated oxidative stress pathways. Median BMDL(T) estimates from the subset were concordant (within a factor of 2.4) with apical benchmark doses (BMDL(A)) for increased liver weight at 30 days for the 5 conazoles. The 30-day median BMDL(T) estimates were within one-half order of magnitude of the chronic BMDL(A) for hepatocellular tumors. Potency differences seen in the dose-responsive transcription of certain phase II metabolism, bile acid detoxification, and lipid oxidation genes mirrored each conazole's tumorigenic potency. The 30-day BMDL(T) corresponded to tumorigenic potency on a milligram per kilogram day basis with cyproconazole > epoxiconazole > propiconazole > triadimefon > myclobutanil (nontumorigenic). These results support the utility of measuring short-term gene expression changes to inform quantitative risk assessments for long-term exposures.

  18. Benchmark concentrations for methyl mercury obtained from the 9-year follow-up of the Seychelles Child Development Study.

    PubMed

    van Wijngaarden, Edwin; Beck, Christopher; Shamlaye, Conrad F; Cernichiari, Elsa; Davidson, Philip W; Myers, Gary J; Clarkson, Thomas W

    2006-09-01

    Methyl mercury (MeHg) is highly toxic to the developing nervous system. Human exposure is mainly from fish consumption, since small amounts are present in all fish. Findings of developmental neurotoxicity following high-level prenatal exposure to MeHg raised the question of whether children whose mothers consumed fish contaminated with background levels during pregnancy are at an increased risk of impaired neurological function. Benchmark doses determined from studies in New Zealand, the Faroe Islands, and the Seychelles indicate that a level of 4-25 parts per million (ppm) measured in maternal hair may carry a risk to the infant. However, there are numerous sources of uncertainty that could affect the derivation of benchmark doses, and it is crucial to continue to investigate the most appropriate derivation of safe consumption levels. Earlier, we published the findings from benchmark analyses applied to the data collected on the Seychelles main cohort at the 66-month follow-up period. Here, we expand on the main cohort analyses by determining the benchmark doses (BMD) of MeHg level in maternal hair based on 643 Seychellois children for whom 26 different neurobehavioral endpoints were measured at 9 years of age. Dose-response models applied to these continuous endpoints incorporated a variety of covariates and included the k-power model, the Weibull model, and the logistic model. The average 95% lower confidence limit of the BMD (BMDL) across all 26 endpoints varied from 20.1 ppm (range = 17.2-22.5) for the logistic model to 20.4 ppm (range = 17.9-23.0) for the k-power model. These estimates are somewhat lower than those obtained after 66 months of follow-up. The Seychelles Child Development Study continues to provide a firm scientific basis for the derivation of safe levels of MeHg consumption.

  19. Evaluation of triclosan in Minnesota lakes and rivers: Part II - human health risk assessment.

    PubMed

    Yost, Lisa J; Barber, Timothy R; Gentry, P Robinan; Bock, Michael J; Lyndall, Jennifer L; Capdevielle, Marie C; Slezak, Brian P

    2017-08-01

    Triclosan, an antimicrobial compound found in consumer products, has been detected in low concentrations in Minnesota municipal wastewater treatment plant (WWTP) effluent. This assessment evaluates potential health risks for exposure of adults and children to triclosan in Minnesota surface water, sediments, and fish. Potential exposures via fish consumption are considered for recreational or subsistence-level consumers. This assessment uses two chronic oral toxicity benchmarks, which bracket other available toxicity values. The first benchmark is a lower bound on a benchmark dose associated with a 10% risk (BMDL10) of 47 mg per kilogram per day (mg/kg-day) for kidney effects in hamsters. This value was identified as the most sensitive endpoint and species in a review by Rodricks et al. (2010) and is used herein to derive an estimated reference dose (RfD(Rodricks)) of 0.47 mg/kg-day. The second benchmark is a reference dose (RfD) of 0.047 mg/kg-day derived from a no observed adverse effect level (NOAEL) of 10 mg/kg-day for hepatic and hematopoietic effects in mice (Minnesota Department of Health [MDH] 2014). Based on conservative assumptions regarding human exposures to triclosan, calculated risk estimates are far below levels of concern. These estimates are likely to overestimate risks for potential receptors, particularly because sample locations were generally biased towards known discharges (i.e., WWTP effluent). Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Application of the first collision source method to CSNS target station shielding calculation

    NASA Astrophysics Data System (ADS)

    Zheng, Ying; Zhang, Bin; Chen, Meng-Teng; Zhang, Liang; Cao, Bo; Chen, Yi-Xue; Yin, Wen; Liang, Tian-Jiao

    2016-04-01

    Ray effects are an inherent problem of the discrete ordinates method. RAY3D, a functional module of ARES, which is a discrete ordinates code system, employs a semi-analytic first collision source method to mitigate ray effects. This method decomposes the flux into uncollided and collided components, and then calculates them with an analytical method and discrete ordinates method respectively. In this article, RAY3D is validated by the Kobayashi benchmarks and applied to the neutron beamline shielding problem of China Spallation Neutron Source (CSNS) target station. The numerical results of the Kobayashi benchmarks indicate that the solutions of DONTRAN3D with RAY3D agree well with the Monte Carlo solutions. The dose rate at the end of the neutron beamline is less than 10.83 μSv/h in the CSNS target station neutron beamline shutter model. RAY3D can effectively mitigate the ray effects and obtain relatively reasonable results. Supported by Major National S&T Specific Program of Large Advanced Pressurized Water Reactor Nuclear Power Plant (2011ZX06004-007), National Natural Science Foundation of China (11505059, 11575061), and the Fundamental Research Funds for the Central Universities (13QN34).
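
    The decomposition that RAY3D performs can be illustrated at its first step: the uncollided flux of an isotropic point source in a uniform medium has a closed analytic form. Below is a minimal Python sketch under assumed source strength and cross section (the collided component, which the discrete ordinates sweep then resolves, is not shown):

        import numpy as np

        def uncollided_flux(r, source_strength, sigma_t):
            """Uncollided scalar flux of an isotropic point source in a uniform
            medium: phi_u(r) = S * exp(-sigma_t * r) / (4 * pi * r**2)."""
            r = np.asarray(r, dtype=float)
            return source_strength * np.exp(-sigma_t * r) / (4.0 * np.pi * r**2)

        # Placeholder values: 1e6 n/s source, total cross section 0.2 /cm.
        # The first collision source fed to the SN sweep would then be
        # sigma_s * phi_u in each mesh cell (isotropic scattering assumed).
        r = np.array([1.0, 5.0, 10.0, 50.0])   # cm
        print(uncollided_flux(r, source_strength=1.0e6, sigma_t=0.2))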

  1. Comparative risk assessment of alcohol, tobacco, cannabis and other illicit drugs using the margin of exposure approach.

    PubMed

    Lachenmeier, Dirk W; Rehm, Jürgen

    2015-01-30

    A comparative risk assessment of drugs including alcohol and tobacco using the margin of exposure (MOE) approach was conducted. The MOE is defined as the ratio between a toxicological threshold (benchmark dose) and the estimated human intake. Median lethal dose values from animal experiments were used to derive the benchmark dose. The human intake was calculated for individual scenarios and population-based scenarios. The MOE was calculated using probabilistic Monte Carlo simulations. The benchmark dose values ranged from 2 mg/kg bodyweight for heroin to 531 mg/kg bodyweight for alcohol (ethanol). For individual exposure, the four substances alcohol, nicotine, cocaine, and heroin fall into the "high risk" category with MOE < 10; the rest of the compounds, except THC, fall into the "risk" category with MOE < 100. On a population scale, only alcohol would fall into the "high risk" category, and cigarette smoking would fall into the "risk" category, while all other agents (opiates, cocaine, amphetamine-type stimulants, ecstasy, and benzodiazepines) had MOEs > 100, and cannabis had an MOE > 10,000. The toxicological MOE approach validates epidemiological and social science-based drug ranking approaches, especially in regard to the positions of alcohol and tobacco (high risk) and cannabis (low risk).
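
    The MOE arithmetic above lends itself to a compact probabilistic prototype. The Python sketch below uses invented lognormal distributions for the benchmark dose and the intake, purely to show the mechanics of a Monte Carlo MOE calculation; the paper's actual distributional choices and parameter values are not reproduced:

        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000  # Monte Carlo draws

        # Hypothetical distributions (illustrative only, not the paper's values):
        # benchmark dose and daily intake, both in mg/kg bodyweight, as lognormals.
        bmd    = rng.lognormal(mean=np.log(531.0), sigma=0.3, size=N)  # e.g., ethanol BMD
        intake = rng.lognormal(mean=np.log(10.0),  sigma=0.5, size=N)  # individual-scenario intake

        moe = bmd / intake  # margin of exposure, draw by draw

        # Summarize: median MOE and the fraction of draws in each risk band.
        print(f"median MOE: {np.median(moe):.1f}")
        print(f"P(MOE < 10)  'high risk': {np.mean(moe < 10):.3f}")
        print(f"P(MOE < 100) 'risk':      {np.mean(moe < 100):.3f}")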

  2. Comparative risk assessment of tobacco smoke constituents using the margin of exposure approach: the neglected contribution of nicotine

    PubMed Central

    Baumung, Claudia; Rehm, Jürgen; Franke, Heike; Lachenmeier, Dirk W.

    2016-01-01

    Nicotine was not included in previous efforts to identify the most important toxicants of tobacco smoke. A health risk assessment of nicotine for smokers of cigarettes was conducted using the margin of exposure (MOE) approach, and results were compared to literature MOEs of various other tobacco toxicants. The MOE is defined as the ratio between a toxicological threshold (benchmark dose) and the estimated human intake. Dose-response modelling of human and animal data was used to derive the benchmark dose. The MOE was calculated using probabilistic Monte Carlo simulations for daily cigarette smokers. Benchmark dose values ranged from 0.004 mg/kg bodyweight for symptoms of intoxication in children to 3 mg/kg bodyweight for mortality in animals; MOEs ranged from below 1 up to 7.6, indicating a considerable consumer risk. The magnitude of these MOEs is similar to that of other tobacco toxicants of high concern for adverse health effects, such as acrolein or formaldehyde. Owing to the lack of toxicological data, in particular relating to cancer, long-term animal testing studies for nicotine are urgently necessary. There is an immediate need for action concerning the risk of nicotine, also with regard to electronic cigarettes and smokeless tobacco. PMID:27759090

  3. Comparative risk assessment of alcohol, tobacco, cannabis and other illicit drugs using the margin of exposure approach

    PubMed Central

    Lachenmeier, Dirk W.; Rehm, Jürgen

    2015-01-01

    A comparative risk assessment of drugs including alcohol and tobacco using the margin of exposure (MOE) approach was conducted. The MOE is defined as the ratio between a toxicological threshold (benchmark dose) and the estimated human intake. Median lethal dose values from animal experiments were used to derive the benchmark dose. The human intake was calculated for individual scenarios and population-based scenarios. The MOE was calculated using probabilistic Monte Carlo simulations. The benchmark dose values ranged from 2 mg/kg bodyweight for heroin to 531 mg/kg bodyweight for alcohol (ethanol). For individual exposure, the four substances alcohol, nicotine, cocaine, and heroin fall into the “high risk” category with MOE < 10; the rest of the compounds, except THC, fall into the “risk” category with MOE < 100. On a population scale, only alcohol would fall into the “high risk” category, and cigarette smoking would fall into the “risk” category, while all other agents (opiates, cocaine, amphetamine-type stimulants, ecstasy, and benzodiazepines) had MOEs > 100, and cannabis had an MOE > 10,000. The toxicological MOE approach validates epidemiological and social science-based drug ranking approaches, especially in regard to the positions of alcohol and tobacco (high risk) and cannabis (low risk). PMID:25634572

  4. Alternative stitching method for massively parallel e-beam lithography

    NASA Astrophysics Data System (ADS)

    Brandt, Pieter; Tranquillin, Céline; Wieland, Marco; Bayle, Sébastien; Milléquant, Matthieu; Renault, Guillaume

    2015-03-01

    In this study, a novel stitching method, distinct from Soft Edge (SE) and Smart Boundary (SB), is introduced and benchmarked against SE. The method is based on locally enhanced Exposure Latitude at no cost in throughput, exploiting the fact that the two beams that pass through the stitching region can together deposit up to 2x the nominal dose. The method requires a complex Proximity Effect Correction that takes a preset stitching dose profile into account. On a Metal clip at a minimum half-pitch of 32 nm, for MAPPER FLX 1200 tool specifications, the novel stitching method effectively mitigates Beam to Beam (B2B) position errors such that they do not induce an increase in CD Uniformity (CDU). In other words, the same CDU can be realized inside the stitching region as outside it. For the SE method, the CDU inside is 0.3 nm higher than outside the stitching region. The 5 nm direct overlay impact from B2B position errors cannot be reduced by a stitching strategy.

  5. A Web-Based System for Bayesian Benchmark Dose Estimation.

    PubMed

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with the U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations. The BBMD system is a useful alternative tool for estimating BMD, with additional functionalities for BMD analysis based on the most recent research. Most importantly, the BBMD can incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend toward probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
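
    As a rough illustration of the Bayesian machinery described, the sketch below fits a two-parameter logistic model to an invented quantal data set with a plain random-walk Metropolis sampler and reads a BMD posterior (10% extra risk) off the chain. The model, priors, proposal scales, and data are all assumptions made for illustration; BBMD's own models, priors, and samplers differ:

        import numpy as np
        from scipy.special import expit, logit

        # Hypothetical quantal dataset: dose, number tested, number responding.
        dose = np.array([0.0, 10.0, 30.0, 100.0])
        n    = np.array([50, 50, 50, 50])
        y    = np.array([2, 6, 15, 38])

        def log_post(theta):
            a, b = theta
            if b <= 0:                       # require an increasing dose-response
                return -np.inf
            p = np.clip(expit(a + b * dose), 1e-12, 1 - 1e-12)
            ll = np.sum(y * np.log(p) + (n - y) * np.log(1 - p))  # binomial likelihood
            lp = -0.5 * (a**2 / 10.0**2 + b**2 / 1.0**2)          # vague normal priors (assumed)
            return ll + lp

        # Plain random-walk Metropolis sampler.
        rng = np.random.default_rng(1)
        theta = np.array([-3.0, 0.03])
        lp_cur = log_post(theta)
        chain = []
        for _ in range(20000):
            prop = theta + rng.normal(0.0, [0.15, 0.002])
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp_cur:
                theta, lp_cur = prop, lp_prop
            chain.append(theta)
        a, b = np.array(chain[5000:]).T       # discard burn-in

        # BMD for 10% extra risk, evaluated draw by draw over the posterior.
        p0 = expit(a)
        bmd = (logit(p0 + 0.10 * (1 - p0)) - a) / b
        print(f"posterior median BMD = {np.median(bmd):.1f}")
        print(f"BMDL (5th percentile) = {np.percentile(bmd, 5):.1f}")

    Reading the BMDL as a posterior percentile, rather than a profile-likelihood bound, is what makes the distributional output of a Bayesian system natural.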

  6. Development of risk-based nanomaterial groups for occupational exposure control

    NASA Astrophysics Data System (ADS)

    Kuempel, E. D.; Castranova, V.; Geraci, C. L.; Schulte, P. A.

    2012-09-01

    Given the almost limitless variety of nanomaterials, it will be virtually impossible to assess the possible occupational health hazard of each nanomaterial individually. The development of science-based hazard and risk categories for nanomaterials is needed for decision-making about exposure control practices in the workplace. A possible strategy would be to select representative (benchmark) materials from various mode of action (MOA) classes, evaluate the hazard and develop risk estimates, and then apply a systematic comparison of new nanomaterials with the benchmark materials in the same MOA class. Poorly soluble particles are used here as an example to illustrate quantitative risk assessment methods for possible benchmark particles and occupational exposure control groups, given mode of action and relative toxicity. Linking such benchmark particles to specific exposure control bands would facilitate the translation of health hazard and quantitative risk information to the development of effective exposure control practices in the workplace. A key challenge is obtaining sufficient dose-response data, based on standard testing, to systematically evaluate the nanomaterials' physical-chemical factors influencing their biological activity. Categorization processes involve both science-based analyses and default assumptions in the absence of substance-specific information. Utilizing data and information from related materials may facilitate initial determinations of exposure control systems for nanomaterials.

  7. Transcriptomic Dose-Response Analysis for Mode of Action ...

    EPA Pesticide Factsheets

    Microarray and RNA-seq technologies can play an important role in assessing the health risks associated with environmental exposures. The utility of gene expression data to predict hazard has been well documented. Early toxicogenomics studies used relatively high, single doses with minimal replication; thus, they were not useful for understanding health risks at environmentally relevant doses. Until the past decade, application of toxicogenomics in dose-response assessment and determination of chemical mode of action has been limited. New transcriptomic biomarkers have evolved to detect chemical hazards in multiple tissues, together with pathway methods to study biological effects across the full dose-response range and critical time course. Comprehensive low-dose datasets are now available, and with the use of transcriptomic benchmark dose estimation techniques within a mode-of-action framework, the ability to incorporate informative genomic data into human health risk assessment has substantially improved. The key advantage of applying transcriptomic technology to risk assessment is both the sensitivity and the comprehensive examination of direct and indirect molecular changes that lead to adverse outcomes. Book chapter on future applications of toxicogenomics technologies for MoA determination and risk assessment.

  8. A tiered asthma hazard characterization and exposure assessment approach for evaluation of consumer product ingredients.

    PubMed

    Maier, Andrew; Vincent, Melissa J; Parker, Ann; Gadagbui, Bernard K; Jayjock, Michael

    2015-12-01

    Asthma is a complex syndrome with significant consequences for those affected. The number of individuals affected is growing, although the reasons for the increase are uncertain. Ensuring the effective management of potential exposures follows from substantial evidence that exposure to some chemicals can increase the likelihood of asthma responses. We have developed a safety assessment approach tailored to the screening of asthma risks from residential consumer product ingredients as a proactive risk management tool. Several key features of the proposed approach advance the assessment resources often used for asthma issues. First, a quantitative health benchmark for asthma or related endpoints (irritation and sensitization) is provided that extends qualitative hazard classification methods. Second, a parallel structure is employed to include dose-response methods for asthma endpoints and methods for scenario specific exposure estimation. The two parallel tracks are integrated in a risk characterization step. Third, a tiered assessment structure is provided to accommodate different amounts of data for both the dose-response assessment (i.e., use of existing benchmarks, hazard banding, or the threshold of toxicological concern) and exposure estimation (i.e., use of empirical data, model estimates, or exposure categories). Tools building from traditional methods and resources have been adapted to address specific issues pertinent to asthma toxicology (e.g., mode-of-action and dose-response features) and the nature of residential consumer product use scenarios (e.g., product use patterns and exposure durations). A case study for acetic acid as used in various sentinel products and residential cleaning scenarios was developed to test the safety assessment methodology. In particular, the results were used to refine and verify relationships among tiered approaches such that each lower data tier in the approach provides a similar or greater margin of safety for a given scenario. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Development of a flattening filter free multiple source model for use as an independent, Monte Carlo, dose calculation, quality assurance tool for clinical trials.

    PubMed

    Faught, Austin M; Davidson, Scott E; Popple, Richard; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core-Houston (IROC-H) Quality Assurance Center (formerly the Radiological Physics Center) has reported varying levels of compliance from their anthropomorphic phantom auditing program. IROC-H studies have suggested that one source of disagreement between institution-submitted calculated doses and measurement is the accuracy of the institution's treatment planning system dose calculations and the heterogeneity corrections used. In order to audit this step of the radiation therapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Varian flattening filter free (FFF) 6 MV and FFF 10 MV therapeutic x-ray beams were commissioned based on central axis depth dose data from a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open-field measurements in a water tank for field sizes ranging from 3 × 3 cm² to 40 × 40 cm². The models were then benchmarked against IROC-H's anthropomorphic head and neck phantom and lung phantom measurements. Validation results, assessed with a ±2%/2 mm gamma criterion, showed average agreement of 99.9% and 99.0% for central axis depth dose data for the FFF 6 MV and FFF 10 MV models, respectively. Dose profile agreement using the same evaluation technique averaged 97.8% and 97.9% for the respective models. Phantom benchmarking comparisons were evaluated with a ±3%/2 mm gamma criterion, and agreement averaged 90.1% and 90.8% for the respective models. Multiple source models for Varian FFF 6 MV and FFF 10 MV beams have been developed, validated, and benchmarked for inclusion in an independent dose calculation quality assurance tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
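
    The ±2%/2 mm and ±3%/2 mm criteria refer to gamma analysis. A minimal 1D sketch of a global gamma computation follows (brute-force search, dose normalized to the reference maximum; clinical tools add interpolation and further refinements):

        import numpy as np

        def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
            """Global 1D gamma index: dd is the dose criterion as a fraction of
            the maximum reference dose, dta the distance criterion in mm."""
            d_norm = dd * ref_dose.max()
            gamma = np.empty_like(ref_dose)
            for i, (r, d) in enumerate(zip(ref_pos, ref_dose)):
                dist2 = ((eval_pos - r) / dta) ** 2
                dose2 = ((eval_dose - d) / d_norm) ** 2
                gamma[i] = np.sqrt((dist2 + dose2).min())
            return gamma

        # Toy example: evaluated curve slightly shifted and scaled.
        x = np.linspace(0.0, 100.0, 201)                 # positions in mm
        ref = np.exp(-((x - 50.0) / 15.0) ** 2)          # reference profile
        ev  = 1.01 * np.exp(-((x - 50.5) / 15.0) ** 2)   # evaluated profile

        g = gamma_1d(x, ref, x, ev)
        print(f"gamma passing rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")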

  10. BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets

    EPA Science Inventory

    Regulatory agencies increasingly apply benchmark dose (BMD) modeling to determine points of departure in human risk assessments. BMDExpress applies BMD modeling to transcriptomics datasets and groups genes to biological processes and pathways for rapid assessment of doses at whic...

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.

  12. Benchmark Dose Software (BMDS) Development and ...

    EPA Pesticide Factsheets

    This report is intended to provide an overview of beta version 1.0 of the implementation of a model of repeated measures data referred to as the Toxicodiffusion model. The implementation described here represents the first steps towards integration of the Toxicodiffusion model into the EPA benchmark dose software (BMDS). This version runs from within BMDS 2.0 using an option screen for model selection, as is done for other models in the BMDS 2.0 suite.

  13. Analytical dose evaluation of neutron and secondary gamma-ray skyshine from nuclear facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, K.; Nakamura, T.

    1985-11-01

    The skyshine dose distributions of neutrons and secondary gamma rays were calculated systematically using the Monte Carlo method for distances up to 2 km from the source. The energy of the source neutrons ranged from thermal to 400 MeV; their emission angle from 0 to 90 deg from the vertical was treated with a distribution of the direction cosine containing five equal intervals. Calculated dose distributions D(r) were fitted to the formula D(r) = Q·exp(-r/λ)/r, where Q and λ are slowly varying functions of energy. This formula was applied to benchmark problems of neutron skyshine from fission, fusion, and accelerator facilities, and good agreement was achieved. The formula should be quite useful for shielding designs of various nuclear facilities.
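
    The fitted formula is simple enough to evaluate directly. A small Python sketch with placeholder Q and λ values (the paper tabulates them as slowly varying functions of source energy and emission angle, so these numbers are illustrative only):

        import numpy as np

        def skyshine_dose(r, Q, lam):
            """Skyshine dose D(r) = Q * exp(-r / lambda) / r, r in meters."""
            r = np.asarray(r, dtype=float)
            return Q * np.exp(-r / lam) / r

        # Placeholder parameters for illustration; the fitted Q and lambda
        # depend on the source energy and emission angle distribution.
        r = np.array([50.0, 100.0, 500.0, 1000.0, 2000.0])
        print(skyshine_dose(r, Q=1.0e3, lam=600.0))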

  14. Rationale of technical requirements for NRG-BR001: The first NCI-sponsored trial of SBRT for the treatment of multiple metastases.

    PubMed

    Al-Hallaq, Hania A; Chmura, Steven; Salama, Joseph K; Winter, Kathryn A; Robinson, Clifford G; Pisansky, Thomas M; Borges, Virginia; Lowenstein, Jessica R; McNulty, Susan; Galvin, James M; Followill, David S; Timmerman, Robert D; White, Julia R; Xiao, Ying; Matuszak, Martha M

    In 2014, the NRG Oncology Group initiated the first National Cancer Institute-sponsored, phase 1 clinical trial of stereotactic body radiation therapy (SBRT) for the treatment of multiple metastases in multiple organ sites (BR001; NCT02206334). The primary endpoint is to test the safety of SBRT for the treatment of 2 to 4 lesions in several anatomic sites in a multi-institutional setting. Because of the technical challenges inherent to treating multiple lesions as their spatial separation decreases, we present the technical requirements for NRG-BR001 and the rationale for their selection. Patients with controlled primary tumors of breast, non-small cell lung, or prostate are eligible if they have 2 to 4 metastases distributed among 7 extracranial anatomic locations throughout the body. Prescription and organ-at-risk doses were determined by expert consensus. Credentialing requirements include (1) irradiating the Imaging and Radiation Oncology Core phantom with SBRT, (2) submitting image guided radiation therapy case studies, and (3) planning the benchmark case. Guidelines for navigating challenging planning cases, including assessing composite dose, are discussed. Dosimetric planning to multiple lesions receiving differing doses (45-50 Gy) and fractionation (3-5 fractions) while irradiating the same organs at risk is discussed, particularly for metastases in close proximity (≤5 cm). The benchmark case was selected to demonstrate the planning tradeoffs required to satisfy protocol requirements for 2 nearby lesions. Examples of passing benchmark plans exhibited a large variability in plan conformity. NRG-BR001 was developed using expert consensus on multiple issues, from the dose fractionation regimen to the minimum image guided radiation therapy guidelines. Credentialing was tied to the task rather than the anatomic site to reduce its burden. Every effort was made to include a variety of delivery methods to reflect current SBRT technology. Although some simplifications were adopted, the successful completion of this trial will inform future designs of both national and institutional trials and would allow immediate clinical adoption of SBRT trials for oligometastases. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  15. BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets(SoTC)

    EPA Science Inventory

    Background: Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to estimate acceptable exposure...

  16. BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets (STC symposium)

    EPA Science Inventory

    Background: Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to estimate acceptable exposure...

  17. Altered operant responding for motor reinforcement and the determination of benchmark doses following perinatal exposure to low-level 2,3,7,8-tetrachlorodibenzo-p-dioxin.

    PubMed

    Markowski, V P; Zareba, G; Stern, S; Cox, C; Weiss, B

    2001-06-01

    Pregnant Holtzman rats were exposed to a single oral dose of 0, 20, 60, or 180 ng/kg 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) on the 18th day of gestation. Their adult female offspring were trained to respond on a lever for brief opportunities to run in specially designed running wheels. Once they had begun responding on a fixed-ratio 1 (FR1) schedule of reinforcement, the fixed-ratio requirement for lever pressing was increased at five-session intervals to values of FR2, FR5, FR10, FR20, and FR30. We examined vaginal cytology after each behavior session to track estrous cyclicity. Under each of the FR values, perinatal TCDD exposure produced a significant dose-related reduction in the number of earned opportunities to run, the lever response rate, and the total number of revolutions in the wheel. Estrous cyclicity was not affected. Because of the consistent dose-response relationship at all FR values, we used the behavioral data to calculate benchmark doses based on displacements from modeled zero-dose performance of 1% (ED(01)) and 10% (ED(10)), as determined by a quadratic fit to the dose-response function. The mean ED(10) benchmark dose for earned run opportunities was 10.13 ng/kg with a 95% lower bound of 5.77 ng/kg. The corresponding ED(01) was 0.98 ng/kg with a 95% lower bound of 0.83 ng/kg. The mean ED(10) for total wheel revolutions was calculated as 7.32 ng/kg with a 95% lower bound of 5.41 ng/kg. The corresponding ED(01) was 0.71 ng/kg with a 95% lower bound of 0.60 ng/kg. These values should be viewed from the perspective of current human body burdens, whose average value, based on TCDD toxic equivalents, has been calculated as 13 ng/kg.
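
    The benchmark calculation described amounts to fitting a quadratic to the dose-response data and solving for the dose that displaces the modeled zero-dose performance by 1% or 10%. A sketch with invented group means (central estimates only; the study's 95% lower bounds require confidence limits on the fit, which are omitted here):

        import numpy as np

        # Hypothetical group means: earned run opportunities vs. TCDD dose (ng/kg).
        dose = np.array([0.0, 20.0, 60.0, 180.0])
        resp = np.array([100.0, 88.0, 70.0, 45.0])

        # Quadratic fit, as in the benchmark-dose calculation described above.
        c2, c1, c0 = np.polyfit(dose, resp, deg=2)

        def ed(frac):
            """Dose at which the fitted response drops by `frac` from zero dose."""
            target = (1.0 - frac) * c0
            roots = np.roots([c2, c1, c0 - target])
            real = roots[np.isreal(roots)].real
            return real[real > 0].min()     # smallest positive root

        print(f"ED01 = {ed(0.01):.2f} ng/kg, ED10 = {ed(0.10):.2f} ng/kg")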

  18. Development of a chronic noncancer oral reference dose and drinking water screening level for sulfolane using benchmark dose modeling.

    PubMed

    Thompson, Chad M; Gaylor, David W; Tachovsky, J Andrew; Perry, Camarie; Carakostas, Michael C; Haws, Laurie C

    2013-12-01

    Sulfolane is a widely used industrial solvent that is often used for gas treatment (sour gas sweetening; hydrogen sulfide removal from shale and coal processes, etc.) and in the manufacture of polymers and electronics, and it may be found in pharmaceuticals as a residual solvent from the manufacturing processes. Sulfolane is considered a high production volume chemical, with worldwide production around 18 000-36 000 tons per year. Given that sulfolane has been detected as a contaminant in groundwater, an important potential route of exposure is tap water ingestion. Because there are currently no federal drinking water standards for sulfolane in the USA, we developed a noncancer oral reference dose (RfD) based on benchmark dose modeling, as well as a tap water screening value that is protective of ingestion. Review of the available literature suggests that sulfolane is not likely to be mutagenic, clastogenic or carcinogenic, or to pose reproductive or developmental health risks except perhaps at very high exposure concentrations. RfD values derived using benchmark dose modeling were 0.01-0.04 mg/kg per day, although modeling of developmental endpoints resulted in higher values, approximately 0.4 mg/kg per day. The lowest, most conservative RfD of 0.01 mg/kg per day was based on reduced white blood cell counts in female rats. This RfD was used to develop a tap water screening level that is protective of ingestion, viz. 365 µg/L. It is anticipated that these values, along with the hazard identification and dose-response modeling described herein, should be informative for risk assessors and regulators interested in setting health-protective drinking water guideline values for sulfolane. Copyright © 2012 John Wiley & Sons, Ltd.
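
    The arithmetic from point of departure to RfD to screening level is short. In the sketch below, only the 0.01 mg/kg per day RfD is taken from the abstract; the point of departure, uncertainty factor, and exposure factors are assumed defaults, which is why the result lands near, but not exactly at, the study's 365 µg/L:

        # Walk-through of the BMDL -> RfD -> screening-level arithmetic.
        # Only the 0.01 mg/kg-day RfD comes from the abstract; everything
        # else below is an assumed default, not the study's actual input.
        bmdl = 1.0            # mg/kg-day, placeholder point of departure
        uf = 100.0            # composite uncertainty factor (assumed)
        rfd = bmdl / uf       # reference dose, mg/kg-day

        body_weight = 70.0    # kg, assumed adult default
        water_intake = 2.0    # L/day, assumed default

        screening_ug_per_L = rfd * body_weight / water_intake * 1000.0
        print(f"RfD = {rfd} mg/kg-day -> screening level ~ {screening_ug_per_L:.0f} ug/L")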

  19. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of the Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and Monte Carlo methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo method was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low-dose region). The simulation with quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study; its single-thread computation time was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method for routine clinical CT dose estimation will improve its accuracy and speed.
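
    The 2.4% figure is a root-mean-square difference between the two dose solutions, a quantity trivial to compute once both are on a common grid. A minimal sketch with toy arrays standing in for the Denovo and Monte Carlo results:

        import numpy as np

        def rms_relative_difference(dose_a, dose_b):
            """Root-mean-square of the point-by-point relative difference (percent)."""
            dose_a, dose_b = np.asarray(dose_a, float), np.asarray(dose_b, float)
            return 100.0 * np.sqrt(np.mean(((dose_a - dose_b) / dose_b) ** 2))

        # Toy arrays standing in for discrete ordinates and Monte Carlo dose grids.
        mc = np.array([10.0, 8.0, 6.5, 5.0])
        sn = np.array([10.2, 7.9, 6.3, 5.1])
        print(f"RMS difference: {rms_relative_difference(sn, mc):.2f} %")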

  20. What Pertussis Mortality Rates Make Maternal Acellular Pertussis Immunization Cost-Effective in Low- and Middle-Income Countries? A Decision Analysis

    PubMed Central

    Russell, Louise B.; Pentakota, Sri Ram; Toscano, Cristiana Maria; Cosgriff, Ben; Sinha, Anushua

    2016-01-01

    Background. Despite longstanding infant vaccination programs in low- and middle-income countries (LMICs), pertussis continues to cause deaths in the youngest infants. A maternal monovalent acellular pertussis (aP) vaccine, in development, could prevent many of these deaths. We estimated the infant pertussis mortality rates at which maternal vaccination would be a cost-effective use of public health resources in LMICs. Methods. We developed a decision model to evaluate the cost-effectiveness of maternal aP immunization plus routine infant vaccination vs routine infant vaccination alone in Bangladesh, Nigeria, and Brazil. For a range of maternal aP vaccine prices, one-way sensitivity analyses identified the infant pertussis mortality rates required to make maternal immunization cost-effective by alternative benchmarks ($100, 0.5 × gross domestic product [GDP] per capita, and GDP per capita per disability-adjusted life-year [DALY]). Probabilistic sensitivity analysis provided uncertainty intervals for these mortality rates. Results. The infant pertussis mortality rates necessary to make maternal aP immunization cost-effective exceed the rates suggested by current evidence, except at low vaccine prices and/or cost-effectiveness benchmarks at the high end of those considered in this report. For example, at a vaccine price of $0.50/dose, pertussis mortality would need to be 0.051 per 1000 infants in Bangladesh, and 0.018 per 1000 in Nigeria, for maternal immunization to cost 0.5 × per capita GDP per DALY. In Brazil, a middle-income country, at a vaccine price of $4/dose, infant pertussis mortality would need to be 0.043 per 1000 to cost 0.5 × per capita GDP per DALY. Conclusions. For commonly used cost-effectiveness benchmarks, maternal aP immunization would be cost-effective in many LMICs only if the vaccine were offered at less than $1–$2/dose. PMID:27838677
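
    The threshold logic can be shown with back-of-envelope algebra: find the mortality rate at which the cost per DALY averted equals the chosen benchmark. Every input in the sketch below is invented for illustration; the study's decision model is far richer:

        # Back-of-envelope threshold: the infant pertussis mortality rate at which
        # maternal aP vaccination costs exactly the benchmark per DALY averted.
        # All inputs below are invented, not the study's values.
        price_per_dose = 0.50       # USD, vaccine price
        delivery_cost = 1.00        # USD per dose administered (assumed)
        coverage = 0.80             # fraction of pregnancies vaccinated (assumed)
        efficacy = 0.90             # fraction of infant pertussis deaths averted (assumed)
        dalys_per_death = 30.0      # discounted DALYs per infant death (assumed)
        benchmark = 0.5 * 1200.0    # 0.5 x GDP per capita per DALY, assumed GDP of $1200

        cost_per_1000_births = 1000.0 * coverage * (price_per_dose + delivery_cost)
        # Deaths averted per 1000 births = m * coverage * efficacy, where m is the
        # pertussis mortality per 1000 infants; solve cost / DALYs = benchmark for m.
        m = cost_per_1000_births / (benchmark * coverage * efficacy * dalys_per_death)
        print(f"threshold mortality ~ {m:.3f} per 1000 infants")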

  1. DOSE-RESPONSE ASSESSMENT FOR DEVELOPMENTAL TOXICITY III. STATISTICAL MODELS

    EPA Science Inventory

    Although quantitative modeling has been central to cancer risk assessment for years, the concept of dose-response modeling for developmental effects is relatively new. The benchmark dose (BMD) approach has been proposed for use with developmental (as well as other noncancer) endpo...

  2. Experimental depth dose curves of a 67.5 MeV proton beam for benchmarking and validation of Monte Carlo simulation

    PubMed Central

    Faddegon, Bruce A.; Shin, Jungwook; Castenada, Carlos M.; Ramos-Méndez, José; Daftari, Inder K.

    2015-01-01

    Purpose: To measure depth dose curves for a 67.5 ± 0.1 MeV proton beam for benchmarking and validation of Monte Carlo simulation. Methods: Depth dose curves were measured in 2 beam lines. Protons in the raw beam line traversed a Ta scattering foil, 0.1016 or 0.381 mm thick, a secondary emission monitor comprising thin Al foils, and a thin Kapton exit window. The beam energy and peak width and the composition and density of material traversed by the beam were known with sufficient accuracy to permit benchmark-quality measurements. Diodes for charged particle dosimetry from two different manufacturers were used to scan the depth dose curves with 0.003 mm depth reproducibility in a water tank placed 300 mm from the exit window. Depth in water was determined with an uncertainty of 0.15 mm, including the uncertainty in the water equivalent depth of the sensitive volume of the detector. Parallel-plate chambers were used to verify the accuracy of the shape of the Bragg peak and the peak-to-plateau ratio measured with the diodes. The uncertainty in the measured peak-to-plateau ratio was 4%. Depth dose curves were also measured with a diode for a Bragg curve and treatment beam spread out Bragg peak (SOBP) on the beam line used for eye treatment. The measurements were compared to Monte Carlo simulation done with geant4 using topas. Results: The 80% dose at the distal side of the Bragg peak for the thinner foil was at 37.47 ± 0.11 mm (average of measurements with diodes from two different manufacturers), compared to the simulated value of 37.20 mm. The 80% dose for the thicker foil was at 35.08 ± 0.15 mm, compared to the simulated value of 34.90 mm. The measured peak-to-plateau ratio was within one standard deviation of experimental uncertainty of the simulated result for the thinner foil and two standard deviations for the thicker foil. It was necessary to include the collimation in the simulation, which had a more pronounced effect on the peak-to-plateau ratio for the thicker foil. The treatment beam, being unfocused, had a broader Bragg peak than the raw beam. A 1.3 ± 0.1 MeV FWHM peak width in the energy distribution was used in the simulation to match the Bragg peak width. An additional 1.3–2.24 mm of water in the water column was required over the nominal values to match the measured depth penetration. Conclusions: The proton Bragg curve measured for the 0.1016 mm thick Ta foil provided the most accurate benchmark, having a low contribution of proton scatter from upstream of the water tank. The accuracy was 0.15% in measured beam energy and 0.3% in measured depth penetration at the Bragg peak. The depth of the distal edge of the Bragg peak in the simulation fell short of measurement, suggesting that the mean ionization potential of water is 2-5 eV higher than the 78 eV used in the stopping power calculation for the simulation. The eye treatment beam line depth dose curves provide validation of Monte Carlo simulation of a Bragg curve and SOBP with 4%/2 mm accuracy. PMID:26133619

  3. SU-F-T-201: Acceleration of Dose Optimization Process Using Dual-Loop Optimization Technique for Spot Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Fujimoto, R

    Purpose: The purpose was to demonstrate a developed acceleration technique for dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary between the two parts varies with the beam energy and water equivalent depth, utilizing the beam size as a single threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique in the TPS optimization process and investigated the dependence of the speedup effect on the target volume, as well as its applicability to the worst-case optimization (WCO), in benchmarks. Results: We created irradiation plans for various cubic targets and measured the optimization time while varying the target volume. The speedup effect improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm³ target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target's prescribed dose and the OAR's Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS's optimization and was effective particularly for large target cases.

  4. BMDExpress Data Viewer - A visualization Tool to Analyze BMDExpress Datasets (Health Canada Science Forum)

    EPA Science Inventory

    Benchmark Dose (BMD) modelling is a mathematical approach used to determine where a dose-response change begins to take place relative to controls following chemical exposure. BMDs are being increasingly applied in regulatory toxicology to determine points of departure. BMDExpres...

  5. Concordance of Transcriptional and Apical Benchmark Dose Levels for Conazole-Induced Liver Effects in Mice

    EPA Science Inventory

    The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time informs mode of action determinations and improves quantitative risk assessments. Previous transcription-based microarra...

  6. Formative usability evaluation of a fixed-dose pen-injector platform device

    PubMed Central

    Lange, Jakob; Nemeth, Tobias

    2018-01-01

    Background: This article presents, for the first time, a formative usability study of a fixed-dose pen injector platform device used for the subcutaneous delivery of biopharmaceuticals, primarily for self-administration by the patient. The study was conducted with a user population of both naïve and experienced users across a range of ages. The goals of the study were to evaluate whether users could use the devices safely and effectively relying on the instructions for use (IFU) for guidance, as well as to benchmark the device against another similar injector established in the market. Further objectives were to capture any usability issues and obtain participants' subjective ratings on the properties and performance of both devices. Methods: A total of 20 participants in three groups studied the IFU and performed simulated injections into an injection pad. Results: All participants were able to use the device successfully. The device was well appreciated by all users, with maximum usability feedback scores reported by 90% or more of participants on handling forces and device feedback, and by 85% or more on fit and grip of the device. The presence of clear audible and visible feedback upon successful loading of a dose and completion of an injection was seen as a significant improvement over the benchmark injector. Conclusion: The observation that the platform device can be safely and efficiently used by all user groups provides confidence that the device and IFU in their current form will pass future summative testing in specific applications. PMID:29670411

  7. A comparative study of space radiation organ doses and associated cancer risks using PHITS and HZETRN.

    PubMed

    Bahadori, Amir A; Sato, Tatsuhiko; Slaba, Tony C; Shavers, Mark R; Semones, Edward J; Van Baalen, Mary; Bolch, Wesley E

    2013-10-21

    NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.

  8. A comparative study of space radiation organ doses and associated cancer risks using PHITS and HZETRN

    NASA Astrophysics Data System (ADS)

    Bahadori, Amir A.; Sato, Tatsuhiko; Slaba, Tony C.; Shavers, Mark R.; Semones, Edward J.; Van Baalen, Mary; Bolch, Wesley E.

    2013-10-01

    NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.

  9. Additional adjoint Monte Carlo studies of the shielding of concrete structures against initial gamma radiation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.; Cohen, M.O.

    1975-02-01

    The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on- and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basement detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions with additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained.

  10. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    PubMed Central

    Gelover, Edgar; Wang, Dongxu; Hill, Patrick M.; Flynn, Ryan T.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark; Hyer, Daniel E.

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1, σx2, σy1, σy2) together with the spatial location of the maximum dose (μx, μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets. PMID:25735287
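
    The BEV fluence model is compact enough to state directly in code: a Gaussian whose sigma switches between the two fitted values on either side of the dose maximum along each axis. A sketch with invented parameter values:

        import numpy as np

        def bev_fluence(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
            """Beam's-eye-view fluence: a Gaussian with side-dependent sigmas
            along each axis, per the beamlet model described above."""
            sig_x = np.where(x < mu_x, sx1, sx2)
            sig_y = np.where(y < mu_y, sy1, sy2)
            return np.exp(-0.5 * ((x - mu_x) / sig_x) ** 2
                          - 0.5 * ((y - mu_y) / sig_y) ** 2)

        # Invented parameters: a beamlet trimmed (sharper penumbra) on +x and +y.
        x, y = np.meshgrid(np.linspace(-10, 10, 5), np.linspace(-10, 10, 5))
        print(bev_fluence(x, y, mu_x=0.5, mu_y=0.3,
                          sx1=3.0, sx2=1.5, sy1=3.0, sy2=1.8).round(3))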

  11. Risk assessment for consumer exposure to toluene diisocyanate (TDI) derived from polyurethane flexible foam.

    PubMed

    Arnold, Scott M; Collins, Michael A; Graham, Cynthia; Jolly, Athena T; Parod, Ralph J; Poole, Alan; Schupp, Thomas; Shiotsuka, Ronald N; Woolhiser, Michael R

    2012-12-01

    Polyurethanes (PU) are polymers made from diisocyanates and polyols and are used in a variety of consumer products. It has been suggested that PU foam may contain trace amounts of residual toluene diisocyanate (TDI) monomers and present a health risk. To address this concern, the exposure scenario and health risks posed by sleeping on a PU foam mattress were evaluated. Toxicity benchmarks for key non-cancer endpoints (i.e., irritation, sensitization, respiratory tract effects) were determined by dividing points of departure by uncertainty factors. The cancer benchmark was derived using the USEPA Benchmark Dose Software. Results of previous migration and emission studies of TDI from PU foam were combined with conservative exposure factors to calculate upper-bound dermal and inhalation exposures to TDI, as well as a lifetime average daily dose of TDI from dermal exposure. For each non-cancer endpoint, the toxicity benchmark was divided by the calculated exposure to determine the margin of safety (MOS), which ranged from 200 (respiratory tract) to 3×10⁶ (irritation). Although available data indicate TDI is not carcinogenic, a theoretical excess cancer risk (1×10⁻⁷) was calculated. We conclude from this assessment that sleeping on a PU foam mattress does not pose TDI-related health risks to consumers. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  13. Using benchmarking to identify inter-centre differences in persistent ductus arteriosus treatment: can we improve outcome?

    PubMed

    Jansen, Esther J S; Dijkman, Koen P; van Lingen, Richard A; de Vries, Willem B; Vijlbrief, Daniel C; de Boode, Willem P; Andriessen, Peter

    2017-10-01

    The aim of this study was to identify inter-centre differences in persistent ductus arteriosus treatment and their related outcomes. Materials and methods: We carried out a retrospective, multicentre study including infants between 24+0 and 27+6 weeks of gestation born in the period between 2010 and 2011. In all centres, echocardiography was used as the standard procedure to diagnose a patent ductus arteriosus and to document ductal closure. In total, 367 preterm infants were included. All four participating neonatal ICUs had a comparable number of preterm infants; however, differences were observed in the incidence of treatment (33-63%), the choice and dosing of medication (ibuprofen or indomethacin), the number of pharmacological courses (1-4), and the need for surgical ligation after failure of pharmacological treatment (8-52%). Despite the differences in treatment, we found no difference in short-term morbidity between the centres. Adjusted mortality showed independent risk contributions of gestational age, birth weight, ductal ligation, and perinatal centre. Benchmarking identified inter-centre differences; in these four perinatal centres, the factors that explain the differences in patent ductus arteriosus treatment are quite complex. Timing, choice of medication, and dosing are probably important determinants of successful patent ductus arteriosus closure.

  14. Demonstration of a software design and statistical analysis methodology with application to patient outcomes data sets

    PubMed Central

    Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard

    2013-01-01

    Purpose: With the emergence of clinical outcomes databases as tools routinely utilized within institutions comes the need for software tools that support automated statistical analysis of these large data sets and inter-institutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. Methods: A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, combining C#.Net and R code. The accuracy and speed of the code were evaluated using benchmark data sets. Results: The approach provides the data needed to evaluate combinations of statistical measurements for their ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operating characteristic (ROC) curves to identify a threshold value and combines contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used for additional confirmation. Conclusions: The work demonstrates the viability of the design approach and the software tool for analysis of large data sets. PMID:24320426
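
    The filtering step described above can be sketched compactly. The following Python example is a hedged re-implementation of the idea (the paper itself combines C#.Net and R); the Youden-index threshold search, the synthetic data, and the 0.05 alpha level are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy import stats

def roc_threshold(doses, outcomes):
    """Pick the dose cut-point maximizing Youden's J = sens + spec - 1."""
    cands = np.unique(doses)
    j = [(doses[outcomes == 1] >= c).mean()
         + (doses[outcomes == 0] < c).mean() - 1.0 for c in cands]
    return cands[int(np.argmax(j))]

def dose_response_filter(doses, outcomes, threshold, alpha=0.05):
    """Flag a dose-response signal using the battery of tests named above."""
    below, above = outcomes[doses < threshold], outcomes[doses >= threshold]
    # 2x2 contingency table: complication counts vs. dose group
    table = [[int((below == 1).sum()), int((below == 0).sum())],
             [int((above == 1).sum()), int((above == 0).sum())]]
    _, p_fisher = stats.fisher_exact(table)
    with_evt, without_evt = doses[outcomes == 1], doses[outcomes == 0]
    _, p_welch = stats.ttest_ind(with_evt, without_evt, equal_var=False)
    _, p_ks = stats.ks_2samp(with_evt, without_evt)
    return all(p < alpha for p in (p_fisher, p_welch, p_ks))

rng = np.random.default_rng(0)
doses = rng.uniform(0.0, 60.0, 200)                  # toy organ doses (Gy)
risk = 1.0 / (1.0 + np.exp(-(doses - 40.0) / 4.0))   # toy dose-response curve
outcomes = (rng.uniform(size=200) < risk).astype(int)
t = roc_threshold(doses, outcomes)
print(f"threshold {t:.1f} Gy, dose-response flagged: "
      f"{dose_response_filter(doses, outcomes, t)}")
```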

  15. Current modeling practice may lead to falsely high benchmark dose estimates.

    PubMed

    Ringblom, Joakim; Johanson, Gunnar; Öberg, Mattias

    2014-07-01

    Benchmark dose (BMD) modeling is increasingly used as the preferred approach to define the point-of-departure for health risk assessment of chemicals. As data are inherently variable, there is always a risk of selecting a model whose lower confidence bound of the BMD (the BMDL), contrary to expectation, exceeds the true BMD. The aim of this study was to investigate how often and under what circumstances such anomalies occur under current modeling practice. Continuous data were generated from a realistic dose-effect curve by Monte Carlo simulations using four dose groups and a set of five different dose placement scenarios, group sizes between 5 and 50 animals, and coefficients of variation of 5-15%. The BMD calculations were conducted using nested exponential models, as most BMD software uses nested approaches. "Non-protective" BMDLs (higher than the true BMD) were frequently observed, in some scenarios reaching 80%. The phenomenon was mainly related to the selection of the non-sigmoidal exponential model (Effect = a·e^(b·dose)). In conclusion, non-sigmoidal models should be used with caution as they may underestimate the risk, illustrating that awareness of the model selection process and sound identification of the point-of-departure are vital for health risk assessment. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
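
    A hedged sketch of the type of simulation described above (not the authors' code): sample continuous responses from an assumed sigmoidal "truth", fit the non-sigmoidal exponential model Effect = a·e^(b·dose), and count how often a crude bootstrap BMDL lands above the true BMD. Group sizes, CV, BMR, the true curve, and the bootstrap BMDL construction are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
groups, n, cv, bmr = [0.0, 0.5, 1.0, 2.0], 10, 0.10, 0.05

def true_mean(d):
    d = np.asarray(d, dtype=float)
    return 1.0 - 0.3 * d**2 / (0.5 + d**2)        # assumed sigmoidal truth

true_bmd = np.sqrt(bmr * 0.5 / (0.3 - bmr))       # dose giving a 5% decrease

def simulate():
    return {g: rng.normal(true_mean(g), cv * true_mean(g), n) for g in groups}

def fit_bmd(y_by_group):
    d = np.repeat(groups, n)
    y = np.concatenate([y_by_group[g] for g in groups])
    # Non-sigmoidal exponential model: Effect = a * exp(b * dose)
    (a, b), _ = curve_fit(lambda x, a, b: a * np.exp(b * x), d, y, p0=(1.0, -0.1))
    return np.log(1.0 - bmr) / b                  # BMD under the fitted model

reps, nonprotective = 100, 0
for _ in range(reps):
    data = simulate()
    boot = []                                     # crude within-group bootstrap
    for _ in range(50):
        res = {g: rng.choice(data[g], n, replace=True) for g in groups}
        try:
            boot.append(fit_bmd(res))
        except RuntimeError:                      # occasional non-convergence
            continue
    if boot and np.percentile(boot, 5) > true_bmd:   # 5th percentile as BMDL
        nonprotective += 1
print(f"runs with a non-protective BMDL: {nonprotective}/{reps}")
```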

  16. Evaluating MoE and its Uncertainty and Variability for Food Contaminants (EuroTox presentation)

    EPA Science Inventory

    Margin of Exposure (MoE) is a metric for quantifying the relationship between exposure and hazard. Ideally, it is the ratio of the dose associated with hazard to an estimate of exposure. For example, hazard may be characterized by a benchmark dose (BMD), and, for food contami...

  17. Benchmark Dose Analysis from Multiple Datasets: The Cumulative Risk Assessment for the N-Methyl Carbamate Pesticides

    EPA Science Inventory

    The US EPA’s N-Methyl Carbamate (NMC) Cumulative Risk assessment was based on the effect on acetylcholine esterase (AChE) activity of exposure to 10 NMC pesticides through dietary, drinking water, and residential exposures, assuming the effects of joint exposure to NMCs is dose-...

  18. 75 FR 40729 - Residues of Quaternary Ammonium Compounds, N-Alkyl (C12-14

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-14

    .... Systemic toxicity occurs after absorption and distribution of the chemical to tissues in the body. Such... identified (the LOAEL) or a Benchmark Dose (BMD) approach is sometimes used for risk assessment. Uncertainty.... No systemic effects observed up to 20 mg/ kg/day, highest dose of technical that could be tested...

  19. Concordance of transcriptional and apical benchmark dose levels for conazole-induced liver effects in mice

    EPA Science Inventory

    The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time can inform mode of action and improve quantitative risk assessment. Previous research identified a 330-gene cluster commonly resp...

  20. Issues in the Design and Interpretation of Chronic Toxicity and Carcinogenicity Studies in Rodents: Approaches to Dose Selection

    EPA Science Inventory

    For more than three decades chronic studies in rodents have been the benchmark for assessing the potential long-term toxicity, and particularly the carcinogenicity, of chemicals. With doses typically administered for about 2 years (18 months to lifetime), the rodent bioassay has ...

  1. Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy

    PubMed Central

    Testa, M.; Schümann, J.; Lu, H.-M.; Shin, J.; Faddegon, B.; Perl, J.; Paganetti, H.

    2013-01-01

    Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program of their institution, as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness, symmetry, and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse-square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS' capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field. Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the nuclear interaction models used in the simulations. Results: SOBP range and modulation width were reproduced, on average, with accuracies of +1/−2 mm and ±3 mm, respectively. OF simulations reproduced measured data within ±3%. Simulated 2D dose profiles show field flatness and average field radius within ±3% of measured profiles. The field symmetry was, on average, in ±3% agreement with commissioned profiles. TOPAS' accuracy in reproducing measured dose profiles downstream of the half-beam shifter is better than 2%. Dose rate function simulations reproduced the measurements within ∼2%, showing that the four-dimensional modeling of the passive modulation system was implemented correctly and that millimeter accuracy can be achieved in reproducing measured data. For MLFC simulations, 2% agreement was found between TOPAS and both sets of experimental measurements. The overall results show that TOPAS simulations are within the clinically accepted tolerances for all QA measurements performed at our institution. Conclusions: Our Monte Carlo simulations accurately reproduced the experimental data acquired through all the measurements performed in this study. Thus, TOPAS can reliably be applied to quality assurance for proton therapy and also as an input for commissioning of commercial treatment planning systems. This work also provides the basis for routine clinical dose calculations in patients for all passive-scattering proton therapy centers using TOPAS. PMID:24320505

  2. Alternative stitching method for massively parallel e-beam lithography

    NASA Astrophysics Data System (ADS)

    Brandt, Pieter; Tranquillin, Céline; Wieland, Marco; Bayle, Sébastien; Milléquant, Matthieu; Renault, Guillaume

    2015-07-01

    In this study, a stitching method other than soft edge (SE) and smart boundary (SB) is introduced and benchmarked against SE. The method is based on locally enhanced exposure latitude without throughput cost, making use of the fact that the two beams that pass through the stitching region can together deposit up to 2× the nominal dose. The method requires a complex proximity effect correction that takes a preset stitching dose profile into account. Although the principle of the presented stitching method can be applied to multibeam (lithography) systems in general, in this study the MAPPER FLX 1200 tool is specifically considered. For the latter tool, at a metal clip at a minimum half-pitch of 32 nm, the stitching method effectively mitigates beam-to-beam (B2B) position errors such that they do not induce an increase in critical dimension uniformity (CDU). In other words, the same CDU can be realized inside the stitching region as outside it. For the SE method, the CDU inside is 0.3 nm higher than outside the stitching region. A 5-nm direct overlay impact from the B2B position errors cannot be reduced by a stitching strategy.
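
    As a toy illustration of a preset stitching dose profile, the sketch below splits a locally boosted target dose between the two beams crossing the stitch region. The linear hand-off, the 30% boost, and the Gaussian window are assumptions made for illustration; the actual profile is part of the proximity effect correction and is not specified in the abstract.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)            # position across the stitch (a.u.)
nominal = 1.0
boost = 0.3                                 # assumed local dose enhancement
window = np.exp(-(x / 0.3) ** 2)            # confined to the stitching region
target = nominal * (1.0 + boost * window)   # preset stitching dose profile

w = np.clip(x + 0.5, 0.0, 1.0)              # linear hand-off between beams
beam_a, beam_b = target * (1.0 - w), target * w
assert np.allclose(beam_a + beam_b, target) # beams jointly deliver the profile
```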

  3. SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson, S; Mehta, V

    Purpose: The “SMART” (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25 Gy over 5 days followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Due to the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics (PQMs™) were created to calculate an objective scoring function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of the various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics utilized in scoring included doses to OARs and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), and grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high-dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, these will be easy to investigate and incorporate into the metrics. This will improve the safe delivery of large doses for these patients.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Y; Lacroix, F; Lavallee, M

    Purpose: To evaluate the commercially released collapsed-cone convolution-based (CCC) dose calculation module of the Elekta OncentraBrachy (OcB) treatment planning system (TPS). Methods: An all-water phantom was used to perform TG43 benchmarks with a single source and with seventeen sources, separately. Furthermore, four real-patient heterogeneous geometries (chestwall, lung, breast, and prostate) were used. They were selected based on their representativeness of classes of clinical anatomies that pose clear challenges. The plans were used as is (no modification). For each case, TG43 and CCC calculations were performed in the OcB TPS, with TG186-recommended materials properly assigned to ROIs. For comparison, a Monte Carlo simulation was run for each case with the same material scheme and grid mesh as the TPS calculations. Both modes of CCC (standard and high quality) were tested. Results: For the benchmark case, the CCC dose, when divided by that of TG43, yields hot and cold spots in a radial pattern. The pattern of the high mode is denser than that of the standard mode and is representative of angular discretization. The total deviation ((hot−cold)/TG43) is 18% for the standard mode and 11% for the high mode. Seventeen dwell positions help to reduce the "ray effect", reducing the total deviation to 6% (standard) and 5% (high), respectively. For the four patient cases, CCC produces, as expected, more realistic dose distributions than TG43. Close agreement was observed between CCC and MC for all isodose lines from 20% and up; the 10% isodose line of CCC appears shifted compared to that of MC. The DVH plots show dose deviations of CCC from MC in small-volume, high-dose regions (>100% isodose). For the patient cases, the difference between the standard and high modes is almost indiscernible. Conclusion: The OncentraBrachy CCC algorithm marks a significant dosimetry improvement relative to TG43 in real-patient cases. Further research is recommended regarding the clinical implications of the above observations. Support provided by a CIHR grant; the CCC system was provided by Elekta-Nucletron.

  5. Intercomparison of Monte Carlo radiation transport codes to model TEPC response in low-energy neutron and gamma-ray fields.

    PubMed

    Ali, F; Waker, A J; Waller, E J

    2014-10-01

    Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    PubMed

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often performed retrospectively, notably by studying the enrichment of benchmarking data sets. For this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high-quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.

  7. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  8. Assessment of radiation shield integrity of DD/DT fusion neutron generator facilities by Monte Carlo and experimental methods

    NASA Astrophysics Data System (ADS)

    Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.

    2015-01-01

    DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators is essential for ensuring radiological protection of the personnel involved with their operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Centre (BARC), Mumbai. Verification and validation of the shielding adequacy was carried out by measuring the neutron and gamma dose rates at various locations inside and outside the neutron generator hall during different operational conditions, both for 2.5-MeV and 14.1-MeV neutrons, and comparing with theoretical simulations. The calculated and experimental dose rates were found to agree, with a maximum deviation of 20% at certain locations. This study has served to benchmark the Monte Carlo simulation methods adopted for shield design of such facilities. It has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields, up to 1 × 10^10 n/s.

  9. The ISPRS Benchmark on Indoor Modelling

    NASA Astrophysics Data System (ADS)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  10. An analysis of MCNP cross-sections and tally methods for low-energy photon emitters.

    PubMed

    Demarco, John J; Wallace, Robert E; Boedeker, Kirsten

    2002-04-21

    Monte Carlo calculations are frequently used to analyse a variety of radiological science applications using low-energy (10-1000 keV) photon sources. This study seeks to create a low-energy benchmark for the MCNP Monte Carlo code by simulating the absolute dose rate in water and the air-kerma rate for monoenergetic point sources with energies between 10 keV and 1 MeV. The analysis compares four cross-section datasets as well as the tally method for collision kerma versus absorbed dose. The total photon attenuation cross-section for low-atomic-number elements has changed significantly as cross-section data have been revised between 1967 and 1989. Differences of up to 10% are observed in the photoelectric cross-section for water at 30 keV between the standard MCNP cross-section dataset (DLC-200) and the most recent XCOM/NIST tabulation. At 30 keV, the absolute dose rate in water at 1.0 cm from the source increases by 7.8% after replacing the DLC-200 photoelectric cross-sections for water with those from the XCOM/NIST tabulation. The differences in the absolute dose rate are analysed when calculated with either the MCNP absorbed dose tally or the collision kerma tally. Significant differences between the collision kerma tally and the absorbed dose tally can occur when using the DLC-200 attenuation coefficients in conjunction with a modern tabulation of mass energy-absorption coefficients.
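
    For orientation, the collision kerma tally referenced above reduces, for a primary-only monoenergetic point source, to energy fluence times the mass energy-absorption coefficient. The sketch below is a minimal illustration of that relation; the μen/ρ value is an assumed round number rather than a NIST tabulation, and attenuation and scatter are ignored.

```python
import numpy as np

def collision_kerma_rate(photons_per_s, energy_mev, mu_en_rho_cm2_g, r_cm):
    """Primary-only collision kerma rate (Gy/s) at distance r from an
    isotropic point source: K = fluence * E * (mu_en/rho)."""
    fluence_rate = photons_per_s / (4.0 * np.pi * r_cm ** 2)    # 1/(cm^2 s)
    kerma_mev_g_s = fluence_rate * energy_mev * mu_en_rho_cm2_g  # MeV/(g s)
    return kerma_mev_g_s * 1.602e-10                             # MeV/g -> Gy

# 30 keV source, 1e6 photons/s, assumed mu_en/rho ~ 0.15 cm^2/g for water
print(collision_kerma_rate(1e6, 0.030, 0.15, 1.0))
```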

  11. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  12. TU-FG-201-06: Remote Dosimetric Auditing for Clinical Trials Using EPID Dosimetry: A Pilot Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miri, N; Legge, K; Greer, P

    2016-06-15

    Purpose: To perform a pilot study for remote dosimetric credentialing of intensity modulated radiation therapy (IMRT) based clinical trials. The study introduces a novel, time-efficient, and inexpensive dosimetry audit method for multi-center credentialing. The method employs an electronic portal imaging device (EPID) to reconstruct the delivered dose inside a virtual flat/cylindrical water phantom. Methods: Five centers, including different accelerator types and treatment planning systems (TPS), were asked to download two CT data sets, of a head and neck (H&N) patient and a post-prostatectomy (P-P) patient, to produce benchmark plans. These were then transferred to virtual flat and cylindrical phantom data sets that were also provided. In-air EPID images of the plans were then acquired, and the data sent to the central site for analysis. At the central site, these were converted to DICOM format, and all images were used to reconstruct 2D and 3D dose distributions inside the flat and cylindrical phantoms, respectively, using in-house EPID-to-dose conversion software. 2D dose was calculated for individual fields and 3D dose for the combined fields. The results were compared to the corresponding TPS doses. Three gamma criteria were used (3%/3 mm, 3%/2 mm, and 2%/2 mm) with a 10% dose threshold to compare the calculated and prescribed dose. Results: All centers had a high pass rate for the 3%/3 mm criterion. For 2D dose, the average of the centers' mean pass rates was 99.6% (SD: 0.3%) and 99.8% (SD: 0.3%) for the H&N and P-P patients, respectively. For 3D dose, 3D gamma was used to compare the model dose with the TPS combined dose. The mean pass rates were 97.7% (SD: 2.8%) and 98.3% (SD: 1.6%). Conclusion: Successful performance of the method for the pilot centers establishes it for dosimetric multi-center credentialing. The results are promising, showing a high level of gamma agreement, and the procedure is efficient, consistent, and inexpensive. Funding has been provided by the Department of Radiation Oncology, TROG Cancer Research, and the University of Newcastle. Narges Miri is a recipient of a University of Newcastle postgraduate scholarship.
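
    A brute-force global gamma comparison of the kind quoted above (e.g., 3%/3 mm with a 10% dose threshold) can be written in a few lines. The sketch below is a generic, unoptimized implementation for 2D dose planes, not the in-house software used in the study.

```python
import numpy as np

def gamma_pass_rate(ref, ev, pixel_mm, dose_pct=3.0, dta_mm=3.0, thresh=0.10):
    """Fraction of reference pixels with gamma <= 1 (global normalization)."""
    norm = dose_pct / 100.0 * ref.max()
    yy, xx = np.mgrid[0:ref.shape[0], 0:ref.shape[1]]
    pts = np.argwhere(ref > thresh * ref.max())   # apply the dose threshold
    passed = 0
    for i, j in pts:
        r2 = ((yy - i) ** 2 + (xx - j) ** 2) * pixel_mm ** 2   # distance^2, mm^2
        g2 = r2 / dta_mm ** 2 + (ev - ref[i, j]) ** 2 / norm ** 2
        passed += g2.min() <= 1.0
    return passed / len(pts)

ref = np.random.default_rng(0).random((40, 40)) + 1.0   # toy dose planes
ev = ref + 0.01 * np.random.default_rng(1).standard_normal(ref.shape)
print(f"pass rate: {gamma_pass_rate(ref, ev, pixel_mm=1.0):.1%}")
```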

  13. 3D conditional generative adversarial networks for high-quality PET image estimation at low dose.

    PubMed

    Wang, Yan; Yu, Biting; Wang, Lei; Zu, Chen; Lalush, David S; Lin, Weili; Wu, Xi; Zhou, Jiliu; Shen, Dinggang; Zhou, Luping

    2018-07-01

    Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET images for clinical needs. This inevitably raises concerns about potential health hazards. On the other hand, dose reduction may increase the noise in the reconstructed PET images, which impacts the image quality to a certain extent. In this paper, in order to reduce the radiation exposure while maintaining high PET image quality, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) include a generator network and a discriminator network which are trained simultaneously with the goal of one beating the other. Similar to GANs, in the proposed 3D c-GANs we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to render the same underlying information between the low-dose and full-dose PET images, a 3D U-net-like deep architecture which can combine hierarchical features via skip connections is designed as the generator network to synthesize the full-dose image. In order to guarantee that the synthesized PET image is close to the real one, we take into account the estimation error loss in addition to the discriminator feedback when training the generator network. Furthermore, a concatenated 3D c-GANs based progressive refinement scheme is also proposed to further improve the quality of the estimated images. Validation was done on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). Experimental results show that the proposed 3D c-GANs method outperforms the benchmark methods and achieves much better performance than state-of-the-art methods in both qualitative and quantitative measures. Copyright © 2018 Elsevier Inc. All rights reserved.
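
    The training objective described above (discriminator feedback plus an estimation error term) can be sketched in a few lines of PyTorch. The tiny networks, the L1 form of the error term, and the weight of 10 below are illustrative assumptions standing in for the paper's 3D U-net-like generator and its loss settings.

```python
import torch
import torch.nn as nn

G = nn.Sequential(                      # stand-in generator: low -> full dose
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1))
D = nn.Sequential(                      # stand-in discriminator on (cond, img)
    nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(1))

adv, l1, lam = nn.BCEWithLogitsLoss(), nn.L1Loss(), 10.0

low = torch.randn(2, 1, 16, 16, 16)     # toy low-dose PET patches
full = torch.randn(2, 1, 16, 16, 16)    # matching full-dose patches

fake = G(low)
d_fake = D(torch.cat([low, fake], dim=1))   # condition D on the low-dose input
# Generator objective: fool D while staying close to the real full-dose image.
g_loss = adv(d_fake, torch.ones_like(d_fake)) + lam * l1(fake, full)
g_loss.backward()
```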

  14. Issues in benchmarking human reliability analysis methods: a literature review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  15. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  16. A benchmarking method to measure dietary absorption efficiency of chemicals by fish.

    PubMed

    Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew

    2013-12-01

    Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.

  17. Concentration transport calculations by an original C++ program with intermediate fidelity physics through user-defined buildings with an emphasis on release scenarios in radiological facilities

    NASA Astrophysics Data System (ADS)

    Sayre, George Anthony

    The purpose of this dissertation was to develop the C++ program Emergency Dose to calculate transport of radionuclides through indoor spaces using intermediate fidelity physics that provides improved spatial heterogeneity over well-mixed models such as MELCOR and much lower computation times than CFD codes such as FLUENT. Modified potential flow theory (MPFT), an original formulation of potential flow theory with additions of turbulent jet and natural convection approximations, calculates spatially heterogeneous velocity fields that well-mixed models cannot predict. Other original contributions of MPFT are: (1) generation of high-fidelity boundary conditions relative to well-mixed-CFD coupling methods (conflation), (2) broadening of potential flow applications to arbitrary indoor spaces, previously restricted to specific applications such as exhaust hood studies, and (3) great reduction of computation time relative to CFD codes without total loss of heterogeneity. Additionally, the Lagrangian transport module, which is discussed in Sections 1.3 and 2.4, showcases an ensemble-based formulation thought to be original to interior studies. Velocity and concentration transport benchmarks against analogous formulations in COMSOL produced favorable results, with discrepancies resulting from the tetrahedral meshing used in COMSOL outperforming the Cartesian method used by Emergency Dose. A performance comparison of the concentration transport modules against MELCOR showed that Emergency Dose held advantages over the well-mixed model, especially in scenarios with many interior partitions and varied source positions. A performance comparison of the velocity module against FLUENT showed that viscous drag provided the largest error between Emergency Dose and CFD velocity calculations, but that Emergency Dose's turbulent jets well approximated the corresponding CFD jets. Overall, Emergency Dose was found to provide a viable intermediate solution method for concentration transport with relatively low computation times.

  18. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.

    2015-03-15

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX, and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam's eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1, σx2, σy1, σy2) together with the spatial location of the maximum dose (μx, μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong's fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.
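
    The asymmetric Gaussian BEV fluence described above can be written as a separable product of two one-sided Gaussians, with one sigma on each side of the dose maximum per axis. The sketch below uses invented parameter values; it illustrates the functional form only, not fitted DCS data.

```python
import numpy as np

def asym_gauss(u, mu, sigma_lo, sigma_hi):
    """1D Gaussian whose width differs on the two sides of the peak mu."""
    s = np.where(u < mu, sigma_lo, sigma_hi)
    return np.exp(-0.5 * ((u - mu) / s) ** 2)

def bev_fluence(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
    # Separable product of the asymmetric profiles along each primary axis
    return asym_gauss(x, mu_x, sx1, sx2) * asym_gauss(y, mu_y, sy1, sy2)

x, y = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))
f = bev_fluence(x, y, mu_x=1.0, mu_y=0.0, sx1=2.0, sx2=3.5, sy1=2.5, sy2=2.5)
```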

  19. Cumulative organophosphate pesticide exposure and risk assessment among pregnant women living in an agricultural community: a case study from the CHAMACOS cohort.

    PubMed Central

    Castorina, Rosemary; Bradman, Asa; McKone, Thomas E; Barr, Dana B; Harnly, Martha E; Eskenazi, Brenda

    2003-01-01

    Approximately 230,000 kg of organophosphate (OP) pesticides are applied annually in California's Salinas Valley. These activities have raised concerns about exposures of area residents. We collected three spot urine samples from pregnant women (between 1999 and 2001) enrolled in CHAMACOS (Center for the Health Assessment of Mothers and Children of Salinas), a longitudinal birth cohort study, and analyzed them for six dialkyl phosphate metabolites. We used urine from 446 pregnant women to estimate OP pesticide doses with two deterministic steady-state modeling methods: method 1, which assumed the metabolites were attributable entirely to a single diethyl or dimethyl OP pesticide; and method 2, which adapted U.S. Environmental Protection Agency (U.S. EPA) draft guidelines for cumulative risk assessment to estimate dose from a mixture of OP pesticides that share a common mechanism of toxicity. We used pesticide use reporting data for the Salinas Valley to approximate the mixture to which the women were exposed. Based on average OP pesticide dose estimates that assumed exposure to a single OP pesticide (method 1), between 0% and 36.1% of study participants' doses failed to attain a margin of exposure (MOE) of 100 relative to the U.S. EPA oral benchmark dose (BMD10), depending on the assumption made about the parent compound. These BMD10 values are doses expected to produce a 10% reduction in brain cholinesterase activity compared with the background response in rats. Given the participants' average cumulative OP pesticide dose estimates (method 2), and regardless of the index chemical selected, we found that 14.8% of the doses failed to attain an MOE of 100 relative to the BMD10 of the selected index chemical. An uncertainty analysis of the pesticide mixture parameter, which is extrapolated from pesticide application data for the study area and not directly quantified for each individual, suggests that this point estimate could range from 1% to 34%. In future analyses, we will use pesticide-specific urinary metabolites, when available, to evaluate cumulative OP pesticide exposures. PMID:14527844
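
    The method 2 arithmetic above (cumulative dose in index-chemical equivalents, then an MOE against the index chemical's BMD10) is illustrated below with made-up relative potency factors, doses, and BMD10; none of the numbers are CHAMACOS values.

```python
# Assumed relative potency factors (index-chemical equivalents per unit dose)
rpf = {"chlorpyrifos": 1.0, "malathion": 0.05, "diazinon": 0.3}
dose_ug_kg_day = {"chlorpyrifos": 0.2, "malathion": 1.5, "diazinon": 0.4}

cumulative = sum(rpf[c] * dose_ug_kg_day[c] for c in rpf)  # index equivalents
bmd10_index = 300.0                  # placeholder BMD10 of the index chemical
moe = bmd10_index / cumulative
print(f"cumulative MOE = {moe:.0f}; below the target of 100: {moe < 100}")
```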

  20. Benchmarking the minimum Electron Beam (eBeam) dose required for the sterilization of space foods

    NASA Astrophysics Data System (ADS)

    Bhatia, Sohini S.; Wall, Kayley R.; Kerth, Chris R.; Pillai, Suresh D.

    2018-02-01

    As manned space missions extend in length, the safety, nutrition, acceptability, and shelf life of space foods are of paramount importance to NASA. Since food and mealtimes play a key role in reducing the stress and boredom of prolonged missions, the quality of food in terms of appearance, flavor, texture, and aroma can have significant psychological ramifications for astronaut performance. The FDA, which oversees space foods, currently requires a minimum dose of 44 kGy for irradiated space foods. The underlying hypothesis was that commercial sterility of space foods could be achieved at a significantly lower dose, and that this lowered dose would positively affect the shelf life of the product. Electron beam processed beef fajitas were used as an example NASA space food to benchmark the minimum eBeam dose required for sterility. A 15 kGy dose was able to achieve an approximately 10 log reduction in Shiga-toxin-producing Escherichia coli bacteria, and a 5 log reduction in Clostridium sporogenes spores. Furthermore, accelerated shelf life testing (ASLT) to determine sensory and quality characteristics under various conditions was conducted. Using multidimensional gas chromatography-olfactometry-mass spectrometry (MDGC-O-MS), numerous volatiles were shown to be dependent on the dose applied to the product. Furthermore, concentrations of off-flavor aroma compounds such as dimethyl sulfide were decreased at the reduced 15 kGy dose. The results suggest that the combination of conventional cooking with eBeam processing (15 kGy) can achieve the safety and shelf-life objectives needed for long-duration space foods.
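
    The log-reduction arithmetic implied by the result above is simple: the dose for an n-log kill is n times the organism's D10 value, so a 10-log reduction at 15 kGy corresponds to a D10 of about 1.5 kGy. The sketch below just encodes that relation; the D10 value is a back-of-envelope inference, not a reported measurement.

```python
def dose_for_log_reduction(d10_kgy, n_logs):
    """Dose needed for an n-log microbial reduction given a D10 value."""
    return d10_kgy * n_logs

print(dose_for_log_reduction(1.5, 10))   # ~15 kGy for a 10-log reduction
```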

  1. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  2. The grout/glass performance assessment code system (GPACS) with verification and benchmarking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepho, M.G.; Sutherland, W.H.; Rittmann, P.D.

    1994-12-01

    GPACS is a computer code system for calculating water flow (unsaturated or saturated), solute transport, and human doses due to the slow release of contaminants from a waste form (in particular grout or glass) through an engineered system and through a vadose zone to an aquifer, well, and river. This dual-purpose document is intended to serve as a user's guide and verification/benchmark document for the Grout/Glass Performance Assessment Code system (GPACS). GPACS can be used for low-level-waste (LLW) glass performance assessment and many other applications, including other low-level-waste performance assessments and risk assessments. Based on all the cases presented, GPACS is adequate (verified) for calculating water flow and contaminant transport in unsaturated-zone sediments and for calculating human doses via the groundwater pathway.

  3. Multi-axis dose accumulation of noninvasive image-guided breast brachytherapy through biomechanical modeling of tissue deformation using the finite element method

    PubMed Central

    Ghadyani, Hamid R.; Bastien, Adam D.; Lutz, Nicholas N.; Hepel, Jaroslaw T.

    2015-01-01

    Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR 192Ir brachytherapy treatments with the breast compressed and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Given that commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Material and methods: The model assumed the breast was under planar stress, with values of 30 kPa for Young's modulus and 0.3 for Poisson's ratio. Dose distributions from round and skin-dose-optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased by several millimeters as compression thickness decreased. This trend increased with increasing offset distance. Applicator size minimally affected target coverage until the applicator size was less than the compressed target size. In all cases with an applicator larger than or equal to the compressed target size, > 90% of the target was covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform as offset distance increased and average dose increased. This effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results within 2% and clinical observations over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over a range of clinical circumstances. These findings highlight the need for careful target localization and accurate identification of compression thickness and target offset. PMID:25829938
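
    For orientation, the plane-stress assumption quoted above (E = 30 kPa, ν = 0.3) corresponds to the standard constitutive matrix shown below; this is textbook elasticity included for context, not the authors' finite element code.

```python
import numpy as np

E, nu = 30e3, 0.3                        # Young's modulus (Pa), Poisson ratio
# Plane-stress constitutive matrix: sigma = D @ [eps_x, eps_y, gamma_xy]
D = (E / (1.0 - nu ** 2)) * np.array([[1.0, nu, 0.0],
                                      [nu, 1.0, 0.0],
                                      [0.0, 0.0, (1.0 - nu) / 2.0]])

strain = np.array([0.10, -0.03, 0.01])   # illustrative in-plane strain state
stress = D @ strain                      # resulting in-plane stress (Pa)
print(stress)
```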

  4. Linking log files with dosimetric accuracy--A multi-institutional study on quality assurance of volumetric modulated arc therapy.

    PubMed

    Pasler, Marlies; Kaas, Jochem; Perik, Thijs; Geuze, Job; Dreindl, Ralf; Künzler, Thomas; Wittkamper, Frits; Georg, Dietmar

    2015-12-01

    To systematically evaluate machine-specific quality assurance (QA) for volumetric modulated arc therapy (VMAT) based on log files by applying a dynamic benchmark plan. A VMAT benchmark plan was created and tested on 18 Elekta linacs (13 MLCi or MLCi2, 5 Agility) at 4 different institutions. Linac log files were analyzed and a delivery robustness index was introduced. For dosimetric measurements an ionization chamber array was used. Relative dose deviations were assessed by mean gamma for each control point and compared to the log file evaluation. Fourteen linacs delivered the VMAT benchmark plan, while 4 linacs failed by consistently terminating the delivery. The mean leaf error (±1 SD) was 0.3±0.2 mm for all linacs. Large MLC maximum errors of up to 6.5 mm were observed at reversal positions. The delivery robustness index accounting for MLC position correction (0.8-1.0) correlated with delivery time (80-128 s) and depended on dose rate performance. Dosimetric evaluation indicated generally accurate plan reproducibility, with mean gamma (±1 SD) = 0.4±0.2 for 1 mm/1%. However, single-control-point analysis revealed larger deviations that corresponded well to the log file analysis. The designed benchmark plan helped identify linac-related malfunctions in dynamic mode for VMAT. Log files serve as an important additional QA measure to understand and visualize dynamic linac parameters. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry.

    PubMed

    Sohrabpour, M; Hassanzadeh, M; Shahriari, M; Sharifzadeh, M

    2002-10-01

    The Monte Carlo transport code MCNP has been applied to simulate the dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose ratios, and efficiencies have been simulated as functions of product density. Simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The produced system design data were also found to agree quite favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling various dose-mapping exercises for gamma irradiators.

  6. Toxicogenomics and cancer risk assessment: a framework for key event analysis and dose-response assessment for nongenotoxic carcinogens.

    PubMed

    Bercu, Joel P; Jolly, Robert A; Flagella, Kelly M; Baker, Thomas K; Romero, Pedro; Stevens, James L

    2010-12-01

    In order to determine a threshold for nongenotoxic carcinogens, the traditional risk assessment approach has been to identify a mode of action (MOA) with a nonlinear dose-response. The dose-response for one or more key event(s) linked to the MOA for carcinogenicity allows a point of departure (POD) to be selected from the most sensitive effect dose or no-effect dose. However, this can be challenging because multiple MOAs and key events may exist for carcinogenicity, and oftentimes extensive research is required to elucidate the MOA. In the present study, a microarray analysis was conducted to determine if a POD could be identified following short-term oral rat exposure to two nongenotoxic rodent carcinogens, fenofibrate and methapyrilene, using a benchmark dose analysis of genes aggregated in Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways and Gene Ontology (GO) biological processes, which likely encompass key event(s) for carcinogenicity. The gene expression response for fenofibrate given to rats for 2 days was consistent with its MOA and known key events linked to PPARα activation. The temporal response from daily dosing with methapyrilene demonstrated biological complexity, with waves of pathways/biological processes occurring over 1, 3, and 7 days; nonetheless, the benchmark dose values were consistent over time. When comparing the dose-response of toxicogenomic data to tumorigenesis or precursor events, the toxicogenomics POD was slightly below any effect level. Our results suggest that toxicogenomic analysis using short-term studies can be used to identify a threshold for nongenotoxic carcinogens based on evaluation of potential key event(s), which can then be used within a risk assessment framework. Copyright © 2010 Elsevier Inc. All rights reserved.

  7. Piloting a Process Maturity Model as an e-Learning Benchmarking Method

    ERIC Educational Resources Information Center

    Petch, Jim; Calverley, Gayle; Dexter, Hilary; Cappelli, Tim

    2007-01-01

    As part of a national e-learning benchmarking initiative of the UK Higher Education Academy, the University of Manchester is carrying out a pilot study of a method to benchmark e-learning in an institution. The pilot was designed to evaluate the operational viability of a method based on the e-Learning Maturity Model developed at the University of…

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, S; Ho, M; Chen, C

    Purpose: The use of log files to perform patient-specific quality assurance for both protons and IMRT has been established. Here, we extend that approach to a proprietary log file format and compare our results to measurements in phantom. Our goal was to generate a system that would permit gross errors to be found within 3 fractions, before direct measurements are made. This approach could eventually replace direct measurements. Methods: Spot scanning protons pass through multi-wire ionization chambers which provide information about the charge, location, and size of each delivered spot. We have generated a program that calculates the dose in phantom from these log files and compares the measurements with the plan. The program has 3 different spot shape models: single Gaussian, double Gaussian, and the ASTROID model. The program was benchmarked across different treatment sites for 23 patients and 74 fields. Results: The doses calculated from the log files were compared to those generated by the treatment planning system (Raystation). While the dual Gaussian model often gave better agreement, overall the ASTROID model gave the most consistent results. Using a 5%/3 mm gamma criterion with a 90% passing threshold and excluding doses below 20% of prescription, all patient samples passed. However, the degree of agreement of the log file approach was slightly worse than that of the chamber array measurement approach. Operationally, this implies that if the beam passes the log file model, it should pass direct measurement. Conclusion: We have established and benchmarked a model for log file QA in an IBA Proteus Plus system. The choice of optimal spot model for a given class of patients may be affected by factors such as site, field size, and range shifter, and will be investigated further.

  9. A deterministic partial differential equation model for dose calculation in electron radiotherapy.

    PubMed

    Duclous, R; Dubroca, B; Frank, M

    2010-07-07

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of delta electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  10. A deterministic partial differential equation model for dose calculation in electron radiotherapy

    NASA Astrophysics Data System (ADS)

    Duclous, R.; Dubroca, B.; Frank, M.

    2010-07-01

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of δ electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  11. SU-F-T-231: Improving the Efficiency of a Radiotherapy Peer-Review System for Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, S; Basavatia, A; Garg, M

    Purpose: To improve the efficiency of a radiotherapy peer-review system using a commercially available software application for plan quality evaluation and documentation. Methods: A commercial application, FullAccess (Radialogica LLC, Version 1.4.4), was implemented on a Citrix platform for the peer-review process and patient documentation. This application can display images, isodose lines, and dose-volume histograms and create plan reports for the peer-review process. Dose metrics in the report can also be benchmarked for plan quality evaluation. Site-specific templates were generated based on departmental treatment planning policies and procedures for each disease site, which generally follow RTOG protocols as well as published prospective clinical trial data, including both conventional fractionation and hypo-fractionation schemes. Once a plan is ready for review, the planner exports the plan to FullAccess, applies the site-specific template, and presents the report for plan review. The plan is still reviewed in the treatment planning system, as that is the legal record. Upon the physician's approval of a plan, the plan is packaged for peer review with the plan report, and dose metrics are saved to the database. Results: The reports show dose metrics of PTVs and critical organs for the plans and also indicate whether or not the metrics are within tolerance. Graphical results with green, yellow, and red lights display whether planning objectives have been met. In addition, benchmarking statistics are collected to show where the current plan falls compared to all historical plans on each metric. All physicians in peer review can easily verify constraints from these reports. Conclusion: We have demonstrated an improvement in a radiotherapy peer-review system, which allows physicians to easily verify planning constraints for different disease sites and fractionation schemes, allows for standardization in the clinic to ensure that departmental policies are maintained, and builds a comprehensive database for potential clinical outcome evaluation.

  12. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
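
    As an illustration of the fixed-time idea in this patent (rate machines by progress through a scalable task set within a fixed interval, rather than by the time taken for a fixed workload), here is a minimal Python sketch; the task set, a midpoint-rule estimate of pi at ever-doubling resolution, and the rating scheme are illustrative stand-ins, not the patented workload.

```python
import time

def scalable_tasks():
    # A scalable task set: each task refines a midpoint-rule estimate of pi
    # at double the previous resolution, giving ever-increasing accuracy.
    n = 1
    while True:
        h = 1.0 / n
        estimate = h * sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(n))
        yield n, estimate
        n *= 2

def benchmark(interval_s=1.0):
    # Allot a fixed benchmarking interval; the rating is the degree of
    # progress (finest resolution completed) before the interval expires.
    deadline = time.perf_counter() + interval_s
    rating = 0
    for n, _ in scalable_tasks():
        if time.perf_counter() > deadline:
            break
        rating = n
    return rating

print("resolution reached in 1 s:", benchmark())
```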

  13. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    PubMed Central

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in future editions of the CompaRNA benchmarks. PMID:23435231
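
    The scoring conventions behind such benchmarks are simple to state: predicted base pairs are compared with the reference structure to yield sensitivity and positive predictive value (PPV). The following minimal sketch (dot-bracket parsing only, no pseudoknots; not CompaRNA's implementation) shows the calculation.

```python
def base_pairs(dot_bracket):
    # Convert dot-bracket notation into a set of (i, j) base pairs.
    stack, pairs = [], set()
    for i, ch in enumerate(dot_bracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.add((stack.pop(), i))
    return pairs

def score(predicted, reference):
    # Sensitivity: fraction of reference pairs recovered;
    # PPV: fraction of predicted pairs present in the reference.
    tp = len(predicted & reference)
    sens = tp / len(reference) if reference else 0.0
    ppv = tp / len(predicted) if predicted else 0.0
    return sens, ppv

ref = base_pairs("((((....))))")
pred = base_pairs("(((......)))")
print(score(pred, ref))  # -> (0.75, 1.0)
```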

  14. Benchmark duration of work hours for development of fatigue symptoms in Japanese workers with adjustment for job-related stress.

    PubMed

    Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi

    2008-12-01

    The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
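
    To make the transfer of the benchmark dose idea to working hours concrete, the sketch below computes a benchmark duration from a fitted logistic dose-response: the duration at which the extra risk of a symptom over background equals the benchmark response. The coefficients are illustrative placeholders, not values fitted to the study's data.

```python
import math

def benchmark_duration(b0, b1, bmr=0.05):
    # Logistic model P(d) = 1 / (1 + exp(-(b0 + b1 * d))); the benchmark
    # duration d* satisfies the extra-risk definition
    # (P(d*) - P(0)) / (1 - P(0)) = BMR, solved here in closed form.
    p0 = 1.0 / (1.0 + math.exp(-b0))
    p_target = p0 + bmr * (1.0 - p0)
    logit = math.log(p_target / (1.0 - p_target))
    return (logit - b0) / b1

# Illustrative coefficients only (not from the study):
print(benchmark_duration(b0=-4.0, b1=0.35, bmr=0.05))  # duration for a 5% BMR
print(benchmark_duration(b0=-4.0, b1=0.35, bmr=0.10))  # duration for a 10% BMR
```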

  15. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  16. Tolerance limits and methodologies for IMRT measurement-based verification QA: Recommendations of AAPM Task Group No. 218.

    PubMed

    Miften, Moyed; Olch, Arthur; Mihailidis, Dimitris; Moran, Jean; Pawlicki, Todd; Molineu, Andrea; Li, Harold; Wijesooriya, Krishni; Shi, Jie; Xia, Ping; Papanikolaou, Nikos; Low, Daniel A

    2018-04-01

    Patient-specific IMRT QA measurements are important components of processes designed to identify discrepancies between calculated and delivered radiation doses. Discrepancy tolerance limits are neither well defined nor consistently applied across centers. The AAPM TG-218 report provides a comprehensive review aimed at improving the understanding and consistency of these processes, as well as recommendations for methodologies and tolerance limits in patient-specific IMRT QA. The performance of the dose difference/distance-to-agreement (DTA) and γ dose distribution comparison metrics is investigated. Measurement methods are reviewed, followed by a discussion of the pros and cons of each. Methodologies for absolute dose verification are discussed and new IMRT QA verification tools are presented. Literature on the expected or achievable agreement between measurements and calculations for different types of planning and delivery systems is reviewed and analyzed. Tests of vendor implementations of the γ verification algorithm employing benchmark cases are presented. Operational shortcomings that can reduce the accuracy of the γ tool and its subsequent effectiveness for IMRT QA are described. Practical considerations including spatial resolution, normalization, dose threshold, and data interpretation are discussed. Published data on IMRT QA and the clinical experience of the group members are used to develop guidelines and recommendations on tolerance and action limits for IMRT QA. Steps to check failed IMRT QA plans are outlined. Recommendations on delivery methods, data interpretation, dose normalization, the use of γ analysis routines and the choice of tolerance limits for IMRT QA are made, with a focus on detecting differences between calculated and measured doses via the use of robust analysis methods and an in-depth understanding of IMRT verification metrics. The recommendations are intended to improve the IMRT QA process and establish consistent and comparable IMRT QA criteria among institutions. © 2018 American Association of Physicists in Medicine.
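
    For readers unfamiliar with the γ metric discussed in the report, the following Python sketch computes a brute-force global γ for one-dimensional dose profiles under a 3%/2 mm criterion. It is a toy illustration of the metric's definition, not TG-218's or any vendor's implementation.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=0.2, d_norm=None):
    # Global gamma: for each reference point, minimize over evaluated points
    # gamma = sqrt((dr / dta)^2 + (dD / (dd * d_norm))^2); here dta is in cm
    # (0.2 cm = 2 mm) and dd is a fraction of the global normalization dose.
    d_norm = d_ref.max() if d_norm is None else d_norm
    gammas = np.empty_like(d_ref)
    for i, (x, d) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - x) / dta) ** 2
        dose2 = ((d_eval - d) / (dd * d_norm)) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

x = np.linspace(0.0, 10.0, 201)                  # positions in cm
measured = np.exp(-((x - 5.0) ** 2) / 2.0)       # toy measured profile
calculated = np.exp(-((x - 5.05) ** 2) / 2.0)    # slightly shifted calculation
g = gamma_1d(x, measured, x, calculated)
print("gamma pass rate:", np.mean(g <= 1.0))
```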

  17. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task, depending on the target problem and the goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and growing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity needed to properly benchmark machine learning algorithms, and that there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
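
    PMLB is distributed as a Python package; assuming its fetch_data interface (check the current documentation for details), a minimal comparison of two classifiers across a few of the benchmark datasets might look like this.

```python
from pmlb import classification_dataset_names, fetch_data
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Compare two classifiers on the first few PMLB classification benchmarks.
for name in classification_dataset_names[:3]:
    X, y = fetch_data(name, return_X_y=True)
    for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
        score = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: {type(clf).__name__} CV accuracy = {score:.3f}")
```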

  18. Selection of appropriate tumour data sets for Benchmark Dose Modelling (BMD) and derivation of a Margin of Exposure (MoE) for substances that are genotoxic and carcinogenic: considerations of biological relevance of tumour type, data quality and uncertainty assessment.

    PubMed

    Edler, Lutz; Hart, Andy; Greaves, Peter; Carthew, Philip; Coulet, Myriam; Boobis, Alan; Williams, Gary M; Smith, Benjamin

    2014-08-01

    This article addresses a number of concepts related to the selection and modelling of carcinogenicity data for the calculation of a Margin of Exposure. It follows up on the recommendations put forward by the International Life Sciences Institute - European branch in 2010 on the application of the Margin of Exposure (MoE) approach to substances in food that are genotoxic and carcinogenic. The aims are to provide practical guidance on the relevance of animal tumour data for human carcinogenic hazard assessment, the appropriate selection of tumour data for Benchmark Dose Modelling, and approaches for dealing with the uncertainty associated with the selection of data for modelling and, consequently, with the derived Point of Departure (PoD) used to calculate the MoE. Although the concepts outlined in this article are interrelated, the background expertise needed to address each topic varies. For instance, the expertise needed to make a judgement on the biological relevance of a specific tumour type is clearly different from that needed to determine the statistical uncertainty around the data used for modelling a benchmark dose. As such, each topic is dealt with separately, to allow those with specialised knowledge to target key areas of guidance and to provide a more in-depth discussion of each subject for those new to the concept of the Margin of Exposure approach. Copyright © 2013 ILSI Europe. Published by Elsevier Ltd. All rights reserved.
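
    The arithmetic at the end of this pipeline is a simple ratio; the sketch below shows it with illustrative numbers (the 10,000 threshold reflects the commonly cited EFSA convention for genotoxic carcinogens, and the BMDL and exposure values are placeholders).

```python
def margin_of_exposure(bmdl_mg_per_kg_day, exposure_mg_per_kg_day):
    # MoE = point of departure (e.g. a BMDL10 from tumour-data modelling)
    # divided by the estimated human exposure, in the same units.
    return bmdl_mg_per_kg_day / exposure_mg_per_kg_day

# Illustrative values only:
moe = margin_of_exposure(bmdl_mg_per_kg_day=0.5, exposure_mg_per_kg_day=1e-5)
print(f"MoE = {moe:,.0f}")  # 50,000; >= 10,000 is often viewed as low concern
```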

  19. Rigorous-two-Steps scheme of TRIPOLI-4® Monte Carlo code validation for shutdown dose rate calculation

    NASA Astrophysics Data System (ADS)

    Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime

    2017-09-01

    After a fission or fusion reactor shuts down, the activated structures emit decay photons. For maintenance operations, the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. Such schemes are well developed for fusion applications, and more precisely for the ITER tokamak. This paper presents the rigorous-two-steps scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out; the results are in good agreement with those of the other participants.

  20. Whole-body to tissue concentration ratios for use in biota dose assessments for animals.

    PubMed

    Yankovich, Tamara L; Beresford, Nicholas A; Wood, Michael D; Aono, Tasuo; Andersson, Pål; Barnett, Catherine L; Bennett, Pamela; Brown, Justin E; Fesenko, Sergey; Fesenko, J; Hosseini, Ali; Howard, Brenda J; Johansen, Mathew P; Phaneuf, Marcel M; Tagami, Keiko; Takata, Hyoe; Twining, John R; Uchida, Shigeo

    2010-11-01

    Environmental monitoring programs often measure contaminant concentrations in animal tissues consumed by humans (e.g., muscle). By comparison, demonstration of the protection of biota from the potential effects of radionuclides involves a comparison of whole-body doses to radiological dose benchmarks. Consequently, methods for deriving whole-body concentration ratios based on tissue-specific data are required to make best use of the available information. This paper provides a series of look-up tables with whole-body:tissue-specific concentration ratios for non-human biota. Focus was placed on relatively broad animal categories (including molluscs, crustaceans, freshwater fishes, marine fishes, amphibians, reptiles, birds and mammals) and commonly measured tissues (specifically, bone, muscle, liver and kidney). Depending upon organism, whole-body to tissue concentration ratios were derived for between 12 and 47 elements. The whole-body to tissue concentration ratios can be used to estimate whole-body concentrations from tissue-specific measurements. However, we recommend that any given whole-body to tissue concentration ratio should not be used if the value falls between 0.75 and 1.5. Instead, a value of one should be assumed.
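
    Applying such a look-up table is a one-line conversion plus the paper's rule for near-unity ratios; the sketch below illustrates it with a placeholder concentration ratio (the real values come from the paper's tables).

```python
def whole_body_concentration(tissue_conc, cr_whole_body_to_tissue):
    # Whole-body concentration = CR x tissue concentration, except that
    # ratios falling between 0.75 and 1.5 are replaced by 1.0, per the
    # paper's recommendation.
    cr = cr_whole_body_to_tissue
    if 0.75 < cr < 1.5:
        cr = 1.0
    return cr * tissue_conc

# Illustrative: radionuclide measured in muscle at 40 Bq/kg, tabulated CR = 1.2
print(whole_body_concentration(40.0, 1.2))  # -> 40.0 (CR treated as 1.0)
```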

  1. Split exponential track length estimator for Monte-Carlo simulations of small-animal radiation therapy

    NASA Astrophysics Data System (ADS)

    Smekens, F.; Létang, J. M.; Noblet, C.; Chiavassa, S.; Delpon, G.; Freud, N.; Rit, S.; Sarrut, D.

    2014-12-01

    We propose the split exponential track length estimator (seTLE), a new kerma-based method combining the exponential variant of the TLE and a splitting strategy to speed up Monte Carlo (MC) dose computation for low energy photon beams. The splitting strategy is applied to both the primary and the secondary emitted photons, triggered by either the MC event generator for primaries or the photon interaction generator for secondaries. Split photons are replaced by virtual particles for fast dose calculation using the exponential TLE. Virtual particles are propagated by ray-tracing in voxelized volumes and by conventional MC navigation elsewhere. Hence, the contribution of volumes such as collimators, the treatment couch and holding devices can be taken into account in the dose calculation. We evaluated and analysed the seTLE method for two realistic small animal radiotherapy treatment plans. The effect of the kerma approximation, i.e. the complete deactivation of electron transport, was investigated. The efficiency of seTLE as a function of splitting multiplicity was also studied. A benchmark against analog MC and conventional TLE was carried out in terms of dose convergence and efficiency. The results showed that the deactivation of electrons impacts the dose at the water/bone interface in high dose regions. The maximum and mean dose differences, normalized to the dose at the isocenter, were 14% and 2%, respectively. Optimal splitting multiplicities were found to be around 300. In all situations, discrepancies in integral dose were below 0.5% and 99.8% of the voxels fulfilled a 1%/0.3 mm gamma index criterion. Efficiency gains of seTLE varied from 3.2 × 10⁵ to 7.7 × 10⁵ compared to analog MC and from 13 to 15 compared to conventional TLE. In conclusion, seTLE provides results similar to the TLE while increasing efficiency by a factor of 13 to 15, which makes it particularly well-suited to typical small animal radiation therapy applications.

  2. Benchmarking routine psychological services: a discussion of challenges and methods.

    PubMed

    Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick

    2014-01-01

    Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. This paper presents a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare, followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using the PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using the GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Limitations included significant heterogeneity among data sources and wide variations in ES and data completeness.
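
    The core quantity, an uncontrolled pre-post effect size compared against the national benchmarks, can be sketched as follows; the baseline-SD denominator is one common convention and may differ from the paper's exact formula, and the scores are invented for illustration.

```python
import statistics

def pre_post_effect_size(pre_scores, post_scores):
    # Uncontrolled pre-post ES: (mean pre - mean post) / SD of pre scores,
    # so that symptom reduction yields a positive effect size.
    return ((statistics.mean(pre_scores) - statistics.mean(post_scores))
            / statistics.stdev(pre_scores))

# Illustrative PHQ-9 scores for one service's treated patients:
pre = [18, 15, 20, 14, 17, 19, 16]
post = [10, 9, 14, 8, 12, 13, 9]
es = pre_post_effect_size(pre, post)
benchmarks = {"high": 0.91, "average": 0.73, "poor": 0.46}  # from the paper
print(f"service ES = {es:.2f}; depression benchmarks: {benchmarks}")
```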

  3. Low utilisation of diabetes medicines in Iran, despite their affordability (2000–2012): a time-series and benchmarking study

    PubMed Central

    Sarayani, Amir; Rashidian, Arash; Gholami, Kheirollah

    2014-01-01

    Objectives Diabetes is a major public health concern worldwide, particularly in low-income and middle-income countries (LMICs). Limited data exist on the status of access to diabetes medicines in LMICs. We assessed the utilisation and affordability of diabetes medicines in Iran as a middle-income country. Design We used a retrospective time-series design (2000–2012) and assessed national diabetes medicines' utilisation using pharmaceutical wholesale data. Methods We calculated the defined daily doses per 1000 inhabitants per day (DDDs/1000 inhabitants/day; DIDs) indicator. Findings were benchmarked against data from Organization for Economic Co-operation and Development (OECD) countries. We also employed the Drug Utilization-90% (DU-90) method and compared the DU-90 segment with the Essential Medicines List published by the WHO. We measured affordability as the number of minimum daily wages required to purchase a 1-month treatment course. Results Diabetes medicines' consumption increased from 4.47 to 33.54 DIDs. The benchmarking showed that medicines' utilisation in Iran in 2011 was only 54% of the median DIDs of 22 OECD countries. Oral hypoglycaemic agents accounted for over 80% of use throughout the study period. Regular and isophane insulin (NPH), glibenclamide, metformin and gliclazide were the DU-90 drugs in 2012. Metformin, glibenclamide and regular/NPH insulin combination therapy were affordable throughout the study period (∼0.4, ∼0.1 and ∼0.3 of the minimum daily wage, respectively). While the affordability of novel insulin preparations improved over time, they were still unaffordable in 2012. Conclusions The utilisation of diabetes medicines was relatively low, perhaps due to underdiagnosis and inadequate management of patients with diabetes. This occurred despite the affordability of essential diabetes medicines in Iran. Appropriate policies are required to address the underutilisation of diabetes medicines in Iran. PMID:25324322
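
    The DID indicator itself is straightforward arithmetic; a hedged sketch follows (the DDD value and sales volume are illustrative placeholders; official DDDs are published by the WHO).

```python
def dids(total_mg_sold, ddd_mg, population, days=365):
    # DDDs per 1000 inhabitants per day:
    # (total amount sold / DDD) / (population x days) x 1000
    return (total_mg_sold / ddd_mg) / (population * days) * 1000

# Illustrative: metformin (WHO DDD = 2000 mg), 3.0e12 mg sold in one year
print(dids(total_mg_sold=3.0e12, ddd_mg=2000, population=75_000_000))
```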

  4. ANALYSES OF NEUROBEHAVIORAL SCREENING DATA: BENCHMARK DOSE ESTIMATION.

    EPA Science Inventory

    Analysis of neurotoxicological screening data such as those of the functional observational battery (FOB) traditionally relies on analysis of variance (ANOVA) with repeated measurements, followed by determination of a no-adverse-effect level (NOAEL). The US EPA has proposed the ...

  5. Radiation dose in coronary angiography and intervention: initial results from the establishment of a multi-centre diagnostic reference level in Queensland public hospitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowhurst, James A, E-mail: jimcrowhurst@hotmail.com; School of Medicine, University of Queensland, St. Lucia, Brisbane, Queensland; Whitby, Mark

    Radiation dose to patients undergoing invasive coronary angiography (ICA) is relatively high. Guidelines suggest that a local benchmark or diagnostic reference level (DRL) be established for these procedures. This study sought to create a DRL for ICA procedures in Queensland public hospitals. Data were collected for all cardiac catheter laboratories in Queensland public hospitals, for diagnostic coronary angiography (CA) and single-vessel percutaneous intervention (PCI) procedures. Dose-area product (P_KA), skin surface entrance dose (K_AR), fluoroscopy time (FT), and patient height and weight were collected for 3 months. The DRL was set from the 75th percentile of the P_KA. 2590 patients were included in the CA group, where the median FT was 3.5 min (inter-quartile range = 2.3–6.1), median K_AR = 581 mGy (374–876), and median P_KA = 3908 µGy·m² (2489–5865); DRL = 5865 µGy·m². 947 patients were included in the PCI group, where the median FT was 11.2 min (7.7–17.4), median K_AR = 1501 mGy (928–2224), and median P_KA = 8736 µGy·m² (5449–12,900); DRL = 12,900 µGy·m². This study established a benchmark for radiation dose for diagnostic and interventional coronary angiography in Queensland public facilities.
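
    Setting the DRL at the 75th percentile of the audited dose distribution is a one-line computation once the data are collected; the sketch below uses simulated lognormal P_KA values purely as stand-ins for survey data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in per-patient P_KA values (uGy.m2) for diagnostic angiograms;
# a real audit would use the collected dose data instead.
p_ka = rng.lognormal(mean=8.3, sigma=0.6, size=2590)

drl = np.percentile(p_ka, 75)   # DRL set at the 75th percentile
print(f"median = {np.median(p_ka):.0f} uGy.m2, DRL = {drl:.0f} uGy.m2")
```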

  6. Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.

    PubMed

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in prospective (i.e. real-world) efforts. However, intrinsic differences between benchmarking sets and real screening chemical libraries can bias the assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementation for three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening

    PubMed Central

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2014-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in prospective (i.e. real-world) efforts. However, intrinsic differences between benchmarking sets and real screening chemical libraries can bias the assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artificial enrichment” and “false negative”. In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementation for three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The Leave-One-Out Cross-Validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased in terms of property matching, ROC curves and AUCs. PMID:25481478

  8. Can data-driven benchmarks be used to set the goals of healthy people 2010?

    PubMed Central

    Allison, J; Kiefe, C I; Weissman, N W

    1999-01-01

    OBJECTIVES: Expert panels determined the public health goals of Healthy People 2000 subjectively. The present study examined whether data-driven benchmarks provide a better alternative. METHODS: We developed the "pared-mean" method to define from data the best achievable health care practices. We calculated the pared-mean benchmark for screening mammography from the 1994 National Health Interview Survey, using the metropolitan statistical area as the "provider" unit. Beginning with the best-performing provider and adding providers in descending sequence, we established the minimum provider subset that included at least 10% of all women surveyed on this question. The pared-mean benchmark is then the proportion of women in this subset who received mammography. RESULTS: The pared-mean benchmark for screening mammography was 71%, compared with the Healthy People 2000 goal of 60%. CONCLUSIONS: For Healthy People 2010, benchmarks derived from data reflecting the best available care provide viable alternatives to consensus-derived targets. We are currently pursuing additional refinements to the data-driven pared-mean benchmark approach. PMID:9987466
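
    The pared-mean construction is easy to state algorithmically: rank provider units from best to worst, pool the best units until they cover at least 10% of all surveyed individuals, and take the pooled proportion. A minimal sketch with invented provider counts:

```python
def pared_mean_benchmark(providers, coverage=0.10):
    # providers: list of (n_receiving_care, n_surveyed) per provider unit.
    total = sum(n for _, n in providers)
    ranked = sorted(providers, key=lambda p: p[0] / p[1], reverse=True)
    events = covered = 0
    for received, n in ranked:
        events += received
        covered += n
        if covered >= coverage * total:   # minimum subset reaching 10%
            break
    return events / covered

# Illustrative (n receiving mammography, n surveyed) per metro area:
data = [(80, 100), (140, 200), (90, 150), (300, 600), (50, 120)]
print(f"pared-mean benchmark = {pared_mean_benchmark(data):.2f}")
```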

  9. Benchmarking to improve the quality of cystic fibrosis care.

    PubMed

    Schechter, Michael S

    2012-11-01

    Benchmarking involves the ascertainment of healthcare programs with most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stathakis, S; Defoor, D; Saenz, D

    Purpose: Stereotactic radiosurgery (SRS) outcomes are related to the dose delivered to the target and to the surrounding tissue. We have commissioned a Monte Carlo based dose calculation algorithm to recalculate delivered doses that were planned using a pencil beam dose calculation engine. Methods: Twenty consecutive previously treated patients were selected for this study. All plans were generated using the iPlan treatment planning system (TPS) and calculated using the pencil beam algorithm. Each patient plan consisted of 1 to 3 targets treated using dynamic conformal arcs or intensity modulated beams. Multi-target treatments were delivered using multiple isocenters, one for each target. These plans were recalculated for the purpose of this study using a single isocenter. The CT image sets, along with the plans, doses, and structures, were exported in DICOM RT format to the Monaco TPS, and the dose was recalculated using the same voxel resolution and monitor units. Benchmark data were also generated prior to the patient calculations to assess the accuracy of the two TPSs against measurements made with a micro ionization chamber in solid water. Results: Good agreement, within −0.4% for Monaco and +2.2% for iPlan, was observed for the water phantom measurements. Doses in patient geometry revealed differences of up to 9.6% for single-target plans and 9.3% for multiple-target-multiple-isocenter plans. The average dose difference for multi-target-single-isocenter plans was approximately 1.4%. Similar differences were observed for the OARs and integral dose. Conclusion: Accuracy of the beam model is crucial for dose calculation, especially for the small fields used in SRS treatments. A superior dose calculation algorithm such as Monte Carlo, with a properly commissioned beam model, which is unaffected by the lack of electronic equilibrium, should be preferred for the calculation of small fields to improve accuracy.

  11. Mechanism-based risk assessment strategy for drug-induced cholestasis using the transcriptional benchmark dose derived by toxicogenomics.

    PubMed

    Kawamoto, Taisuke; Ito, Yuichi; Morita, Osamu; Honda, Hiroshi

    2017-01-01

    Cholestasis is one of the major causes of drug-induced liver injury (DILI), which can result in withdrawal of approved drugs from the market. Early identification of cholestatic drugs is difficult due to the complex mechanisms involved. To develop a strategy for mechanism-based risk assessment of cholestatic drugs, we analyzed gene expression data obtained from the livers of rats orally administered 12 known cholestatic compounds repeatedly for 28 days at three dose levels. Qualitative analyses were performed using two statistical approaches (hierarchical clustering and principal component analysis), in addition to pathway analysis. The transcriptional benchmark dose (tBMD) and the tBMD 95% lower limit (tBMDL) were used for quantitative analyses, which revealed three compound sub-groups that produced different types of differential gene expression; the genes involved were mainly related to inflammation, cholesterol biosynthesis, and oxidative stress. Furthermore, the tBMDL values for each test compound were in good agreement with the relevant no observed adverse effect level. These results indicate that our novel strategy for drug safety evaluation, using mechanism-based classification and tBMDL, would facilitate the application of toxicogenomics for risk assessment of cholestatic DILI.

  12. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    PubMed

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practices, processes can be optimized and made more successful, mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and a five-stage model. Despite large administrative structures, there is much potential for improving business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method provides an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a specific product). Knowledge exchange is valuable for companies that aim to be global players. Benchmarking supports information exchange and improves the competitive ability of the participating enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances of reaching targets. The method gives confidence to partners without prior benchmarking experience. The study identifies new quality management procedures. Process management, and especially benchmarking, is shown to support improvements in the pharmaceutical industry.

  13. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  14. Contributions to Integral Nuclear Data in ICSBEP and IRPhEP since ND 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Briggs, J. Blair; Gulliford, Jim

    2016-09-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the international nuclear data community at ND2013. Since ND2013, the integral benchmark data available for nuclear data testing have continued to increase. The status of the international benchmark efforts and the latest contributions to integral nuclear data for testing are discussed. Select benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2013 are highlighted. The 2015 edition of the ICSBEP Handbook now contains 567 evaluations with benchmark specifications for 4,874 critical, near-critical, or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points apiece, and 207 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. The 2015 edition of the IRPhEP Handbook contains data from 143 different experimental series that were performed at 50 different nuclear facilities. Currently 139 of the 143 evaluations are published as approved benchmarks, with the remaining four evaluations published in draft format only. Measurements found in the IRPhEP Handbook include criticality, buckling and extrapolation length, spectral characteristics, reactivity effects, reactivity coefficients, kinetics, reaction-rate distributions, power distributions, isotopic compositions, and/or other miscellaneous types of measurements for various types of reactor systems. Annual technical review meetings for both projects were held in April 2016; additional approved benchmark evaluations will be included in the 2016 editions of these handbooks.

  15. Multidimensional dosimetry of {sup 106}Ru eye plaques using EBT3 films and its impact on treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heilemann, G., E-mail: gerd.heilemann@meduniwien.ac.at; Kostiukhina, N.; Nesvacil, N.

    2015-10-15

    Purpose: The purpose of this study was to establish a method to perform multidimensional radiochromic film measurements of ¹⁰⁶Ru plaques and to benchmark the resulting dose distributions against Monte Carlo simulations (MC) and microdiamond and diode measurements. Methods: Absolute dose rates and relative dose distributions in multiple planes were determined for three different plaque models (CCB, CCA, and COB), with three different plaques per model, using EBT3 films in an in-house developed polystyrene phantom and the MCNP6 MC code. Dose difference maps were generated to analyze interplaque variations for a specific type and to compare measurements against MC simulations. Furthermore, dose distributions were validated against values specified by the manufacturer (BEBIG) and against microdiamond and diode measurements in a water scanning phantom. Radial profiles were assessed and used to estimate dosimetric margins for a given combination of representative tumor geometry and plaque size. Results: Absolute dose rates at a reference depth of 2 mm on the central axis of the plaque show an agreement better than 5% (10%) when comparing film measurements (MCNP6) to the manufacturer's data. The reproducibility of depth-dose profile measurements was <7% (2 SD) for all investigated detectors and plaque types. Dose difference maps revealed minor interplaque deviations for a specific plaque type due to inhomogeneities of the active layer. The evaluation of dosimetric margins showed that, for a majority of the investigated cases, the tumor was not completely covered by the 100% isodose prescribed to the tumor apex if the difference between geometrical plaque size and tumor base was ≤4 mm. Conclusions: EBT3 film dosimetry in an in-house developed phantom was successfully used to characterize the dosimetric properties of different ¹⁰⁶Ru plaque models. The film measurements were validated against MC calculations and other experimental methods and showed good agreement with data from BEBIG, well within published tolerances. The dosimetric information as well as the interplaque comparison can be used for comprehensive quality assurance and for considerations in the treatment planning of ophthalmic brachytherapy.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, R; Zhu, X; Li, S

    Purpose: High Dose Rate (HDR) brachytherapy forward planning is principally an iterative process; hence, plan quality is affected by planners' experience and limited planning time, which may lead to sporadic errors and inconsistencies in planning. A statistical tool based on previously approved clinical treatment plans would help to maintain the consistency of planning quality and improve the efficiency of second checking. Methods: An independent dose calculation tool was developed from commercial software. Thirty-three previously approved cervical HDR plans with the same prescription dose (550 cGy), applicator type, and treatment protocol were examined, and ICRU-defined reference point doses (bladder, vaginal mucosa, rectum, and points A/B) along with dwell times were collected. The dose calculation tool then computed an appropriate range, with a 95% confidence interval, for each parameter obtained; these ranges serve as benchmarks for evaluating the corresponding parameters in future HDR treatment plans. Model quality was verified using five randomly selected approved plans from the same dataset. Results: Dose variations appear to be larger at the bladder and mucosa reference points than at the rectum. Most reference point doses from the verification plans fell within the predicted ranges, except the doses at two rectum points and two points at reference position A (owing to rectal anatomical variations and clinical adjustments of prescription points, respectively). Similar results were obtained for tandem and ring dwell times, despite relatively larger uncertainties. Conclusion: This statistical tool provides insight into the clinically acceptable range of cervical HDR plans, which could be useful in plan checking and in identifying potential planning errors, thus improving the consistency of plan quality.
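
    A tool of this kind reduces to interval estimation over historical plans plus a range check on each new plan; the sketch below uses a normal-theory 95% range and invented dose values, since the abstract does not specify the exact statistics.

```python
import statistics

def reference_range(values, z=1.96):
    # Approximate 95% range derived from historical approved plans.
    m, s = statistics.mean(values), statistics.stdev(values)
    return m - z * s, m + z * s

def check_plan(new_doses, history):
    # Flag any reference-point dose outside its historical 95% range.
    flags = {}
    for point, dose in new_doses.items():
        lo, hi = reference_range(history[point])
        flags[point] = "OK" if lo <= dose <= hi else f"CHECK ({lo:.0f}-{hi:.0f})"
    return flags

# Illustrative bladder/rectum reference-point doses (cGy) from approved plans:
history = {"bladder": [420, 450, 465, 430, 480, 440, 455, 435, 470, 445],
           "rectum": [300, 320, 310, 305, 335, 315, 325, 295, 330, 312]}
print(check_plan({"bladder": 452, "rectum": 410}, history))
```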

  17. Effective Dose in Nuclear Medicine Studies and SPECT/CT: Dosimetry Survey Across Quebec Province.

    PubMed

    Charest, Mathieu; Asselin, Chantal

    2018-06-01

    The aims of the current study were to draw a portrait of the delivered dose in selected nuclear medicine studies in Québec province and to assess the degree of change between an earlier survey performed in 2010 and a later survey performed in 2014. Methods: Each surveyed nuclear medicine department had to complete 2 forms: the first, about the administered activity in selected nuclear medicine studies, and the second, about the CT parameters used in SPECT/CT imaging, if available. The administered activities were converted into effective doses using the most recent conversion factors. Diagnostic reference levels were computed for each imaging procedure to obtain a benchmark for comparison. Results: The distributions of administered activity in various nuclear medicine studies, along with the corresponding distributions of effective doses, were determined. Excluding ¹³¹I for thyroid studies, ⁶⁷Ga-citrate for infectious workups, and combined stress and rest myocardial perfusion studies, the remainder of the ⁹⁹ᵐTc-based studies delivered average effective doses clustered below 10 mSv. Between the 2010 survey and the 2014 survey, there was a statistically significant decrease in the dose delivered by combined myocardial perfusion studies, from 18.3 to 14.5 mSv. ⁶⁷Ga-citrate studies for infectious workups also showed a significant decrease in delivered dose, from 31.0 to 26.2 mSv. The standardized CT portion of SPECT/CT studies yielded a mean effective dose 14 times lower than the radiopharmaceutical portion of the study. Conclusion: Between 2010 and 2014, there was a significant decrease in the delivered effective dose in myocardial perfusion and ⁶⁷Ga-citrate studies. The CT portions of the surveyed SPECT/CT studies contributed a relatively small fraction of the total delivered effective dose. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.

  18. U.S. EPA Superfund Program's Policy for Risk and Dose Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Stuart

    2008-01-15

    The Environmental Protection Agency (EPA) Office of Superfund Remediation and Technology Innovation (OSRTI) has primary responsibility for implementing the long-term (non-emergency) portion of a key U.S. law regulating cleanup: the Comprehensive Environmental Response, Compensation and Liability Act, CERCLA, nicknamed 'Superfund'. The purpose of the Superfund program is to protect human health and the environment over the long term from releases or potential releases of hazardous substances from abandoned or uncontrolled hazardous waste sites. The focus of this paper is on risk and dose assessment policies and tools for addressing radioactively contaminated sites under the Superfund program. EPA has almost completed two risk assessment tools that are particularly relevant to decommissioning activities conducted under CERCLA authority. These are: 1. the Building Preliminary Remediation Goals for Radionuclides (BPRG) electronic calculator, and 2. the Radionuclide Outdoor Surfaces Preliminary Remediation Goals (SPRG) electronic calculator. EPA developed the BPRG calculator to help standardize the evaluation and cleanup of radiologically contaminated buildings at which risk is being assessed for occupancy. BPRGs are radionuclide concentrations in dust, air and building materials that correspond to a specified level of human cancer risk. The intent of the SPRG calculator is to address hard outside surfaces such as building slabs, outside building walls, sidewalks and roads. SPRGs are radionuclide concentrations in dust and hard outside surface materials. EPA is also developing the 'Radionuclide Ecological Benchmark' calculator. This calculator provides biota concentration guides (BCGs), also known as ecological screening benchmarks, for use in ecological risk assessments at CERCLA sites. It is intended to develop ecological benchmarks as part of the EPA guidance 'Ecological Risk Assessment Guidance for Superfund: Process for Designing and Conducting Ecological Risk Assessments'. The calculator develops ecological benchmarks for ionizing radiation based on cell death only.

  19. GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods.

    PubMed

    Schaffter, Thomas; Marbach, Daniel; Floreano, Dario

    2011-08-15

    Over the last decade, numerous methods have been developed for inference of regulatory networks from gene expression data. However, accurate and systematic evaluation of these methods is hampered by the difficulty of constructing adequate benchmarks and the lack of tools for a differentiated analysis of network predictions on such benchmarks. Here, we describe a novel and comprehensive method for in silico benchmark generation and performance profiling of network inference methods available to the community as an open-source software called GeneNetWeaver (GNW). In addition to the generation of detailed dynamical models of gene regulatory networks to be used as benchmarks, GNW provides a network motif analysis that reveals systematic prediction errors, thereby indicating potential ways of improving inference methods. The accuracy of network inference methods is evaluated using standard metrics such as precision-recall and receiver operating characteristic curves. We show how GNW can be used to assess the performance and identify the strengths and weaknesses of six inference methods. Furthermore, we used GNW to provide the international Dialogue for Reverse Engineering Assessments and Methods (DREAM) competition with three network inference challenges (DREAM3, DREAM4 and DREAM5). GNW is available at http://gnw.sourceforge.net along with its Java source code, user manual and supporting data. Supplementary data are available at Bioinformatics online. dario.floreano@epfl.ch.

  20. An automated protocol for performance benchmarking a widefield fluorescence microscope.

    PubMed

    Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T

    2014-11-01

    Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day to day intensity calibration. Published 2014 Wiley Periodicals Inc. This article is a US government work and, as such, is in the public domain in the United States of America.

  1. Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

    PubMed Central

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140

  2. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    PubMed

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.

  3. Issues to consider in the derivation of water quality benchmarks for the protection of aquatic life.

    PubMed

    Schneider, Uwe

    2014-01-01

    While water quality benchmarks for the protection of aquatic life have been in use in some jurisdictions for several decades (USA, Canada, several European countries), more and more countries are now setting up their own national water quality benchmark development programs. In doing so, they either adopt an existing method from another jurisdiction, update an existing approach, or develop their own new derivation method. Each approach has its own advantages and disadvantages, and many issues have to be addressed when setting up a water quality benchmark development program or when deriving a water quality benchmark. Each of these tasks requires special expertise. They may seem simple, but are complex in their details. The intention of this paper was to provide some guidance for water quality benchmark development at the program level, in the development of the derivation methodology, and in the actual benchmark derivation step, as well as to point out some issues (notably the inclusion of adapted populations and cryptic species, and points to consider in the use of the species sensitivity distribution approach) and future opportunities (an international data repository and international collaboration in water quality benchmark development).

  4. Personalized Assessment of kV Cone Beam Computed Tomography Doses in Image-guided Radiotherapy of Pediatric Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yibao; Yan Yulong; Nath, Ravinder

    2012-08-01

    Purpose: To develop a quantitative method for the estimation of kV cone beam computed tomography (kVCBCT) doses in pediatric patients undergoing image-guided radiotherapy. Methods and Materials: Forty-two children were retrospectively analyzed in subgroups of different scanned regions: one group scanned in the head-and-neck region and the other in the pelvis. Critical structures in planning CT images were delineated on an Eclipse treatment planning system before being converted into CT phantoms for Monte Carlo simulations. A benchmarked EGS4 Monte Carlo code was used to calculate three-dimensional dose distributions of kVCBCT scans with the full-fan high-quality head or half-fan pelvis protocols predefined by the manufacturer. Based on planning CT images and structures exported in DICOM RT format, occipital-frontal circumferences (OFC) were calculated for head-and-neck patients using DICOMan software. Similarly, hip circumferences (HIP) were acquired for the pelvic group. Correlations between mean organ doses and age, weight, OFC, and HIP values were analyzed with the SigmaPlot software suite, and regression performance was assessed with relative dose differences (RDD) and coefficients of determination (R²). Results: kVCBCT-contributed mean doses to all critical structures decreased monotonically with the studied parameters, with a steeper decrease in the pelvis than in the head. Empirical functions have been developed for dose estimation of the major organs at risk in the head and pelvis, respectively. If evaluated with physical parameters other than age, a mean RDD of up to 7.9% was observed for all the structures in our population of 42 patients. Conclusions: kVCBCT doses are highly correlated with patient size. According to this study, weight can be used as a primary index for dose assessment in both head and pelvis scans, while OFC and HIP may serve as secondary indices for dose estimation in the corresponding regions. With the proposed empirical functions, it is possible to perform an individualized quantitative dose assessment of kVCBCT scans.

  5. A flexible Monte Carlo tool for patient or phantom specific calculations: comparison with preliminary validation measurements

    NASA Astrophysics Data System (ADS)

    Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.

    2008-02-01

    The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut-off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly-used functions are applied to represent the off-axis softening, increasing primary fluence with increasing angle ('the horn effect'), and electron contamination. The patient dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).

  6. Dose-Response Analysis of RNA-Seq Profiles in Archival ...

    EPA Pesticide Factsheets

    Use of archival resources has been limited to date by inconsistent methods for genomic profiling of degraded RNA from formalin-fixed paraffin-embedded (FFPE) samples. RNA-sequencing offers a promising way to address this problem. Here we evaluated transcriptomic dose responses using RNA-sequencing in paired FFPE and frozen (FROZ) samples from two archival studies in mice, one of them 20 years old. Experimental treatments included three different doses of di(2-ethylhexyl)phthalate or dichloroacetic acid for the recently archived and older studies, respectively. Total RNA was ribo-depleted and sequenced using the Illumina HiSeq platform. In the recently archived study, FFPE samples had 35% lower total counts compared to FROZ samples but high concordance in fold-change values of differentially expressed genes (DEGs) (r² = 0.99), highly enriched pathways (90% overlap with FROZ), and benchmark dose estimates for preselected target genes (2% difference vs FROZ). In contrast, older FFPE samples had markedly lower total counts (3% of FROZ) and poor concordance in global DEGs and pathways. However, counts from FFPE and FROZ samples still positively correlated (r² = 0.84 across all transcripts) and showed comparable dose responses for more highly expressed target genes. These findings highlight potential applications and issues in using RNA-sequencing data from FFPE samples. Recently archived FFPE samples were highly similar to FROZ samples in sequencing quality.

  7. Dose audit for patients undergoing two common radiography examinations with digital radiology systems

    PubMed Central

    İnal, Tolga; Ataç, Gökçe

    2014-01-01

    PURPOSE We aimed to determine the radiation doses delivered to patients undergoing general examinations using computed or digital radiography systems in Turkey. MATERIALS AND METHODS Radiographs of 20 patients undergoing posteroanterior chest X-ray and of 20 patients undergoing anteroposterior kidney-ureter-bladder radiography were evaluated in five X-ray rooms at four local hospitals in the Ankara region. Currently, almost all radiology departments in Turkey have switched from conventional radiography systems to computed radiography or digital radiography systems. Patient dose was measured for both systems. The results were compared with published diagnostic reference levels (DRLs) from the European Union and International Atomic Energy Agency. RESULTS The average entrance surface doses (ESDs) for chest examinations exceeded established international DRLs at two of the X-ray rooms in a hospital with computed radiography. All of the other ESD measurements were approximately equal to or below the DRLs for both examinations in all of the remaining hospitals. Improper adjustment of the exposure parameters, uncalibrated automatic exposure control systems, and failure of the technologists to choose exposure parameters properly were problems we noticed during the study. CONCLUSION This study is an initial attempt at establishing local DRL values for digital radiography systems, and will provide a benchmark so that the authorities can establish reference dose levels for diagnostic radiology in Turkey. PMID:24317331

  8. Practical examples of modeling choices and their consequences for risk assessment

    EPA Science Inventory

    Although benchmark dose (BMD) modeling has become the preferred approach to identifying a point of departure (POD) over the No Observed Adverse Effect Level, there remain challenges to its application in human health risk assessment. BMD modeling, as currently implemented by the...

  9. The U. S. Environmental Protection Agency's inhalation RfD methodology: Risk assessment for air toxics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarabek, A.M.; Menache, M.G.; Overton, J.H. Jr.

    1990-10-01

    The U.S. Environmental Protection Agency (U.S. EPA) has advocated the establishment of general and scientific guidelines for the evaluation of toxicological data and their use in deriving benchmark values to protect exposed populations from adverse health effects. The Agency's reference dose (RfD) methodology for deriving benchmark values for noncancer toxicity originally addressed risk assessment of oral exposures. This paper presents a brief background on the development of the inhalation reference dose (RfDi) methodology, including concepts and issues related to addressing the dynamics of the respiratory system as the portal of entry. Different dosimetric adjustments are described that were incorporated into the methodology to account for the nature of the inhaled agent (particle or gas) and the site of the observed toxic effects (respiratory or extra-respiratory). Impacts of these adjustments on the extrapolation of toxicity data of inhaled agents for human health risk assessment and future research directions are also discussed.

  10. U. S. Environmental Protection Agency's inhalation RFD methodology: Risk assessment for air toxics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarabek, A.M.; Menache, M.G.; Overton, J.H.

    1989-01-01

    The U.S. Environmental Protection Agency (U.S. EPA) has advocated the establishment of general and scientific guidelines for the evaluation of toxicological data and their use in deriving benchmark values to protect exposed populations from adverse health effects. The Agency's reference dose (RfD) methodology for deriving benchmark values for noncancer toxicity originally addressed risk assessment of oral exposures. The paper presents a brief background on the development of the inhalation reference dose (RFDi) methodology, including concepts and issues related to addressing the dynamics of the respiratory system as the portal of entry. Different dosimetric adjustments are described that were incorporated into the methodology to account for the nature of the inhaled agent (particle or gas) and the site of the observed toxic effects (respiratory or extrarespiratory). Impacts of these adjustments on the extrapolation of toxicity data of inhaled agents for human health risk assessment and future research directions are also discussed.

  11. Benchmarking the performance of fixed-image receptor digital radiography systems. Part 2: system performance metric.

    PubMed

    Lee, Kam L; Bernardo, Michael; Ireland, Timothy A

    2016-06-01

    This is part two of a two-part study benchmarking the system performance of fixed digital radiographic systems. The study compares the system performance of seven fixed digital radiography systems based on quantitative metrics such as the system modulation transfer function (sMTF), system normalised noise power spectrum (sNNPS), system detective quantum efficiency (sDQE) and entrance surface air kerma (ESAK). It was found that the most efficient image receptors (greatest sDQE) were not necessarily operating at the lowest ESAK. In part one of this study, sMTF was shown to depend on system configuration while sNNPS was shown to be relatively consistent across systems. Systems are ranked on their signal-to-noise ratio efficiency (sDQE) and their ESAK. Systems using the same equipment configuration do not necessarily have the same system performance. This implies that radiographic practice at the site will have an impact on the overall system performance. In general, systems are more dose efficient at low dose settings.

  12. In Search of a Time Efficient Approach to Crack and Delamination Growth Predictions in Composites

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Carvalho, Nelson

    2016-01-01

    Analysis benchmarking was used to assess the accuracy and time efficiency of algorithms suitable for automated delamination growth analysis. First, the Floating Node Method (FNM) was introduced and its combination with a simple power-law growth relation (Paris law) and the Virtual Crack Closure Technique (VCCT) was discussed. Implementation of the method into a user element (UEL) in Abaqus/Standard® was also presented. For the assessment of growth prediction capabilities, an existing benchmark case based on the Double Cantilever Beam (DCB) specimen was briefly summarized. Additionally, the development of new benchmark cases based on the Mixed-Mode Bending (MMB) specimen to assess the growth prediction capabilities under mixed-mode I/II conditions was discussed in detail. A comparison was presented in which the benchmark cases were used to assess the existing low-cycle fatigue analysis tool in Abaqus/Standard® against the FNM-VCCT fatigue growth analysis implementation. The low-cycle fatigue analysis tool in Abaqus/Standard® was able to yield results that were in good agreement with the DCB benchmark example. Results for the MMB benchmark cases, however, only captured the trend correctly. The user element (FNM-VCCT) always yielded results that were in excellent agreement with all benchmark cases, at a fraction of the analysis time. The ability to assess the implementation of two methods in one finite element code illustrated the value of establishing benchmark solutions.
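
    As a minimal sketch of the Paris-law ingredient only, the code below integrates da/dN = C·G^m with the mode-I energy release rate of a DCB arm taken from linear beam theory. The constants, specimen dimensions and load are made up, and none of the FNM/VCCT machinery is reproduced.

        # Minimal sketch: integrate a Paris-type law da/dN = C * G(a)**m for a DCB
        # arm under constant load, with G from linear beam theory. All constants,
        # dimensions and units are illustrative assumptions.
        P, b, h, E11 = 80.0, 25.0, 1.5, 1.2e5     # N, mm, mm, MPa
        C, m = 1.0e-3, 3.0                        # assumed Paris constants

        def G_dcb(a):
            """Mode-I energy release rate of a DCB specimen, beam theory."""
            return 12.0 * P**2 * a**2 / (b**2 * h**3 * E11)

        a, N, da = 30.0, 0.0, 0.05                # crack length (mm), cycles, step
        while a < 60.0:
            N += da / (C * G_dcb(a) ** m)         # cycles to advance the crack by da
            a += da
        print(f"~{N:.3e} cycles to grow from 30 mm to 60 mm")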

  13. Extensions to the integral line-beam method for gamma-ray skyshine analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.

    1995-08-01

    A computationally simple method for estimating gamma-ray skyshine dose rates has been developed on the basis of the line-beam response function (LBRF). Both Monte Carlo and point-kernel calculations that account for both annihilation and bremsstrahlung were used to generate LBRFs for gamma-ray energies between 10 and 100 MeV. The LBRF is approximated by a three-parameter formula. By combining these results with those obtained in an earlier study for gamma energies below 10 MeV, LBRF values can be readily and accurately evaluated for source energies between 0.02 and 100 MeV, for source-to-detector distances between 1 and 3000 m, and for beam angles as great as 180 degrees. Tables of the parameters for the approximate LBRF are presented. The new response functions are then applied to three simple skyshine geometries: an open silo, an infinite wall, and a rectangular four-wall building. Results are compared to those of previous calculations and to benchmark measurements. A new approach is introduced to account for overhead shielding of the skyshine source and compared to the simplistic exponential-attenuation method used in earlier studies. The effect of the air-ground interface, usually neglected in gamma skyshine studies, is also examined and an empirical correction factor is introduced. Finally, a revised code based on the improved LBRF approximations and the treatment of overhead shielding is presented, and results are shown for several benchmark problems.
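
    The abstract does not reproduce the paper's three-parameter formula, so the sketch below fits a generic three-parameter form R(x) = k·x^a·exp(-b·x) to made-up tabulated response values, working in log space for numerical stability. Both the form and the data are assumptions.

        # Sketch: fit a generic three-parameter response form to invented values
        # (the paper's own formula and parameter tables are not reproduced here).
        import numpy as np
        from scipy.optimize import curve_fit

        x = np.array([10., 50., 100., 300., 600., 1000., 2000., 3000.])  # metres
        R = np.array([2e-15, 9e-16, 5e-16, 1.2e-16, 3e-17, 8e-18, 6e-19, 6e-20])

        def log_lbrf(x, lnk, a, b):
            return lnk + a * np.log(x) - b * x

        (lnk, a, b), _ = curve_fit(log_lbrf, x, np.log(R), p0=(-33.0, -0.5, 2e-3))
        print(f"k = {np.exp(lnk):.3e}, a = {a:.2f}, b = {b:.2e} per m")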

  14. SU-F-T-153: Experimental Validation and Calculation Benchmark for a Commercial Monte Carlo Pencil Beam Scanning Proton Therapy Treatment Planning System in Water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, L; Huang, S; Kang, M

    Purpose: Eclipse proton Monte Carlo AcurosPT 13.7 was commissioned and experimentally validated for an IBA dedicated PBS nozzle in water. Topas 1.3 was used to isolate the cause of differences in output and penumbra between simulation and experiment. Methods: The spot profiles were measured in air at five locations using Lynx. A PTW-34070 Bragg peak chamber (Freiburg, Germany) was used to collect the relative integral Bragg peak for 15 proton energies from 100 MeV to 225 MeV. The phase space parameters (σx, σθ, ρxθ), number of protons per MU, energy spread, and calculated mean energy provided by AcurosPT were identically implemented into Topas. The absolute dose, profiles and field size factors measured using ionization chamber arrays were compared with both AcurosPT and Topas. Results: The beam spot size, σx, and the angular spread, σθ, in air were both energy-dependent: in particular, the spot size in air at isocentre ranged from 2.8 to 5.3 mm, and the angular spread ranged from 2.7 mrad to 6 mrad. The number of protons per MU increased from ∼9E7 at 100 MeV to ∼1.5E8 at 225 MeV. Both AcurosPT and TOPAS agreed with experiment within a 2 mm penumbra difference or 3% dose difference for scenarios including central-axis depth dose and profiles at two depths in multi-spot square fields, from 40 to 200 mm, for all the investigated single-energy and multi-energy beams, indicating a clinically acceptable source model and radiation transport algorithm in water. Conclusion: By comparing measured data and TOPAS simulation using the same source model, AcurosPT 13.7 was validated in water within a 2 mm penumbra difference or 3% dose difference. Benchmarks against an independent Monte Carlo code are recommended to study the agreement in output, field size factors and penumbra. This project is partially supported by the Varian grant under the master agreement between the University of Pennsylvania and Varian.
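
    A simplified one-dimensional stand-in for the reported agreement criteria: a pointwise 3% dose / 2 mm distance check on penumbra profiles. The profiles are synthetic, and a full gamma analysis is the rigorous version of this check.

        # Simplified 1-D "3% dose or 2 mm distance" agreement test between a
        # measured and a calculated profile (synthetic data for illustration).
        import numpy as np

        x = np.linspace(-50.0, 50.0, 201)                        # mm
        meas = 1.0 / (1.0 + np.exp((np.abs(x) - 30.0) / 3.0))    # made-up field edge
        calc = 1.0 / (1.0 + np.exp((np.abs(x) - 30.5) / 3.0))    # shifted by 0.5 mm

        def point_passes(i, dose_tol=0.03, dta_tol=2.0):
            if abs(calc[i] - meas[i]) <= dose_tol * meas.max():  # dose criterion
                return True
            near = np.abs(x - x[i]) <= dta_tol                   # points within 2 mm
            return bool(np.any(np.abs(calc[near] - meas[i]) <= dose_tol * meas.max()))

        rate = np.mean([point_passes(i) for i in range(x.size)])
        print(f"pass rate = {100.0 * rate:.1f}%")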

  15. Gaining competitive advantage in personal dosimetry services through ISO 9001 certification.

    PubMed

    Noriah, M A

    2007-01-01

    This paper discusses the advantages of the certification process in the quality assurance of individual dose monitoring in Malaysia. The demand by customers and the regulatory authority for a higher degree of quality service requires a switch in emphasis from a technically focused quality assurance program to comprehensive quality management for service provision. Achieving ISO 9001:2000 certification by an accredited third party demonstrates acceptable recognition and documents the fact that the methods used are capable of generating results that satisfy the performance criteria of the certification program. It also offers proof of the commitment to quality and, as a benchmark, allows measurement of progress in the continual improvement of service performance.

  16. Benchmarking, Total Quality Management, and Libraries.

    ERIC Educational Resources Information Center

    Shaughnessy, Thomas W.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)

  17. Comment on ‘egs_brachy: a versatile and fast Monte Carlo code for brachytherapy’

    NASA Astrophysics Data System (ADS)

    Yegin, Gultekin

    2018-02-01

    In a recent paper, Chamberland et al (2016 Phys. Med. Biol. 61 8214) develop a new Monte Carlo code called egs_brachy for brachytherapy treatments. It is based on EGSnrc and written in the C++ programming language. In order to benchmark the egs_brachy code, the authors use it in various test case scenarios in which complex geometry conditions exist. Another EGSnrc-based brachytherapy dose calculation engine, BrachyDose, is used for dose comparisons. The authors fail to prove that egs_brachy can produce reasonable dose values for brachytherapy sources in a given medium. The dose comparisons in the paper are erroneous and misleading. egs_brachy should not be used in any further research studies unless and until all the potential bugs are fixed in the code.

  18. An Unbiased Method To Build Benchmarking Sets for Ligand-Based Virtual Screening and its Application To GPCRs

    PubMed Central

    2015-01-01

    Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could bring the biases to the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCRs targets. To be more specific, our methods can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the “artificial enrichment” and “analogue bias” of a published GPCRs benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD. PMID:24749745

  19. An unbiased method to build benchmarking sets for ligand-based virtual screening and its application to GPCRs.

    PubMed

    Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon

    2014-05-27

    Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could bring the biases to the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCRs targets. To be more specific, our methods can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the "artificial enrichment" and "analogue bias" of a published GPCRs benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD.

  20. The impact of a scheduling change on ninth grade high school performance on biology benchmark exams and the California Standards Test

    NASA Astrophysics Data System (ADS)

    Leonardi, Marcelo

    The primary purpose of this study was to examine the impact of a scheduling change from a trimester 4x4 block schedule to a modified hybrid schedule on student achievement in ninth grade biology courses. This study examined the impact of the scheduling change on student achievement through teacher-created benchmark assessments in Genetics, DNA, and Evolution and on the California Standards Test in Biology. The secondary purpose of this study was to examine ninth grade biology teachers' perceptions of ninth grade biology student achievement. Using a mixed methods research approach, data were collected both quantitatively and qualitatively, aligned to the research questions. Quantitative methods included gathering data from departmental benchmark exams and the California Standards Test in Biology and conducting multivariate analyses of covariance and analyses of covariance to determine significant differences. Qualitative methods included journal entry questions and focus group interviews. The results revealed a statistically significant increase in scores on both the DNA and Evolution benchmark exams; the scheduling change was responsible for 1.5% of the increase in DNA benchmark scores and 2% of the increase in Evolution benchmark scores. The results revealed a statistically significant decrease in scores on the Genetics benchmark exam, with the scheduling change responsible for 1% of the decrease in Genetics benchmark scores. The results also revealed a statistically significant increase in scores on the CST Biology exam, with the scheduling change responsible for 0.7% of the increase in CST Biology scores. Results of the focus group discussions indicated that all teachers preferred the modified hybrid schedule over the trimester schedule and that it improved student achievement.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dourson, M.L.

    The quantitative procedures associated with noncancer risk assessment include reference dose (RfD), benchmark dose, and severity modeling. The RfD, which is part of the EPA risk assessment guidelines, is an estimation of a level that is likely to be without any health risk to sensitive individuals. The RfD requires two major judgments: the first is choice of a critical effect(s) and its No Observed Adverse Effect Level (NOAEL); the second judgment is choice of an uncertainty factor. This paper discusses major assumptions and limitations of the RfD model.

  2. MIPS bacterial genomes functional annotation benchmark dataset.

    PubMed

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality data (as benchmark) as well as tedious preparatory work to generate sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  3. A broad scope knowledge based model for optimization of VMAT in esophageal cancer: validation and assessment of plan quality among different treatment centers.

    PubMed

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca

    2015-10-31

    To evaluate the performance of a broad-scope model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 patients previously treated in two different institutions was selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients, from the same institution and from another clinic that did not provide patients for the training phase. Comparison of the automated plans was made against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3%) the reference plans failed to respect the constraints while the model-based plans succeeded. In only 3 cases (<0.5%) did the reference plans pass the criteria while the model-based plans failed. In 5.3% of the cases both groups of plans failed, and in the remaining cases both passed the tests. Plans were optimised using a broad-scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and could encourage its application to clinical practice.

  4. Marginal Iodide Deficiency and Thyroid Function: Dose-response analysis for quantitative pharmacokinetic modeling

    EPA Science Inventory

    Severe iodine deficiency is known to cause adverse health outcomes and remains a benchmark for understanding the effects of hypothyroidism. However, the implications of marginal iodine deficiency on function of the thyroid axis remain less well known. The current study examined t...

  5. CatReg Software for Categorical Regression Analysis (May 2016)

    EPA Science Inventory

    CatReg 3.0 is a Microsoft Windows enhanced version of the Agency’s categorical regression analysis (CatReg) program. CatReg complements EPA’s existing Benchmark Dose Software (BMDS) by greatly enhancing a risk assessor’s ability to determine whether data from separate toxicologic...

  6. Recent Additions for 1998

    EPA Science Inventory

    December 22, 1998
    Benchmark Dose Software

    December 16, 1998

  Extension of PENELOPE to protons: Simulation of nuclear reactions and benchmark with Geant4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterpin, E.; Sorriaux, J.; Vynckier, S.

    2013-11-15

    Purpose: To describe the implementation of nuclear reactions in the extension of the Monte Carlo (MC) code PENELOPE to protons (PENH) and to benchmark it against Geant4. Methods: PENH is based on mixed-simulation mechanics for both elastic and inelastic electromagnetic (EM) collisions. The adopted differential cross sections for EM elastic collisions are calculated using the eikonal approximation with the Dirac–Hartree–Fock–Slater atomic potential. Cross sections for EM inelastic collisions are computed within the relativistic Born approximation, using the Sternheimer–Liljequist model of the generalized oscillator strength. Nuclear elastic and inelastic collisions were simulated explicitly using the scattering analysis interactive dial-in database for ¹H and ICRU 63 data for ¹²C, ¹⁴N, ¹⁶O, ³¹P, and ⁴⁰Ca. Secondary protons, alphas, and deuterons were all simulated as protons, with the energy adapted to ensure consistent range. Prompt gamma emission can also be simulated upon user request. Simulations were performed in a water phantom with nuclear interactions switched off or on, and integral depth-dose distributions were compared. Binary-cascade and precompound models were used for Geant4. Initial energies of 100 and 250 MeV were considered. For cases with no nuclear interactions simulated, additional simulations in a water phantom with tight resolution (1 mm in all directions) were performed with FLUKA. Finally, integral depth-dose distributions for a 250 MeV beam were computed with Geant4 and PENH in a homogeneous phantom with, first, ICRU striated muscle and, second, ICRU compact bone. Results: For simulations with EM collisions only, integral depth-dose distributions were within 1%/1 mm for doses higher than 10% of the Bragg-peak dose. For central-axis depth-dose and lateral profiles in a phantom with tight resolution, there are significant deviations between Geant4 and PENH (up to 60%/1 cm for depth-dose distributions). The agreement is much better with FLUKA, with deviations within 3%/3 mm. When nuclear interactions were turned on, agreement (within 6% before the Bragg peak) between PENH and Geant4 was consistent with uncertainties in nuclear models and cross sections, whatever the material simulated (water, muscle, or bone). Conclusions: A detailed and flexible description of nuclear reactions has been implemented in the PENH extension of PENELOPE to protons, which utilizes a mixed-simulation scheme for both elastic and inelastic EM collisions, analogous to the well-established algorithm for electrons/positrons. PENH is compatible with all current main programs that use PENELOPE as the MC engine. The nuclear model of PENH is realistic enough to give dose distributions in fair agreement with those computed by Geant4.

  7. Accuracy of effective dose estimation in personal dosimetry: a comparison between single-badge and double-badge methods and the MOSFET method.

    PubMed

    Januzis, Natalie; Belley, Matthew D; Nguyen, Giao; Toncheva, Greta; Lowry, Carolyn; Miller, Michael J; Smith, Tony P; Yoshizumi, Terry T

    2014-05-01

    The purpose of this study was three-fold: (1) to measure the transmission properties of various lead shielding materials, (2) to benchmark the accuracy of commercial film badge readings, and (3) to compare the accuracy of effective dose (ED) conversion factors (CF) of the U.S. Nuclear Regulatory Commission methods to the MOSFET method. The transmission properties of lead aprons and the accuracy of film badges were studied using an ion chamber and monitor. ED was determined using an adult male anthropomorphic phantom that was loaded with 20 diagnostic MOSFET detectors and scanned with a whole body CT protocol at 80, 100, and 120 kVp. One commercial film badge was placed at the collar and one at the waist. Individual organ doses and waist badge readings were corrected for lead apron attenuation. ED was computed using ICRP 103 tissue weighting factors, and ED CFs were calculated by taking the ratio of ED and badge reading. The measured single badge CFs were 0.01 (±14.9%), 0.02 (±9.49%), and 0.04 (±15.7%) for 80, 100, and 120 kVp, respectively. Current regulatory ED CF for the single badge method is 0.3; for the double-badge system, they are 0.04 (collar) and 1.5 (under lead apron at the waist). The double-badge system provides a better coefficient for the collar at 0.04; however, exposure readings under the apron are usually negligible to zero. Based on these findings, the authors recommend the use of ED CF of 0.01 for the single badge system from 80 kVp (effective energy 50.4 keV) data.
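
    A toy version of the effective-dose bookkeeping described here: tissue-weighted summation of organ doses using a subset of the ICRP 103 weights (photons, so the radiation weighting factor is 1), followed by the badge-to-ED conversion factor. All organ doses and the badge reading are invented numbers.

        # Toy effective-dose calculation: ED = sum of w_T * H_T over tissues,
        # then CF = ED / badge reading. Values below are made up.
        organ_dose = {"lung": 12.0, "stomach": 10.5, "colon": 9.8,
                      "breast": 11.2, "gonads": 8.7, "remainder": 9.0}   # mGy
        w_T = {"lung": 0.12, "stomach": 0.12, "colon": 0.12,
               "breast": 0.12, "gonads": 0.08, "remainder": 0.12}        # ICRP 103 subset

        ED = sum(w_T[t] * organ_dose[t] for t in organ_dose)             # mSv
        badge = 300.0                                                    # collar badge, mSv
        print(f"ED = {ED:.2f} mSv, CF = ED / badge = {ED / badge:.3f}")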

  8. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    PubMed

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or how many, implants should be statistically compared with a benchmark to assess whether that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. We conducted a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluated the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that when benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
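
    A crude sketch of the sample-size question using a one-sample binomial z-test; the paper's 1-Kaplan-Meier and competing-risk analyses are more involved. The benchmark, margin and true failure rate below are assumptions chosen to illustrate the low power at small non-inferiority margins.

        # Simulated power of a one-sample non-inferiority z-test of a failure
        # proportion against an external benchmark (all rates are invented).
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        p_bench, margin, p_true = 0.05, 0.005, 0.05    # benchmark, NI margin, truth
        alpha, n, reps = 0.05, 3200, 5000

        rejections = 0
        for _ in range(reps):
            f = rng.binomial(n, p_true) / n            # observed failure proportion
            se = np.sqrt(f * (1.0 - f) / n)
            z = (f - (p_bench + margin)) / se          # H0: p >= benchmark + margin
            rejections += z < -norm.ppf(1.0 - alpha)   # reject H0 => non-inferior
        print(f"power ~ {rejections / reps:.2f}")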

  9. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    PubMed Central

    2010-01-01

    Background: Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods: Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple case study, a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to address the research objectives. Results: We adapted and evaluated existing benchmarking processes by formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were identified, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. Conclusions: The improved benchmarking process and the success factors can produce relevant input to improve the operations management of specialty hospitals. PMID:20807408

  10. Do fungi need to be included within environmental radiation protection assessment models?

    PubMed

    Guillén, J; Baeza, A; Beresford, N A; Wood, M D

    2017-09-01

    Fungi are used as biomonitors of forest ecosystems, having comparatively high uptakes of anthropogenic and naturally occurring radionuclides. However, whilst they are known to accumulate radionuclides, they are not typically considered in radiological assessment tools for environmental (non-human biota) assessment. In this paper the total dose rate to fungi is estimated using the ERICA Tool, assuming different fruiting body geometries: a single ellipsoid and more complex geometries considering the different components of the fruit body and their differing radionuclide contents based upon measurement data. Anthropogenic and naturally occurring radionuclide concentrations from the Mediterranean ecosystem (Spain) were used in this assessment. The total estimated weighted dose rate was in the range 0.31-3.4 μGy/h (5th-95th percentile), similar to natural exposure rates reported for other wild groups. The total estimated dose was dominated by internal exposure, especially from ²²⁶Ra and ²¹⁰Po. Differences in dose rate between complex geometries and a simple ellipsoid model were negligible. Therefore, the simple ellipsoid model is recommended to assess dose rates to fungal fruiting bodies. Fungal mycelium was also modelled assuming a long filament. Using these geometries, assessments for fungal fruiting bodies and mycelium under different scenarios (post-accident, planned release and existing exposure) were conducted, each being based on available monitoring data. The estimated total dose rate in each case was below the ERICA screening benchmark dose rate, except for the example post-accident existing exposure scenario (the Chernobyl Exclusion Zone), for which a dose rate in excess of 35 μGy/h was estimated for the fruiting body. The estimated mycelium dose rate in this post-accident existing exposure scenario was close to the 400 μGy/h benchmark for plants, although fungi are generally considered to be less radiosensitive than plants. Further research on appropriate mycelium geometries and their radionuclide content is required. Based on the assessments presented in this paper, there is no need to recommend that fungi be added to the existing assessment tools and frameworks; if required, some tools allow a geometry representing fungi to be created and used within a dose assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. SU-E-T-556: Monte Carlo Generated Dose Distributions for Orbital Irradiation Using a Single Anterior-Posterior Electron Beam and a Hanging Lens Shield

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duwel, D; Lamba, M; Elson, H

    Purpose: Various cancers of the eye are successfully treated with radiotherapy utilizing one anterior-posterior (A/P) beam that encompasses the entire content of the orbit. In such cases, a hanging lens shield can be used to spare dose to the radiosensitive lens of the eye to prevent cataracts. Methods: This research focused on Monte Carlo characterization of dose distributions resulting from a single A/P field to the orbit with a hanging shield in place. Monte Carlo codes were developed which calculated dose distributions for various electron beam energies, hanging lens shield radii, shield heights above the eye, and beam spoiler configurations. Film dosimetry was used to benchmark the coding to ensure it was calculating relative dose accurately. Results: The Monte Carlo dose calculations indicated that lateral and depth dose profiles are insensitive to changes in shield height and electron beam energy. Dose deposition was sensitive to shield radius and to beam spoiler composition and height above the eye. Conclusion: The use of a single A/P electron beam to treat cancers of the eye while maintaining adequate lens sparing is feasible. The shield radius should be customized to match the radius of the patient's lens. A beam spoiler should be used if it is desired to substantially dose the eye tissues lying posterior to the lens in the shadow of the lens shield. The compromise between lens sparing and dose to diseased tissues surrounding the lens can be modulated by varying the beam spoiler thickness, spoiler material composition, and spoiler height above the eye. The sparing ratio is a metric that can be used to evaluate this compromise: the higher the ratio, the more dose received by the tissues immediately posterior to the lens relative to the dose received by the lens.

  12. THE EFFECT OF BACKGROUND SIGNAL AND ITS REPRESENTATION IN DECONVOLUTION OF EPR SPECTRA ON ACCURACY OF EPR DOSIMETRY IN BONE.

    PubMed

    Ciesielski, Bartlomiej; Marciniak, Agnieszka; Zientek, Agnieszka; Krefft, Karolina; Cieszyński, Mateusz; Boguś, Piotr; Prawdzik-Dampc, Anita

    2016-12-01

    This study addresses the accuracy of EPR dosimetry in bone based on deconvolution of the experimental spectra into background (BG) and radiation-induced signal (RIS) components. The model RISs were represented by EPR spectra from irradiated enamel or bone powder, and the model BG signals by EPR spectra of unirradiated bone samples or by simulated spectra. Samples of compact and trabecular bone were irradiated in the 30-270 Gy range and the intensities of their RISs were calculated using various combinations of those benchmark spectra. The relationships between dose and RIS were linear (R² > 0.995), with practically no difference between results obtained using signals from irradiated enamel or bone as the model RIS. Use of different experimental spectra for the model BG resulted in variations in the intercepts of the dose-RIS calibration lines, leading to systematic errors in reconstructed doses, in particular for high-BG samples of trabecular bone. These errors were reduced when simulated spectra, instead of the experimental ones, were used as the benchmark BG signal in the applied deconvolution procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
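
    The deconvolution step can be sketched as a linear least-squares fit of the measured spectrum to benchmark BG and RIS components. The three spectra below are synthetic shapes, not real EPR data; the fitted RIS amplitude is the dose-proportional quantity.

        # Sketch: least-squares decomposition of a measured spectrum into
        # benchmark RIS and BG components (all spectra are synthetic stand-ins).
        import numpy as np

        B = np.linspace(-5.0, 5.0, 400)                  # field axis, arbitrary units
        ris = -B * np.exp(-B**2)                         # derivative-like RIS shape
        bg = np.exp(-0.5 * (B / 3.0) ** 2)               # broad background shape
        noise = np.random.default_rng(2).normal(0.0, 0.01, B.size)
        measured = 2.5 * ris + 1.2 * bg + noise

        A = np.column_stack([ris, bg])
        coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
        print(f"RIS amplitude = {coef[0]:.2f}, BG amplitude = {coef[1]:.2f}")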

  13. Benchmark Dose Software Development and Maintenance Ten Berge Cxt Models

    EPA Science Inventory

    This report is intended to provide an overview of beta version 1.0 of the implementation of a concentration-time (CxT) model originally programmed and provided by Wil ten Berge (referred to hereafter as the ten Berge model). The recoding and development described here represent ...

  14. Comparison of methods for individualized astronaut organ dosimetry: Morphometry-based phantom library versus body contour autoscaling of a reference phantom

    NASA Astrophysics Data System (ADS)

    Sands, Michelle M.; Borrego, David; Maynard, Matthew R.; Bahadori, Amir A.; Bolch, Wesley E.

    2017-11-01

    One of the hazards faced by space crew members in low-Earth orbit or in deep space is exposure to ionizing radiation. It has been shown previously that while differences in organ-specific and whole-body risk estimates due to body size variations are small for highly-penetrating galactic cosmic rays, large differences in these quantities can result from exposure to shorter-range trapped proton or solar particle event radiations. For this reason, it is desirable to use morphometrically accurate computational phantoms representing each astronaut for a risk analysis, especially in the case of a solar particle event. An algorithm was developed to automatically sculpt and scale the UF adult male and adult female hybrid reference phantom to the individual outer body contour of a given astronaut. This process begins with the creation of a laser-measured polygon mesh model of the astronaut's body contour. Using the auto-scaling program and selecting several anatomical landmarks, the UF adult male or female phantom is adjusted to match the laser-measured outer body contour of the astronaut. A dosimetry comparison study was conducted to compare the organ dose accuracy of both the autoscaled phantom and that based upon a height-weight matched phantom from the UF/NCI Computational Phantom Library. Monte Carlo methods were used to simulate the environment of the August 1972 and February 1956 solar particle events. Using a series of individual-specific voxel phantoms as a local benchmark standard, autoscaled phantom organ dose estimates were shown to provide a 1% and 10% improvement in organ dose accuracy for a population of females and males, respectively, as compared to organ doses derived from height-weight matched phantoms from the UF/NCI Computational Phantom Library. In addition, this slight improvement in organ dose accuracy from the autoscaled phantoms is accompanied by reduced computer storage requirements and a more rapid method for individualized phantom generation when compared to the UF/NCI Computational Phantom Library.

  15. Genetic toxicology at the crossroads-from qualitative hazard evaluation to quantitative risk assessment.

    PubMed

    White, Paul A; Johnson, George E

    2016-05-01

    Applied genetic toxicology is undergoing a transition from qualitative hazard identification to quantitative dose-response analysis and risk assessment. To facilitate this change, the Health and Environmental Sciences Institute (HESI) Genetic Toxicology Technical Committee (GTTC) sponsored a workshop held in Lancaster, UK on July 10-11, 2014. The event included invited speakers from several institutions and the content was divided into three themes: 1, point-of-departure metrics for quantitative dose-response analysis in genetic toxicology; 2, measurement and estimation of exposures for better extrapolation to humans; and 3, the use of quantitative approaches in genetic toxicology for human health risk assessment (HHRA). A host of pertinent issues were discussed relating to the use of in vitro and in vivo dose-response data, the development of methods for in vitro to in vivo extrapolation and approaches to use in vivo dose-response data to determine human exposure limits for regulatory evaluations and decision-making. This Special Issue, which was inspired by the workshop, contains a series of papers that collectively address topics related to the aforementioned themes. The Issue includes contributions that collectively evaluate, describe and discuss in silico, in vitro, in vivo and statistical approaches that are facilitating the shift from qualitative hazard evaluation to quantitative risk assessment. The use and application of the benchmark dose approach was a central theme in many of the workshop presentations and discussions, and the Special Issue includes several contributions that outline novel applications for the analysis and interpretation of genetic toxicity data. Although the contents of the Special Issue constitute an important step towards the adoption of quantitative methods for regulatory assessment of genetic toxicity, formal acceptance of quantitative methods for HHRA and regulatory decision-making will require consensus regarding the relationships between genetic damage and disease, and the concomitant ability to use genetic toxicity results per se. © Her Majesty the Queen in Right of Canada 2016. Reproduced with the permission of the Minister of Health.

  16. Establishing Benchmarks for Outcome Indicators: A Statistical Approach to Developing Performance Standards.

    ERIC Educational Resources Information Center

    Henry, Gary T.; And Others

    1992-01-01

    A statistical technique is presented for developing performance standards based on benchmark groups. The benchmark groups are selected using a multivariate technique that relies on a squared Euclidean distance method. For each observation unit (a school district in the example), a unique comparison group is selected. (SLD)
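
    A minimal sketch of the selection step, assuming standardised covariates and k nearest neighbours under squared Euclidean distance; the covariate matrix is a random placeholder.

        # Sketch: each district's benchmark group is its k nearest neighbours in
        # squared Euclidean distance over standardised covariates (made-up data).
        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(50, 4))                     # 50 districts, 4 covariates
        Xz = (X - X.mean(axis=0)) / X.std(axis=0)        # standardise first

        def benchmark_group(i, k=5):
            d2 = ((Xz - Xz[i]) ** 2).sum(axis=1)         # squared Euclidean distances
            d2[i] = np.inf                               # exclude the district itself
            return np.argsort(d2)[:k]

        print("comparison group for district 0:", benchmark_group(0))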

  17. Apples to Oranges: Benchmarking Vocational Education and Training Programmes

    ERIC Educational Resources Information Center

    Bogetoft, Peter; Wittrup, Jesper

    2017-01-01

    This paper discusses methods for benchmarking vocational education and training colleges and presents results from a number of models. It is conceptually difficult to benchmark vocational colleges. The colleges typically offer a wide range of course programmes, and the students come from different socioeconomic backgrounds. We solve the…

  18. Practical Considerations when Using Benchmarking for Accountability in Higher Education

    ERIC Educational Resources Information Center

    Achtemeier, Sue D.; Simpson, Ronald D.

    2005-01-01

    The qualitative study on which this article is based examined key individuals' perceptions, both within a research university community and beyond in its external governing board, of how to improve benchmarking as an accountability method in higher education. Differing understanding of benchmarking revealed practical implications for using it as…

  19. A preliminary study of in-house Monte Carlo simulations: an integrated Monte Carlo verification system.

    PubMed

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hideki; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  1. Funnel plot control limits to identify poorly performing healthcare providers when there is uncertainty in the value of the benchmark.

    PubMed

    Manktelow, Bradley N; Seaton, Sarah E; Evans, T Alun

    2016-12-01

    There is an increasing use of statistical methods, such as funnel plots, to identify poorly performing healthcare providers. Funnel plots comprise the construction of control limits around a benchmark, and providers with outcomes falling outside the limits are investigated as potential outliers. The benchmark is usually estimated from observed data, but uncertainty in this estimate is usually ignored when constructing control limits. In this paper, the use of funnel plots in the presence of uncertainty in the value of the benchmark is reviewed for outcomes from a Binomial distribution. Two methods to derive the control limits are shown: (i) prediction intervals and (ii) tolerance intervals. Tolerance intervals formally include the uncertainty in the value of the benchmark while prediction intervals do not. The probability properties of 95% control limits derived using each method were investigated through hypothesised scenarios. Neither prediction intervals nor tolerance intervals produce funnel plot control limits that satisfy the nominal probability characteristics when there is uncertainty in the value of the benchmark. This is not necessarily to say that funnel plots have no role to play in healthcare, but without the development of intervals satisfying the nominal probability characteristics they must be interpreted with care. © The Author(s) 2014.
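
    The sketch below contrasts 95% control limits that ignore the benchmark's own sampling error with limits widened for it. The widened standard error is an informal stand-in for the tolerance-interval construction discussed in the paper, not its exact formula; all numbers are invented.

        # 95% funnel-plot upper limits around a binomial benchmark, with and
        # without an allowance for uncertainty in the benchmark itself.
        import numpy as np
        from scipy.stats import norm

        p0, n0 = 0.10, 2000              # benchmark rate and the size it came from
        n = np.arange(50, 2001, 50)      # provider volumes
        z = norm.ppf(0.975)

        se_naive = np.sqrt(p0 * (1 - p0) / n)                     # ignores benchmark error
        se_wide = np.sqrt(p0 * (1 - p0) * (1.0 / n + 1.0 / n0))   # adds benchmark variance
        print(f"at n = 200: naive upper limit {p0 + z * se_naive[3]:.3f}, "
              f"widened {p0 + z * se_wide[3]:.3f}")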

  2. Benchmark Dose Modeling Estimates of the Concentrations of Inorganic Arsenic That Induce Changes to the Neonatal Transcriptome, Proteome, and Epigenome in a Pregnancy Cohort.

    PubMed

    Rager, Julia E; Auerbach, Scott S; Chappell, Grace A; Martin, Elizabeth; Thompson, Chad M; Fry, Rebecca C

    2017-10-16

    Prenatal inorganic arsenic (iAs) exposure influences the expression of critical genes and proteins associated with adverse outcomes in newborns, in part through epigenetic mediators. The doses at which these genomic and epigenomic changes occur have yet to be evaluated in the context of dose-response modeling. The goal of the present study was to estimate iAs doses that correspond to changes in transcriptomic, proteomic, epigenomic, and integrated multi-omic signatures in human cord blood through benchmark dose (BMD) modeling. Genome-wide DNA methylation, microRNA expression, mRNA expression, and protein expression levels in cord blood were modeled against total urinary arsenic (U-tAs) levels from pregnant women exposed to varying levels of iAs. Dose-response relationships were modeled in BMDExpress, and BMDs representing 10% response levels were estimated. Overall, DNA methylation changes were estimated to occur at lower exposure concentrations in comparison to other molecular endpoints. Multi-omic module eigengenes were derived through weighted gene co-expression network analysis, representing co-modulated signatures across transcriptomic, proteomic, and epigenomic profiles. One module eigengene was associated with decreased gestational age occurring alongside increased iAs exposure. Genes/proteins within this module eigengene showed enrichment for organismal development, including potassium voltage-gated channel subfamily Q member 1 (KCNQ1), an imprinted gene showing differential methylation and expression in response to iAs. Modeling of this prioritized multi-omic module eigengene resulted in a BMD(BMDL) of 58(45) μg/L U-tAs, which was estimated to correspond to drinking water arsenic concentrations of 51(40) μg/L. Results are in line with epidemiological evidence supporting effects of prenatal iAs occurring at levels <100 μg As/L urine. Together, findings present a variety of BMD measures to estimate doses at which prenatal iAs exposure influences neonatal outcome-relevant transcriptomic, proteomic, and epigenomic profiles.
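
    A single-gene sketch of the BMD step: fit a Hill model, then solve for the dose giving a 10% change from the modelled control mean (one of several BMR conventions; BMDExpress automates this across the genome). The expression data and parameters are invented.

        # Toy BMD estimate for one gene from a fitted Hill dose-response model.
        import numpy as np
        from scipy.optimize import curve_fit, brentq

        dose = np.array([0., 0., 10., 10., 50., 50., 100., 100.])   # μg/L
        expr = np.array([1.0, 1.05, 1.1, 1.2, 1.6, 1.7, 1.9, 2.0])  # normalised

        def hill(d, y0, ymax, k, h):
            return y0 + (ymax - y0) * d**h / (k**h + d**h)

        popt, _ = curve_fit(hill, dose, expr, p0=(1.0, 2.0, 40.0, 1.5),
                            bounds=([0.5, 1.0, 1.0, 0.5], [2.0, 3.0, 200.0, 4.0]))
        target = 1.10 * popt[0]                                     # 10% above control
        bmd = brentq(lambda d: hill(d, *popt) - target, 1e-6, 100.0)
        print(f"BMD(10%) ~ {bmd:.1f} μg/L")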

  3. A benchmark study of the sea-level equation in GIA modelling

    NASA Astrophysics Data System (ADS)

    Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah

    2017-04-01

    The sea-level load in glacial isostatic adjustment (GIA) is described by the so-called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of the SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to a load with a deforming ocean bottom, migrating coastlines and a changing shape of the geoid. Several approaches to solving the SLE have been derived, from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, there has been no systematic intercomparison amongst the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases, even though the benchmark will not then result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent in the various algorithms. Literature: G. Spada, V. R. Barletta, V. Klemann, R. E. M. Riva, Z. Martinec, P. Gasperini, B. Lund, D. Wolf, L. L. A. Vermeersen, and M. A. King, 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132, doi:10.1111/j.1365-
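
    The simplest term in the hierarchy the abstract describes, the eustatic limit, can be written down directly; full SLE solvers add self-gravitation, ocean-bottom deformation and coastline migration on top of this. The ice-volume change below is an invented number.

        # Zeroth-order ("eustatic") term of the sea-level equation: a uniform
        # sea-level change from an ice-volume change spread over the ocean area.
        rho_ice, rho_water = 917.0, 1000.0     # kg m^-3
        A_ocean = 3.61e14                      # m^2, present-day ocean area
        dV_ice = -1.0e15                       # m^3 of grounded ice lost (made up)

        dSL = -(rho_ice / rho_water) * dV_ice / A_ocean
        print(f"eustatic sea-level change = {dSL:+.2f} m")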

  4. Within-Group Effect-Size Benchmarks for Trauma-Focused Cognitive Behavioral Therapy with Children and Adolescents

    ERIC Educational Resources Information Center

    Rubin, Allen; Washburn, Micki; Schieszler, Christine

    2017-01-01

    Purpose: This article provides benchmark data on within-group effect sizes from published randomized clinical trials (RCTs) supporting the efficacy of trauma-focused cognitive behavioral therapy (TF-CBT) for traumatized children. Methods: Within-group effect-size benchmarks for symptoms of trauma, anxiety, and depression were calculated via the…

  5. Alternative industrial carbon emissions benchmark based on input-output analysis

    NASA Astrophysics Data System (ADS)

    Han, Mengyao; Ji, Xi

    2016-12-01

    Current carbon emissions benchmark-setting systems have several shortcomings. Industrial carbon emissions standards are set primarily with reference to direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in current carbon emissions accounting processes. This practice is insufficient and, because emission sources are mixed, may cause double counting to some extent. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark-setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method links direct carbon emissions with inter-industrial economic exchanges and systematically quantifies the carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, among the first carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as the example in this study. The newly proposed method relates emissions directly to the responsible parties in a practical way through the measurement of complex production and supply chains, and thereby helps reduce carbon emissions at their original sources. The method is intended to remain applicable under uncertain internal and external conditions and is expected to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
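    A minimal sketch of the embodied-intensity idea described above, using the standard Leontief input-output relation: total (direct plus indirect) emission intensities are obtained as f(I - A)^-1, where A is the technical-coefficient matrix and f the vector of direct emission intensities. The 3-sector numbers are invented for illustration.

    ```python
    # Hedged sketch: embodied (direct + indirect) emission intensities via the
    # Leontief inverse. A and f are made-up numbers, not Beijing data.
    import numpy as np

    A = np.array([[0.10, 0.20, 0.05],   # inter-industry input coefficients
                  [0.15, 0.05, 0.10],
                  [0.05, 0.10, 0.08]])
    f = np.array([2.0, 0.5, 1.2])       # direct emissions per unit output (tCO2/unit)

    L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse (I - A)^-1
    embodied = f @ L                    # total intensities embodied in final demand

    print("embodied intensities:", np.round(embodied, 3))
    ```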

  6. Benchmark dose and the three Rs. Part I. Getting more information from the same number of animals.

    PubMed

    Slob, Wout

    2014-08-01

    Evaluating dose-response data using the benchmark dose (BMD) approach rather than the no-observed-adverse-effect-level (NOAEL) approach represents a considerable step forward from the perspective of the three Rs (Replacement, Reduction, and Refinement), in particular the R of reduction: more information is obtained from the same number of animals, or, vice versa, similar information may be obtained from fewer animals. The first part of this two-part paper focuses on the former aspect, the second on the latter. Regarding the former, the BMD approach extracts more information from any given dose-response dataset in various ways. First, the BMDL (the BMD lower confidence bound) provides more information through its more explicit definition. Further, compared with the NOAEL approach, the BMD approach yields a statistically more precise point of departure (PoD) for deriving exposure limits. While some of the animals in a study do not directly contribute to the numerical value of a NOAEL, all animals are effectively used and do contribute to a BMDL. In addition, the BMD approach allows similar datasets for the same chemical (e.g., both sexes) to be combined in a single analysis, which further increases precision. By combining a dose-response dataset with similar historical data for other chemicals, the precision can be increased even more substantially. Further, the BMD approach results in more precise estimates of relative potency factors (RPFs, or TEFs). And finally, the BMD approach is not only more precise, it also allows the precision of the BMD estimate itself to be quantified, which is not possible in the NOAEL approach.
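    To illustrate the last point, that the BMD approach lets the precision of the estimate itself be quantified, the sketch below fits a simple one-hit model to made-up quantal data and derives a BMDL as the 5th percentile of a parametric bootstrap. The model, the benchmark response of 10% extra risk, and the data are all assumptions; BMD software typically uses profile-likelihood limits and a suite of candidate models instead.

    ```python
    # Illustrative sketch: BMD and bootstrap BMDL under a one-hit model
    # P(d) = 1 - exp(-b*d), for which the BMD has a closed form.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    dose = np.array([0.0, 5.0, 15.0, 45.0])
    n = np.array([50, 50, 50, 50])            # animals per dose group
    x = np.array([0, 4, 12, 31])              # responders (illustrative)

    def fit_b(x_obs):
        """Maximum-likelihood slope for the one-hit model."""
        def nll(b):
            p = np.clip(1.0 - np.exp(-b * dose), 1e-9, 1 - 1e-9)
            return -np.sum(x_obs * np.log(p) + (n - x_obs) * np.log(1 - p))
        return minimize_scalar(nll, bounds=(1e-6, 1.0), method="bounded").x

    bmd_from_b = lambda b: -np.log(0.9) / b   # dose giving 10% extra risk

    b_hat = fit_b(x)
    boot = [bmd_from_b(fit_b(rng.binomial(n, 1 - np.exp(-b_hat * dose))))
            for _ in range(500)]              # parametric bootstrap
    print(f"BMD  = {bmd_from_b(b_hat):.1f}, BMDL = {np.percentile(boot, 5):.1f}")
    ```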

  7. Impact of Genomics Platform and Statistical Filtering on Transcriptional Benchmark Doses (BMD) and Multiple Approaches for Selection of Chemical Point of Departure (PoD)

    PubMed Central

    Webster, A. Francina; Chepelev, Nikolai; Gagné, Rémi; Kuo, Byron; Recio, Leslie; Williams, Andrew; Yauk, Carole L.

    2015-01-01

    Many regulatory agencies are exploring ways to integrate toxicogenomic data into their chemical risk assessments. The major challenge lies in determining how to distill the complex data produced by high-content, multi-dose gene expression studies into quantitative information. It has been proposed that benchmark dose (BMD) values derived from toxicogenomics data be used as point of departure (PoD) values in chemical risk assessments. However, there is limited information regarding which genomics platforms are most suitable and how to select appropriate PoD values. In this study, we compared BMD values modeled from RNA sequencing-, microarray-, and qPCR-derived gene expression data from a single study, and explored multiple approaches for selecting a single PoD from these data. The strategies evaluated include several that do not require prior mechanistic knowledge of the compound for selection of the PoD, thus providing approaches for assessing data-poor chemicals. We used RNA extracted from the livers of female mice exposed to non-carcinogenic (0, 2 mg/kg/day, mkd) and carcinogenic (4, 8 mkd) doses of furan for 21 days. We show that transcriptional BMD values were consistent across technologies and highly predictive of the two-year cancer bioassay-based PoD. We also demonstrate that filtering data based on statistically significant changes in gene expression prior to BMD modeling creates more conservative BMD values. Taken together, this case study on mice exposed to furan demonstrates that high-content toxicogenomics studies produce robust data for BMD modeling that are minimally affected by inter-technology variability and highly predictive of cancer-based PoD doses. PMID:26313361
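    A minimal sketch of the statistical-filtering step discussed above, under assumed data: genes are screened with a one-way ANOVA across dose groups and a Benjamini-Hochberg FDR correction, and only the significant ones would be passed on to BMD modeling. The simulated matrix, group sizes, and the specific test are illustrative choices, not the study's pipeline.

    ```python
    # Hypothetical pre-filter before per-gene BMD modeling: keep genes with a
    # statistically significant dose effect (ANOVA + BH-FDR). Data simulated.
    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    doses = np.repeat([0, 2, 4, 8], 5)             # mkd, 5 mice per group
    expr = rng.normal(size=(1000, 20))             # 1000 genes x 20 samples
    expr[:50] += 0.4 * doses                       # 50 genes respond to dose

    pvals = np.array([f_oneway(*(g[doses == d] for d in (0, 2, 4, 8))).pvalue
                      for g in expr])
    keep = multipletests(pvals, alpha=0.05, method="fdr_bh")[0]
    print(f"{keep.sum()} of {len(expr)} genes pass the filter for BMD modeling")
    ```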

  8. Solution of the neutronics code dynamic benchmark by finite element method

    NASA Astrophysics Data System (ADS)

    Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.

    2016-10-01

    The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes the asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.
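    The "matrix spectral problem" mentioned above is, in this setting, a generalized eigenproblem of the form A*phi = (1/k)*B*phi for the discretized loss and production operators. As a hedged stand-in for SLEPc, the sketch below solves a tiny made-up instance with plain inverse power iteration.

    ```python
    # Toy illustration of the spectral problem: power iteration on A^-1 B to
    # find the dominant eigenpair. Matrices are small stand-ins, not a real
    # finite-element discretization.
    import numpy as np

    A = np.array([[ 2.0, -1.0,  0.0],   # stand-in for the discretized loss operator
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
    B = np.diag([0.7, 0.9, 0.8])        # stand-in for the production operator

    phi = np.ones(3)
    k = 1.0
    for _ in range(100):                # power iteration on A^-1 B
        psi = np.linalg.solve(A, B @ phi)
        k = np.linalg.norm(psi) / np.linalg.norm(phi)
        phi = psi / np.linalg.norm(psi)

    print(f"dominant eigenvalue k ~ {k:.4f}")
    ```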

  9. SU-F-BRD-07: Fast Monte Carlo-Based Biological Optimization of Proton Therapy Treatment Plans for Thyroid Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan Chan Tseung, H; Ma, J; Ma, D

    2015-06-15

    Purpose: To demonstrate the feasibility of fast Monte Carlo (MC) based biological planning for the treatment of thyroid tumors in spot-scanning proton therapy. Methods: Recently, we developed a fast and accurate GPU-based MC simulation of proton transport that was benchmarked against Geant4.9.6 and used as the dose calculation engine in a clinically-applicable GPU-accelerated IMPT optimizer. Besides dose, it can simultaneously score the dose-averaged LET (LETd), which makes fast biological dose (BD) estimates possible. To convert from LETd to BD, we used a linear relation based on cellular irradiation data. Given a thyroid patient with a 93cc tumor volume, we created a 2-field IMPT plan in Eclipse (Varian Medical Systems). This plan was re-calculated with our MC to obtain the BD distribution. A second 5-field plan was made with our in-house optimizer, using pre-generated MC dose and LETd maps. Constraints were placed to maintain the target dose to within 25% of the prescription, while maximizing the BD. The plan optimization and calculation of dose and LETd maps were performed on a GPU cluster. The conventional IMPT and biologically-optimized plans were compared. Results: The mean target physical and biological doses from our biologically-optimized plan were, respectively, 5% and 14% higher than those from the MC re-calculation of the IMPT plan. Dose sparing to critical structures in our plan was also improved. The biological optimization, including the initial dose and LETd map calculations, can be completed in a clinically viable time (∼30 minutes) on a cluster of 25 GPUs. Conclusion: Taking advantage of GPU acceleration, we created a MC-based, biologically optimized treatment plan for a thyroid patient. Compared to a standard IMPT plan, a 5% increase in the target's physical dose resulted in ∼3 times as much increase in the BD. Biological planning was thus effective in escalating the target BD.
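    A minimal sketch of the LETd-to-biological-dose conversion described above, assuming a linear relation BD = D*(c0 + c1*LETd). The coefficients and voxel arrays are placeholders, not the fitted values from the cellular irradiation data.

    ```python
    # Hypothetical voxel-wise biological-dose estimate from physical dose and
    # dose-averaged LET, assuming a linear LETd dependence.
    import numpy as np

    C0, C1 = 0.99, 0.04                 # assumed coefficients (C1 in um/keV)

    def biological_dose(dose_gy: np.ndarray, letd_kev_um: np.ndarray) -> np.ndarray:
        """Scale physical dose voxel-wise by a linear LETd-dependent factor."""
        return dose_gy * (C0 + C1 * letd_kev_um)

    dose = np.array([2.0, 2.0, 1.8])    # physical dose per voxel (Gy)
    letd = np.array([2.5, 6.0, 9.0])    # dose-averaged LET (keV/um)
    print(biological_dose(dose, letd))  # -> biological dose per voxel
    ```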

  10. Sulfur activation at the Little Boy-Comet Critical Assembly: a replica of the Hiroshima bomb

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, G.D.; Emery, J.F.; Pace, J.V. III

    1985-04-01

    Studies have been completed on the activation of sulfur by fast neutrons from the Little Boy-Comet Critical Assembly which replicates the general features of the Hiroshima bomb. The complex effects of the bomb's design and construction on leakage of sulfur-activation neutrons were investigated both experimentally and theoretically. Our sulfur activation studies were performed as part of a larger program to provide benchmark data for testing of methods used in recent source-term calculations for the Hiroshima bomb. Source neutrons capable of activating sulfur play an important role in determining neutron doses in Hiroshima at a kilometer or more from the point of explosion. 37 refs., 5 figs., 6 tabs.

  11. History of dose specification in Brachytherapy: From Threshold Erythema Dose to Computational Dosimetry

    NASA Astrophysics Data System (ADS)

    Williamson, Jeffrey F.

    2006-09-01

    This paper briefly reviews the evolution of brachytherapy dosimetry from 1900 to the present. Dosimetric practices in brachytherapy fall into three distinct eras: During the era of biological dosimetry (1900-1938), radium pioneers could only specify Ra-226 and Rn-222 implants in terms of the mass of radium encapsulated within the implanted sources. Due to the high energy of its emitted gamma rays and the long range of its secondary electrons in air, free-air chambers could not be used to quantify the output of Ra-226 sources in terms of exposure. Biological dosimetry, most prominently the threshold erythema dose, gained currency as a means of intercomparing radium treatments with exposure-calibrated orthovoltage x-ray units. The classical dosimetry era (1940-1980) began with the successful exposure standardization of Ra-226 sources by Bragg-Gray cavity chambers. Classical dose-computation algorithms, based upon 1-D buildup factor measurements and point-source superposition computational algorithms, were able to accommodate artificial radionuclides such as Co-60, Ir-192, and Cs-137. The quantitative dosimetry era (1980-present) arose in response to the increasing utilization of low-energy K-capture radionuclides such as I-125 and Pd-103, for which classical approaches could not be expected to estimate accurate doses. This led to intensive development of both experimental (largely TLD-100 dosimetry) and Monte Carlo dosimetry techniques, along with more accurate air-kerma strength standards. As a result of extensive benchmarking and intercomparison of these different methods, single-seed low-energy radionuclide dose distributions are now known with a total uncertainty of 3%-5%.
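    As a rough illustration of the classical-era algorithm described above, the sketch below superposes point sources, each attenuated exponentially and corrected by a 1-D buildup factor B(mu*r). The gamma constant, attenuation coefficient, and the toy buildup function are placeholders; real classical codes used measured buildup tables.

    ```python
    # Toy classical point-source superposition dose calculation. All constants
    # are placeholders for illustration, not clinical values.
    import numpy as np

    GAMMA = 4.69e-4   # assumed exposure-rate constant (arbitrary units)
    MU = 0.07         # assumed effective attenuation coefficient (1/cm)

    def buildup(mu_r):
        """Toy linear buildup factor; real codes used measured 1-D tables."""
        return 1.0 + 0.8 * mu_r

    def dose_at_point(point, seed_positions, seed_strengths):
        d = 0.0
        for pos, s in zip(seed_positions, seed_strengths):
            r = np.linalg.norm(np.asarray(point) - np.asarray(pos))  # cm
            d += s * GAMMA / r**2 * np.exp(-MU * r) * buildup(MU * r)
        return d

    seeds = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
    print(dose_at_point((0.5, 0.5, 0.0), seeds, [1.0, 1.0]))
    ```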

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of the effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, red-tailed hawk, and osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water).] The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only. Exposure through inhalation and/or direct dermal contact is not considered in this report.
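    A minimal sketch of the two-tier screening logic described above, with hypothetical benchmark values: a measured concentration below the NOAEL-based benchmark screens out, anything above it is retained as a COPC, and exceeding the LOAEL-based benchmark signals that adverse effects are likely.

    ```python
    # Hypothetical first-tier screen against NOAEL/LOAEL-based benchmarks.
    # Benchmark values and measurements are illustrative, not from the report.
    benchmarks = {  # chemical -> (NOAEL-based, LOAEL-based), mg/kg food
        "cadmium": (1.0, 10.0),
        "zinc":    (160.0, 320.0),
    }

    def screen(chemical: str, measured: float) -> str:
        noael_bm, loael_bm = benchmarks[chemical]
        if measured <= noael_bm:
            return "screen out (below NOAEL benchmark)"
        level = "adverse effects likely" if measured > loael_bm else "uncertain risk"
        return f"retain as COPC ({level})"

    print("cadmium:", screen("cadmium", 4.2))
    print("zinc:   ", screen("zinc", 40.0))
    ```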

  13. SU-E-J-08: Comparison of Unintended Radiation Doses to Organs at Risk Resulting From the Out-Of-Field Therapeutic Beams and From Image-Guidance X-Ray Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, G; Wang, L

    Purpose: The unintended radiation dose to organs at risk (OAR) can be contributed by imaging guidance procedures as well as by leakage and scatter of therapeutic beams. This study compares the imaging dose with the unintended out-of-field therapeutic dose to patients' sensitive organs. Methods: The Monte Carlo EGSnrc user codes, BEAMnrc and DOSXYZnrc, were used to simulate kV X-ray sources from imaging devices as well as the therapeutic IMRT/VMAT beams, and to calculate doses to the target and OARs on patient treatment planning CT images. The accuracy of the Monte Carlo simulations was benchmarked against measurements in phantoms. The dose-volume histogram was utilized in analyzing the patient organ doses. Results: The dose resulting from Standard Head kV-CBCT scans to bone and soft tissues ranges from 0.7 to 1.1 cGy and from 0.03 to 0.3 cGy, respectively. The dose resulting from Thorax scans on the chest to bone and soft tissues ranges from 1.1 to 1.8 cGy and from 0.3 to 0.6 cGy, respectively. The dose resulting from Pelvis scans on the abdomen to bone and soft tissues ranges from 3.2 to 4.2 cGy and from 1.2 to 2.2 cGy, respectively. The out-of-field doses to OAR are sensitive to the distance between the treated target and the OAR. For a typical Head-and-Neck IMRT/VMAT treatment the out-of-field doses to the eyes are 1–3% of the target dose, or 2–6 cGy per fraction. Conclusion: The imaging doses to OAR are predictable based on the imaging protocols used when the OARs are within the imaged volume, and can be estimated and accounted for by using tabulated values. The unintended out-of-field doses are proportional to the target dose, depend strongly on the distance between the treated target and the OAR, and are generally higher compared with the imaging dose. This work was partially supported by Varian research grant VUMC40590.

  14. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (< 0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: The error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. The algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking this method can be developed into a clinical tool that may help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.
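    As a hedged illustration of one of these predictors, the sketch below computes a simple low-density index over a toy density grid, taking it as the fraction of in-field voxels below the 0.2 g/cm³ threshold quoted above. The grid, field mask, exact index definition, and action threshold are assumptions, not the authors' MATLAB implementation.

    ```python
    # Toy low-density index (LDI): fraction of in-field voxels whose density
    # falls below 0.2 g/cm^3, used here to flag plans for MC recalculation.
    import numpy as np

    rng = np.random.default_rng(0)
    density = rng.uniform(0.05, 1.2, size=(40, 40, 40))  # toy grid, g/cm^3
    in_field = np.zeros_like(density, dtype=bool)
    in_field[10:30, 10:30, :] = True                     # toy beam's-eye-view mask

    ldi = np.mean(density[in_field] < 0.2)
    print(f"LDI = {ldi:.2%} of in-field voxels below 0.2 g/cm^3")
    if ldi > 0.05:                                       # assumed action threshold
        print("flag plan for Monte Carlo recalculation")
    ```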

  15. Bayesian Dose-Response Modeling in Sparse Data

    NASA Astrophysics Data System (ADS)

    Kim, Steven B.

    This book discusses Bayesian dose-response modeling in small samples, applied to two different settings. The first setting is early phase clinical trials, and the second setting is toxicology studies in cancer risk assessment. In early phase clinical trials, the experimental units are humans who are actual patients. Prior to a clinical trial, opinions from multiple subject area experts are generally more informative than the opinion of a single expert, but we may face a dilemma when they hold disagreeing prior opinions. In this regard, we consider ways to reconcile the disagreement and compare two different approaches for making a decision. In addition to combining multiple opinions, we also address balancing two levels of ethics in early phase clinical trials. The first level is individual-level ethics, which reflects the perspective of trial participants. The second level is population-level ethics, which reflects the perspective of future patients. We extensively compare two existing statistical methods, each of which focuses on one perspective, and propose a new method which balances the two conflicting perspectives. In toxicology studies, the experimental units are living animals. Here we focus on a potential non-monotonic dose-response relationship known as hormesis. Briefly, hormesis is a phenomenon characterized by a beneficial effect at low doses and a harmful effect at high doses. In cancer risk assessments, the estimation of a parameter known as the benchmark dose can be highly sensitive to the class of assumptions adopted, monotonicity or hormesis. In this regard, we propose a robust approach which considers both monotonicity and hormesis as possibilities. In addition, we discuss statistical hypothesis testing for hormesis and consider various experimental designs for detecting hormesis based on Bayesian decision theory. Past experiments have not been optimally designed for testing for hormesis, and some Bayesian optimal designs may not be optimal under a wrong parametric assumption. In this regard, we consider a robust experimental design which does not require any parametric assumption.

  16. Developing chemical criteria for wildlife: The benchmark dose versus NOAEL approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linder, G.

    1995-12-31

    Wildlife may be exposed to a wide variety of chemicals in their environment, and various strategies for evaluating wildlife risk for these chemicals have been developed. One, a "no-observable-adverse-effects-level" or NOAEL approach, has increasingly been applied to develop chemical criteria for wildlife. In this approach, the NOAEL represents the highest experimental concentration at which there is no statistically significant change in some toxicity endpoint relative to a control. Another, the "benchmark dose" or BMD approach, relies on the lower confidence limit for a concentration that corresponds to a small, but statistically significant, change in effect over some reference condition. Rather than corresponding to a single experimental concentration as the NOAEL does, the BMD approach considers the full concentration-response curve for derivation of the BMD. Here, using a variety of vertebrates and an assortment of chemicals (including carbofuran, paraquat, methylmercury, cadmium, zinc, and copper), the NOAEL approach will be critically evaluated relative to the BMD approach. Statistical models used in the BMD approach suggest that these methods could potentially eliminate the need for some safety factors in risk calculations. A reluctance to recommend this, however, stems from the uncertainty associated with the shape of concentration-response curves at low concentrations. Also, with existing data the derivation of BMDs has shortcomings when the sample size is small (10 or fewer animals per treatment). The success of BMD models clearly depends upon the continued collection of wildlife data in the field and laboratory, the design of toxicity studies sufficient for BMD calculations, and complete reporting of these results in the literature. Overall, the BMD approach for developing chemical criteria for wildlife should be given further consideration, since it more fully evaluates concentration-response data.

  17. NetBenchmark: a bioconductor package for reproducible benchmarks of gene regulatory network inference.

    PubMed

    Bellot, Pau; Olsen, Catharina; Salembier, Philippe; Oliveras-Vergés, Albert; Meyer, Patrick E

    2015-09-29

    In the last decade, a great number of methods for reconstructing gene regulatory networks from expression data have been proposed. However, very few tools and datasets allow those methods to be evaluated accurately and reproducibly. Hence, we propose here a new tool, able to perform a systematic, yet fully reproducible, evaluation of transcriptional network inference methods. Our open-source and freely available Bioconductor package aggregates a large set of tools to assess the robustness of network inference algorithms against different simulators, topologies, sample sizes and noise intensities. The benchmarking framework, which uses various datasets, highlights the specialization of some methods for particular network types and data. As a result, it is possible to identify the techniques that have broad overall performance.

  18. Anatomical robust optimization to account for nasal cavity filling variation during intensity-modulated proton therapy: a comparison with conventional and adaptive planning strategies

    NASA Astrophysics Data System (ADS)

    van de Water, Steven; Albertini, Francesca; Weber, Damien C.; Heijmen, Ben J. M.; Hoogeman, Mischa S.; Lomax, Antony J.

    2018-01-01

    The aim of this study is to develop an anatomical robust optimization method for intensity-modulated proton therapy (IMPT) that accounts for interfraction variations in nasal cavity filling, and to compare it with conventional single-field uniform dose (SFUD) optimization and online plan adaptation. We included CT data of five patients with tumors in the sinonasal region. Using the planning CT, we generated for each patient 25 ‘synthetic’ CTs with varying nasal cavity filling. The robust optimization method available in our treatment planning system ‘Erasmus-iCycle’ was extended to also account for anatomical uncertainties by including (synthetic) CTs with varying patient anatomy as error scenarios in the inverse optimization. For each patient, we generated treatment plans using anatomical robust optimization and, for benchmarking, using SFUD optimization and online plan adaptation. Clinical target volume (CTV) and organ-at-risk (OAR) doses were assessed by recalculating the treatment plans on the synthetic CTs, evaluating dose distributions individually and accumulated over an entire fractionated 50 GyRBE treatment, assuming each synthetic CT to correspond to a 2 GyRBE fraction. Treatment plans were also evaluated using actual repeat CTs. Anatomical robust optimization resulted in adequate CTV doses (V95%  ⩾  98% and V107%  ⩽  2%) if at least three synthetic CTs were included in addition to the planning CT. These CTV requirements were also fulfilled for online plan adaptation, but not for the SFUD approach, even when applying a margin of 5 mm. Compared with anatomical robust optimization, OAR dose parameters for the accumulated dose distributions were on average 5.9 GyRBE (20%) higher when using SFUD optimization and on average 3.6 GyRBE (18%) lower for online plan adaptation. In conclusion, anatomical robust optimization effectively accounted for changes in nasal cavity filling during IMPT, providing substantially improved CTV and OAR doses compared with conventional SFUD optimization. OAR doses can be further reduced by using online plan adaptation.
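    The core idea above, treating synthetic CTs as error scenarios inside the optimization, can be sketched as a worst-case (minimax) objective over per-scenario dose-influence matrices. The toy matrices, prescription, and the crude random-search loop below are stand-ins for Erasmus-iCycle's actual optimizer.

    ```python
    # Toy anatomical robust optimization: minimize the worst-case target-dose
    # deviation over dose-influence matrices from several synthetic CTs.
    import numpy as np

    rng = np.random.default_rng(2)
    n_spots, n_vox = 50, 200
    scenarios = [rng.uniform(0, 0.1, (n_vox, n_spots)) for _ in range(4)]  # Dij per CT
    presc = 2.0                                          # GyRBE per fraction

    def worst_case_objective(weights):
        """Largest mean-squared target-dose deviation over all scenarios."""
        return max(np.mean((D @ weights - presc) ** 2) for D in scenarios)

    # crude random-search improvement loop (illustration only)
    w = np.full(n_spots, 0.5)
    for _ in range(200):
        trial = np.clip(w + rng.normal(0, 0.02, n_spots), 0, None)
        if worst_case_objective(trial) < worst_case_objective(w):
            w = trial
    print(f"worst-case objective after search: {worst_case_objective(w):.4f}")
    ```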

  19. Poster – 13: Evaluation of an in-house CCD camera film dosimetry imaging system for small field deliveries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, Michel; Alexander, Kevin; Olding, Tim

    Purpose: Radiochromic film dosimetry is a standard technique used in clinics to verify modern conformal radiation therapy delivery, and sometimes in research to validate other dosimeters. We are using film as a standard for comparison as we improve high-resolution three-dimensional gel systems for small field dosimetry; however, precise film dosimetry can be technically challenging. We report here measurements for fractionated stereotactic radiation therapy (FSRT) delivered using volumetric modulated arc therapy (VMAT) to investigate the accuracy and reproducibility of film measurements with a novel in-house readout system. We show that radiochromic film can accurately and reproducibly validate FSRT deliveries and also benchmark our gel dosimetry work. Methods: VMAT FSRT plans for metastases alone (PTV_MET) and whole brain plus metastases (WB+PTV_MET) were delivered onto a multi-configurational phantom with a sheet of EBT3 Gafchromic film inserted mid-plane. A dose of 400 cGy was prescribed to 4 small PTV_MET structures in the phantom, while a WB structure was prescribed a dose of 200 cGy in the WB+PTV_MET iterations. Doses generated from film readout with our in-house system were compared to treatment planned doses. Each delivery was repeated multiple times to assess reproducibility. Results and Conclusions: The reproducibility of film optical density readout was excellent throughout all experiments. Doses measured from the film agreed well with plans for the WB+PTV_MET delivery. However, film doses for PTV_MET-only deliveries were significantly below planned doses. This discrepancy is due to stray/scattered light perturbations in our system during readout. Correction schemes will be presented.

  20. Cloud-Based CT Dose Monitoring using the DICOM-Structured Report: Fully Automated Analysis in Regard to National Diagnostic Reference Levels.

    PubMed

    Boos, J; Meineke, A; Rubbert, C; Heusch, P; Lanzman, R S; Aissa, J; Antoch, G; Kröpil, P

    2016-03-01

    To implement automated CT dose monitoring using the DICOM-Structured Report (DICOM-SR) and to benchmark dose-related CT data against national diagnostic reference levels (DRLs). We used a novel in-house co-developed software tool based on the DICOM-SR to automatically monitor dose-related data from CT examinations. The DICOM-SR for each CT examination performed between 09/2011 and 03/2015 was automatically anonymized and sent from the CT scanners to a cloud server. Data were automatically analyzed according to body region, patient age and the corresponding DRL for the volumetric computed tomography dose index (CTDIvol) and dose length product (DLP). Data from 36,523 examinations (131,527 scan series) performed on three different CT scanners and one PET/CT were analyzed. The overall mean CTDIvol and DLP were 51.3% and 52.8% of the national DRLs, respectively. CTDIvol and DLP reached 43.8% and 43.1% for abdominal CT (n=10,590), 66.6% and 69.6% for cranial CT (n=16,098) and 37.8% and 44.0% for chest CT (n=10,387) of the corresponding national DRLs, respectively. Overall, the CTDIvol exceeded national DRLs in 1.9% of the examinations, while the DLP exceeded national DRLs in 2.9% of the examinations. Between different CT protocols of the same body region, radiation exposure varied by up to 50% of the DRLs. The implemented cloud-based CT dose monitoring based on the DICOM-SR enables automated benchmarking against national DRLs. Overall, the local dose exposure from CT reached approximately 50% of these DRLs, indicating that updated as well as protocol-specific DRLs are desirable. The cloud-based approach enables multi-center dose monitoring and offers great potential to further optimize radiation exposure in radiological departments. • The newly developed software based on the DICOM-Structured Report enables large-scale cloud-based CT dose monitoring • The implemented software solution enables automated benchmarking against national DRLs • The local radiation exposure from CT reached approximately 50% of the national DRLs • The cloud-based approach offers great potential for multi-center dose analysis. © Georg Thieme Verlag KG Stuttgart · New York.
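    A minimal sketch of the benchmarking step described above, once CTDIvol and DLP have been extracted from the DICOM-SR: each examination is expressed as a percentage of the DRL for its body region and flagged if it exceeds the reference. The DRL table and exam records are illustrative placeholders, not the national values.

    ```python
    # Hypothetical DRL comparison for extracted CT dose records.
    drl = {  # body region -> (CTDIvol mGy, DLP mGy*cm); assumed values
        "cranial": (60.0, 950.0),
        "chest":   (10.0, 400.0),
        "abdomen": (20.0, 900.0),
    }

    exams = [
        {"region": "chest",   "ctdi_vol": 4.1,  "dlp": 170.0},
        {"region": "cranial", "ctdi_vol": 66.0, "dlp": 980.0},
    ]

    for e in exams:
        ctdi_ref, dlp_ref = drl[e["region"]]
        flag = "  <-- exceeds DRL" if e["ctdi_vol"] > ctdi_ref or e["dlp"] > dlp_ref else ""
        print(f'{e["region"]:8s}: CTDIvol {e["ctdi_vol"] / ctdi_ref:6.1%} of DRL, '
              f'DLP {e["dlp"] / dlp_ref:6.1%} of DRL{flag}')
    ```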

  1. Clear, Complete, and Justified Problem Formulations for Aquatic Life Benchmark Values: Specifying the Dimensions

    EPA Science Inventory

    Nations that develop water quality benchmark values have relied primarily on standard data and methods. However, experience with chemicals such as Se, ammonia, and tributyltin has shown that standard methods do not adequately address some taxa, modes of exposure and effects. Deve...

  2. CLEAR, COMPLETE, AND JUSTIFIED PROBLEM FORMULATIONS FOR AQUATIC LIFE BENCHMARK VALUES: SPECIFYING THE DIMENSIONS

    EPA Science Inventory

    Nations that develop water quality benchmark values have relied primarily on standard data and methods. However, experience with chemicals such as Se, ammonia, and tributyltin has shown that standard methods do not adequately address some taxa, modes of exposure and effects. Deve...

  3. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity) and from that relationship derives a freshwater aquatic life benchmark. This benchmark of 300 µS/cm may be applied to waters in Appalachian streams that are dominated by calcium and magnesium salts of sulfate and bicarbonate at circum-neutral to mildly alkaline pH. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.

  4. Toxicological benchmarks for potential contaminants of concern for effects on soil and litter invertebrates and heterotrophic process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.; Suter, G.W. II

    1995-09-01

    An important step in ecological risk assessments is screening the chemicals occurring on a site for contaminants of potential concern. Screening may be accomplished by comparing reported ambient concentrations to a set of toxicological benchmarks. Multiple endpoints for assessing risks posed by soil-borne contaminants to organisms directly impacted by them have been established. This report presents benchmarks for soil invertebrates and microbial processes and addresses only chemicals found at United States Department of Energy (DOE) sites. No benchmarks for pesticides are presented. After discussing methods, this report presents the results of the literature review and benchmark derivation for toxicity to earthworms (Sect. 3), heterotrophic microbes and their processes (Sect. 4), and other invertebrates (Sect. 5). The final sections compare the benchmarks to other criteria and background and draw conclusions concerning the utility of the benchmarks.

  5. Numerical benchmarking of a Coarse-Mesh Transport (COMET) Method for medical physics applications

    NASA Astrophysics Data System (ADS)

    Blackburn, Megan Satterfield

    2009-12-01

    Radiation therapy has become a very important method for treating cancer patients. Thus, it is extremely important to accurately determine the location of energy deposition during these treatments, maximizing dose to the tumor region and minimizing it to healthy tissue. A Coarse-Mesh Transport Method (COMET) has been developed at the Georgia Institute of Technology in the Computational Reactor and Medical Physics Group and has been used very successfully with neutron transport to analyze whole-core criticality. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed-source problems. For each unique local problem that exists, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local solutions. This method has now been extended to the transport of photons and electrons for use in medical physics problems, to determine energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for testing in order to evaluate the COMET code and determine its strengths and weaknesses for these medical physics applications. For the response function calculations, Legendre polynomial expansions are necessary in space, energy, and the polar and azimuthal angles. An initial sensitivity study was done to determine the best orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Each of these benchmarks was decomposed into 1 cm x 1 cm and 0.5 cm x 0.5 cm coarse meshes. Three more clinically relevant problems were developed from patient CT scans. These benchmarks modeled a lung patient, a prostate patient, and a beam re-entry situation. As before, the problems were divided into 1 cm x 1 cm, 0.5 cm x 0.5 cm, and 0.25 cm x 0.25 cm coarse mesh cases. Multiple beam energies were also tested for each case. The COMET solutions for each case were compared to reference solutions obtained from pure Monte Carlo calculations with EGSnrc. When comparing the COMET results to the reference cases, a pattern of differences appeared in each phantom case. It was found that better results were obtained for lower energy incident photon beams as well as for larger mesh sizes. Changes may need to be made to the expansion orders used for energy and angle to better model high-energy secondary electrons. Heterogeneity did not pose a problem for the COMET methodology: heterogeneous results were obtained in a time comparable to that for the homogeneous water phantom. The COMET results were typically obtained in minutes to hours of computational time, whereas the reference cases typically required hundreds or thousands of hours. A second sensitivity study was also performed on a more stringent problem and with smaller coarse meshes. Previously, the same expansion order had been used for each incident photon beam energy so that better comparisons could be made. From this second study, it was found that it is optimal to use different expansion orders based on the incident beam energy. Recommendations for future work with this method include more testing of higher expansion orders and possible code modification to better handle secondary electrons. The method also needs to handle more clinically relevant beam descriptions, with an associated energy and angular distribution.

  6. MoleculeNet: a benchmark for molecular machine learning

    PubMed Central

    Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N.; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S.; Leswing, Karl

    2017-01-01

    Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark with which to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets, making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large-scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high-quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than the choice of particular learning algorithm. PMID:29629118
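    A hedged sketch of how a MoleculeNet dataset is consumed through DeepChem's molnet loaders; the loader name and returned structure follow DeepChem's documented API, but featurizer options and return shapes should be verified against the installed version, and the random-forest baseline is an arbitrary choice.

    ```python
    # Sketch: load a MoleculeNet dataset via DeepChem and fit a simple
    # baseline on the first task. Details are assumptions to verify.
    import deepchem as dc
    from sklearn.ensemble import RandomForestClassifier

    tasks, (train, valid, test), _ = dc.molnet.load_tox21(featurizer="ECFP")

    # Ignore examples with missing labels for the first task (weight == 0).
    mask = train.w[:, 0] != 0
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(train.X[mask], train.y[mask, 0])

    test_mask = test.w[:, 0] != 0
    score = clf.score(test.X[test_mask], test.y[test_mask, 0])
    print(f"{tasks[0]} baseline accuracy: {score:.3f}")
    ```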

  7. SU-C-BRC-01: A Monte Carlo Study of Out-Of-Field Doses From Cobalt-60 Teletherapy Units Intended for Historical Correlations of Dose to Normal Tissue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petroccia, H; Olguin, E; Culberson, W

    2016-06-15

    Purpose: Innovations in radiotherapy treatments, such as dynamic IMRT, VMAT, and SBRT/SRS, result in larger proportions of low-dose regions where normal tissues are exposed to low dose levels. Low doses of radiation have been linked to secondary cancers and cardiac toxicities. The AAPM Task Group No. 158, entitled 'Measurements and Calculations of Doses outside the Treatment Volume from External-Beam Radiation Therapy', has been formed to review the dosimetry of non-target and out-of-field exposures using experimental and computational approaches. Studies on historical patients, when combined with long-term patient follow-up, can provide comprehensive information about secondary effects from out-of-field doses, thus providing significant insight into projecting future outcomes of patients undergoing modern-day treatments. Methods: We present a Monte Carlo model of a Theratron-1000 cobalt-60 teletherapy unit, which historically treated patients at the University of Florida, as a means of determining doses located outside the primary beam. Experimental data for a similar Theratron-1000 were obtained at the University of Wisconsin's ADCL to benchmark the model for out-of-field dosimetry. An Exradin A12 ion chamber and TLD100 chips were used to measure doses in an extended water phantom to 60 cm outside the primary field at 5 and 10 cm depths. Results: Comparison between simulated and experimental measurements of PDDs and lateral profiles shows good agreement for in-field and out-of-field doses. At 10 cm away from the edge of a 6×6, 10×10, and 20×20 cm² field, relative out-of-field doses were measured in the range of 0.5% to 3% of the dose measured at 5 cm depth along the CAX. Conclusion: Out-of-field doses can be as high as 90 to 180 cGy assuming historical prescription doses of 30 to 60 Gy, and should be considered when correlating late effects with normal tissue dose.

  8. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Scott E., E-mail: sedavids@utmb.edu

    Purpose: A previously reported dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary point-source assumption. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. Conclusions: A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a single methodology for independently assessing treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.

  9. Recommended approaches in the application of ...

    EPA Pesticide Factsheets

    ABSTRACT: Only a fraction of chemicals in commerce have been fully assessed for their potential hazards to human health, due to the difficulties involved in conventional regulatory tests. It has recently been proposed that quantitative transcriptomic data can be used to determine a benchmark dose (BMD) and estimate a point of departure (POD). Several studies have shown that transcriptional PODs correlate with PODs derived from analysis of pathological changes, but there is no consensus on how the genes used to derive a transcriptional POD should be selected. Because of the very large number of unrelated genes in gene expression data, the process of selecting subsets of informative genes is a major challenge. We used published microarray data from studies on rats exposed orally to multiple doses of six chemicals for 5, 14, 28, and 90 days. We evaluated eight different approaches to selecting genes for POD derivation and compared them to three previously proposed approaches. The transcriptional BMDs derived using these 11 approaches were compared with PODs derived from apical data that might be used in a human health risk assessment. We found that transcriptional benchmark dose values for all 11 approaches were remarkably aligned with different apical PODs, while a subset of between 3 and 8 of the approaches met standard statistical criteria across the 5-, 14-, 28-, and 90-day time points and thus qualify as effective estimates of apical PODs. Our r

  10. Journal Benchmarking for Strategic Publication Management and for Improving Journal Positioning in the World Ranking Systems

    ERIC Educational Resources Information Center

    Moskovkin, Vladimir M.; Bocharova, Emilia A.; Balashova, Oksana V.

    2014-01-01

    Purpose: The purpose of this paper is to introduce and develop the methodology of journal benchmarking. Design/Methodology/Approach: The journal benchmarking method is understood to be an analytic procedure of continuous monitoring and comparing of the advance of specific journal(s) against that of competing journals in the same subject area,…

  11. Developing a benchmark for emotional analysis of music

    PubMed Central

    Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER. PMID:28282400

  12. Methodology and issues of integral experiments selection for nuclear data validation

    NASA Astrophysics Data System (ADS)

    Ivanova, Tatiana; Ivanov, Evgeny; Hill, Ian

    2017-09-01

    Nuclear data validation involves a large suite of integral experiments (IEs) for criticality, reactor physics and dosimetry applications [1]. Often benchmarks are taken from international handbooks [2, 3]. Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretations and results [1]. This work aims at quantifying the importance of benchmarks used in application-dependent cross-section validation. The approach is based on the well-known Generalized Linear Least Squares Method (GLLSM), extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark to nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross-section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files), established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).

  13. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    PubMed Central

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identify potential hits, i.e., compounds capable of interacting with a given target and potentially modulating its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is best adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compound subsets is critical to limiting the biases in the evaluation of the VS methods. In this review, we focus on the selection of decoy compounds, which has changed considerably over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoy selection in benchmarking databases, as well as current benchmarking databases that tend to minimize the introduction of biases, and secondly, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509

  14. Developing a benchmark for emotional analysis of music.

    PubMed

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.

  15. A large-scale benchmark of gene prioritization methods.

    PubMed

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand and helping to guide the development of more accurate methods.

  16. Methodology and Data Sources for Assessing Extreme Charging Events within the Earth's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Parker, L. N.; Minow, J. I.; Talaat, E. R.

    2016-12-01

    Spacecraft surface and internal charging is a potential threat to space technologies because electrostatic discharges on, or within, charged spacecraft materials can result in a number of adverse impacts to spacecraft systems. The Space Weather Action Plan (SWAP) ionizing radiation benchmark team recognized that spacecraft charging will need to be considered to complete the ionizing radiation benchmarks in order to evaluate the threat of charging to critical space infrastructure operating within the near-Earth ionizing radiation environments. However, the team chose to defer work on the lower energy charging environments and focus the initial benchmark efforts on the higher energy galactic cosmic ray, solar energetic particle, and trapped radiation belt particle environments of concern for radiation dose and single event effects in humans and hardware. Therefore, an initial set of 1 in 100 year spacecraft charging environment benchmarks remains to be defined to meet the SWAP goals. This presentation will discuss the available data sources and a methodology to assess the 1 in 100 year extreme space weather events that drive surface and internal charging threats to spacecraft. Environments to be considered are the hot plasmas in the outer magnetosphere during geomagnetic storms, relativistic electrons in the outer radiation belt, and energetic auroral electrons in low Earth orbit at high latitudes.

  17. DPM, a fast, accurate Monte Carlo code optimized for photon and electron radiotherapy treatment planning dose calculations

    NASA Astrophysics Data System (ADS)

    Sempau, Josep; Wilderman, Scott J.; Bielajew, Alex F.

    2000-08-01

    A new Monte Carlo (MC) algorithm, the `dose planning method' (DPM), and its associated computer program for simulating the transport of electrons and photons in radiotherapy class problems employing primary electron beams, is presented. DPM is intended to be a high-accuracy MC alternative to the current generation of treatment planning codes which rely on analytical algorithms based on an approximate solution of the photon/electron Boltzmann transport equation. For primary electron beams, DPM is capable of computing 3D dose distributions (in 1 mm³ voxels) which agree to within 1% in dose maximum with widely used and exhaustively benchmarked general-purpose public-domain MC codes in only a fraction of the CPU time. A representative problem, the simulation of 1 million 10 MeV electrons impinging upon a water phantom of 128³ voxels of 1 mm on a side, can be performed by DPM in roughly 3 min on a modern desktop workstation. DPM achieves this performance by employing transport mechanics and electron multiple scattering distribution functions which have been derived to permit long transport steps (of the order of 5 mm) which can cross heterogeneity boundaries. The underlying algorithm is a `mixed' class simulation scheme, with differential cross sections for hard inelastic collisions and bremsstrahlung events described in an approximate manner to simplify their sampling. The continuous energy loss approximation is employed for energy losses below some predefined thresholds, and photon transport (including Compton, photoelectric absorption and pair production) is simulated in an analogue manner. The δ-scattering method (Woodcock tracking) is adopted to minimize the computational costs of transporting photons across voxels.
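
    The δ-scattering (Woodcock tracking) idea mentioned above can be sketched in a few lines: photons are tracked against the maximum cross section of the whole geometry, and tentative interaction sites are accepted with probability σ(x)/σ_max, so flights never need to stop at voxel boundaries. This is a generic illustration, not DPM's implementation.

    ```python
    import math
    import random

    def woodcock_flight(x, direction, sigma_of, sigma_max):
        """Advance a photon from position x along `direction` (unit vector)
        until a real interaction is accepted; returns the interaction site.
        sigma_of(x) gives the local total cross section at position x."""
        while True:
            s = -math.log(random.random()) / sigma_max     # flight to tentative event
            x = tuple(xi + s * di for xi, di in zip(x, direction))
            if random.random() < sigma_of(x) / sigma_max:
                return x                                    # real interaction accepted
            # otherwise a virtual ("delta") interaction: keep flying
    ```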

  18. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience.

    PubMed

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael

    2007-08-21

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm³ ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.). The dose deviations between MUV and TPS depended slightly on the distance from the isocentre position. For individual intensity-modulated beams (367 in total), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider a 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach. The physical effects modelled in the dose calculation software MUV allow accurate dose calculations at individual verification points. Independent calculations may be used to replace experimental dose verification once the IMRT programme is mature.
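
    The acceptance logic implied by the reported confidence limits can be summarized in a short helper: a verification point passes if the MUV-vs-TPS deviation is within 3% of the prescribed dose or 6 cGy (5% or 10 cGy for off-axis and low-dose points). The thresholds are taken from the abstract; the function itself is illustrative and not part of the MUV software.

    ```python
    def passes_verification(d_muv, d_tps, d_prescribed, off_axis_or_low_dose=False):
        """All doses in cGy; returns True if the point is within tolerance."""
        pct_limit, abs_limit = (5.0, 10.0) if off_axis_or_low_dose else (3.0, 6.0)
        deviation = abs(d_muv - d_tps)
        return (deviation <= abs_limit
                or 100.0 * deviation / d_prescribed <= pct_limit)

    print(passes_verification(d_muv=202.0, d_tps=198.0, d_prescribed=200.0))  # True
    ```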

  19. An annotated bibliography of selected books and articles on benchmarking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allan, F.C.

    This bibliography contains 34 references concerning utilizing benchmarking in the management of businesses. Books and articles are both cited. Methods for gathering and utilizing information are emphasized. (GHH)

  20. An annotated bibliography of selected books and articles on benchmarking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allan, F.C.

    1992-01-01

    This bibliography contains 34 references concerning utilizing benchmarking in the management of businesses. Books and articles are both cited. Methods for gathering and utilizing information are emphasized. (GHH)

  1. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. Many of the remaining problems were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode of MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance in MCNP verification methods.
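
    The verification comparison such analytic benchmarks enable reduces to a simple test: the code's computed k_eff should agree with the exact analytical value within a few standard deviations of the Monte Carlo statistical uncertainty. A minimal sketch with placeholder numbers:

    ```python
    def verify_keff(k_computed, sigma, k_exact, n_sigma=3.0):
        """True if the computed k_eff agrees with the analytical solution
        within n_sigma Monte Carlo standard deviations."""
        return abs(k_computed - k_exact) <= n_sigma * sigma

    # Placeholder values: a computed eigenvalue with 20 pcm statistical error.
    print(verify_keff(k_computed=0.99982, sigma=0.0002, k_exact=1.0))  # True
    ```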

  2. Developing Toxicogenomics as a Research Tool by Applying Benchmark Dose-Response Modeling to inform Chemical Mode of Action and Tumorigenic Potency

    EPA Science Inventory

    Results of global gene expression profiling after short-term exposures can be used to inform tumorigenic potency and chemical mode of action (MOA) and thus serve as a strategy to prioritize future or data-poor chemicals for further evaluation. This compilation of cas...

  3. CHOLINESTERASE INHIBITION AND HYPOTHERMIA FOLLOWING EXPOSURE TO BINARY MIXTURES OF ANTICHOLINESTERASE AGENTS: LACK OF EVIDENCE FOR CAUSE-AND-EFFECT

    EPA Science Inventory

    Dose-additivity has been the default assumption in risk assessments of pesticides with a common mechanism of action but it has been suspected that there could be non-additive effects. Inhibition of plasma cholinesterase (ChE) activity and hypothermia were used as benchmarks of e...

  4. Consistency evaluation between EGSnrc and Geant4 charged particle transport in an equilibrium magnetic field.

    PubMed

    Yang, Y M; Bednarz, B

    2013-02-21

    Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a Tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water-air-water slab phantom and a water-lung-water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department.
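
    For orientation, the transport modification at issue is the Lorentz deflection of each charged-particle step. The sketch below applies an explicit relativistic Euler update for F = qv × B over one path-length step; production codes such as EGSnrc and Geant4 use more sophisticated stepping schemes, so this is purely illustrative (SI units throughout).

    ```python
    import numpy as np

    def lorentz_step(pos, p, q, B, ds, mass=9.109e-31, c=2.998e8):
        """Advance position and momentum over a path length ds under F = q v x B.
        pos, p, B are 3-vectors (numpy arrays); defaults are for an electron."""
        E = np.sqrt(np.dot(p, p) * c**2 + (mass * c**2) ** 2)  # total energy
        v = p * c**2 / E                                       # relativistic velocity
        dt = ds / np.linalg.norm(v)
        p = p + q * np.cross(v, B) * dt                        # magnetic force only
        pos = pos + v * dt
        return pos, p

    # Example: ~1 MeV electron in a 1.5 T transverse field, 1 mm step.
    pos, p = lorentz_step(np.zeros(3), np.array([0.0, 0.0, 5e-22]),
                          q=-1.602e-19, B=np.array([0.0, 1.5, 0.0]), ds=1e-3)
    ```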

  5. Consistency evaluation between EGSnrc and Geant4 charged particle transport in an equilibrium magnetic field

    NASA Astrophysics Data System (ADS)

    Yang, Y. M.; Bednarz, B.

    2013-02-01

    Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a Tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water-air-water slab phantom and a water-lung-water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department.

  6. Benchmarking FEniCS for mantle convection simulations

    NASA Astrophysics Data System (ADS)

    Vynnytska, L.; Rognes, M. E.; Clark, S. R.

    2013-01-01

    This paper evaluates the usability of the FEniCS Project for mantle convection simulations by numerical comparison to three established benchmarks. The benchmark problems all concern convection processes in an incompressible fluid induced by temperature or composition variations, and cover three cases: (i) steady-state convection with depth- and temperature-dependent viscosity, (ii) time-dependent convection with constant viscosity and internal heating, and (iii) a Rayleigh-Taylor instability. These problems are modeled by the Stokes equations for the fluid and advection-diffusion equations for the temperature and composition. The FEniCS Project provides a novel platform for the automated solution of differential equations by finite element methods. In particular, it offers significant flexibility with regard to modeling and numerical discretization choices; we have here used a discontinuous Galerkin method for the numerical solution of the advection-diffusion equations. Our numerical results are in agreement with the benchmarks, and demonstrate the applicability of both the discontinuous Galerkin method and FEniCS for such applications.

  7. Benchmarking: a method for continuous quality improvement in health.

    PubMed

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  8. Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.

    PubMed

    Al-Qahtani, Ali S

    2017-05-01

    The aim of this study was to benchmark our guidelines for the prevention of venous thromboembolism (VTE) in the ENT surgical population against ENT.UK guidelines, and also to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting of this study is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmark our practice guidelines for the prevention of VTE in the ENT surgical population against ENT.UK guidelines to mitigate any gaps. The ENT guidelines 2010 were downloaded from the ENT.UK website. Our guidelines were compared against the possibilities that our performance either meets or falls short of the ENT.UK guidelines, with immediate corrective action to be taken if a quality gap existed between the two guidelines. The ENT.UK guidelines are evidence-based and up to date, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required to provide a quality service to ENT surgical patients. While not given appropriate attention, benchmarking is a useful tool for improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended for inclusion on the list of quality improvement methods for healthcare services.

  9. Benchmarking protein-protein interface predictions: why you should care about protein size.

    PubMed

    Martin, Juliette

    2014-07-01

    A number of predictive methods have been developed to predict protein-protein binding sites. Each new method is traditionally benchmarked using sets of protein structures of various sizes, and global statistics are used to assess the quality of the prediction. Little attention has been paid to the potential bias due to protein size on these statistics. Indeed, small proteins involve proportionally more residues at interfaces than large ones. If a predictive method is biased toward small proteins, this can lead to an over-estimation of its performance. Here, we investigate the bias due to the size effect when benchmarking protein-protein interface prediction on the widely used docking benchmark 4.0. First, we simulate random scores that favor small proteins over large ones. Instead of the 0.5 AUC (Area Under the Curve) value expected by chance, these biased scores result in an AUC equal to 0.6 using hypergeometric distributions, and up to 0.65 using constant scores. We then use real prediction results to illustrate how to detect the size bias by shuffling, and subsequently correct it using a simple conversion of the scores into normalized ranks. In addition, we investigate the scores produced by eight published methods and show that they are all affected by the size effect, which can change their relative ranking. The size effect also has an impact on linear combination scores by modifying the relative contributions of each method. In the future, systematic corrections should be applied when benchmarking predictive methods using data sets with mixed protein sizes. © 2014 Wiley Periodicals, Inc.
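
    The correction the abstract proposes can be sketched directly: convert each protein's raw per-residue scores into ranks normalized by the protein's length, so that small and large proteins contribute comparable score distributions to pooled statistics. A minimal version, with random placeholder scores:

    ```python
    import numpy as np

    def normalized_ranks(scores):
        """scores: 1-D array of per-residue interface scores for one protein."""
        ranks = scores.argsort().argsort()          # 0-based rank of each residue
        return (ranks + 1) / float(len(scores))     # ranks mapped into (0, 1]

    # Pool a small and a large protein on a common, size-independent scale.
    per_protein = [np.random.rand(n) for n in (50, 400)]
    pooled = np.concatenate([normalized_ranks(s) for s in per_protein])
    ```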

  10. Paradoxical Acinetobacter-associated ventilator-associated pneumonia incidence rates within prevention studies using respiratory tract applications of topical polymyxin: benchmarking the evidence base.

    PubMed

    Hurley, J C

    2018-04-10

    Regimens containing topical polymyxin appear to be more effective in preventing ventilator-associated pneumonia (VAP) than other methods. To benchmark the incidence rates of Acinetobacter-associated VAP (AAVAP) within component (control and intervention) groups from concurrent controlled studies of polymyxin compared with studies of various VAP prevention methods other than polymyxin (non-polymyxin studies). An AAVAP benchmark was derived using data from 77 observational groups without any VAP prevention method under study. Data from 41 non-polymyxin studies provided additional points of reference. The benchmarking was undertaken by meta-regression using generalized estimating equation methods. Within 20 studies of topical polymyxin, the mean AAVAP was 4.6% [95% confidence interval (CI) 3.0-6.9] and 3.7% (95% CI 2.0-5.3) for control and intervention groups, respectively. In contrast, the AAVAP benchmark was 1.5% (95% CI 1.2-2.0). In the AAVAP meta-regression model, group origin from a trauma intensive care unit (+0.55; +0.16 to +0.94, P = 0.006) or membership of a polymyxin control group (+0.64; +0.21 to +1.31, P = 0.023), but not membership of a polymyxin intervention group (+0.24; -0.37 to +0.84, P = 0.45), were significant positive correlates. The mean incidence of AAVAP within the control groups of studies of topical polymyxin is more than double the benchmark, whereas the incidence rates within the groups of non-polymyxin studies and, paradoxically, polymyxin intervention groups are more similar to the benchmark. These incidence rates, which are paradoxical in the context of an apparent effect against VAP within controlled trials of topical polymyxin-based interventions, force a re-appraisal. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
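
    A hedged sketch of benchmarking by meta-regression with generalized estimating equations, in the spirit described. Here the empirical logit of each group's incidence proportion is regressed on group-level covariates, with the parent study as the clustering unit; the data frame is entirely hypothetical, and the study's actual model specification may differ.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-group data: AAVAP events out of n patients, with
    # group-level covariates and the parent study as the cluster.
    df = pd.DataFrame({
        "events": [3, 1, 5, 2], "n": [60, 80, 50, 70],
        "trauma_icu": [1, 0, 1, 0], "polymyxin_control": [0, 0, 1, 1],
        "study_id": [1, 1, 2, 2],
    })
    # Empirical logit of the incidence proportion as the response.
    df["logit_p"] = np.log((df["events"] + 0.5) / (df["n"] - df["events"] + 0.5))
    model = smf.gee("logit_p ~ trauma_icu + polymyxin_control",
                    groups="study_id", data=df)
    print(model.fit().summary())
    ```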

  11. Recalculation with SEACAB of the activation by spent fuel neutrons and residual dose originated in the racks replaced at Cofrentes NPP

    NASA Astrophysics Data System (ADS)

    Ortego, Pedro; Rodriguez, Alain; Töre, Candan; Compadre, José Luis de Diego; Quesada, Baltasar Rodriguez; Moreno, Raul Orive

    2017-09-01

    In order to increase the storage capacity of the East Spent Fuel Pool at the Cofrentes NPP, located in Valencia province, Spain, the existing stainless steel storage racks were replaced by a new design of compact borated stainless steel racks allowing a 65% increase in fuel storage capacity. Calculation of the activation of the used racks was successfully performed with the MCNP4B code. Additionally, the dose rate in contact with a row of racks in standing position and behind a wall of shielding material was calculated using the MCNP4B code as well. These results allowed a preliminary definition of the bunker required for the storage of the racks. Recently, the activity in the racks has been recalculated with the SEACAB system, which combines the mesh tally of the MCNP codes with the activation code ACAB, applying the rigorous two-step (R2S) method developed in-house, benchmarked against FNG irradiation experiments and usually applied in fusion calculations for the ITER project.

  12. Poster — Thur Eve — 61: A new framework for MPERT plan optimization using MC-DAO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, M; Lloyd, S AM; Townson, R

    2014-08-15

    This work combines the inverse planning technique known as Direct Aperture Optimization (DAO) with Intensity Modulated Radiation Therapy (IMRT) and combined electron and photon therapy plans. In particular, determining conditions under which Modulated Photon/Electron Radiation Therapy (MPERT) produces better dose conformality and sparing of organs at risk than traditional IMRT plans is central to the project. Presented here are the materials and methods used to generate and manipulate the DAO procedure. Included is the introduction of a powerful Java-based toolkit, the Aperture-based Monte Carlo (MC) MPERT Optimizer (AMMO), that serves as a framework for optimization and provides streamlined access to underlying particle transport packages. Comparison of the toolkit's dose calculations to those produced by the Eclipse TPS and the demonstration of a preliminary optimization are presented as first benchmarks. Excellent agreement is illustrated between the Eclipse TPS and AMMO for a 6 MV photon field. The results of a simple optimization show the functioning of the optimization framework, while significant research remains to characterize appropriate constraints.

  13. Monte Carlo simulations of patient dose perturbations in rotational-type radiotherapy due to a transverse magnetic field: A tomotherapy investigation

    PubMed Central

    Yang, Y. M.; Geurts, M.; Smilowitz, J. B.; Sterpin, E.; Bednarz, B. P.

    2015-01-01

    Purpose: Several groups are exploring the integration of magnetic resonance (MR) image guidance with radiotherapy to reduce tumor position uncertainty during photon radiotherapy. The therapeutic gain from reducing tumor position uncertainty using intrafraction MR imaging during radiotherapy could be partially offset if the negative effects of magnetic field-induced dose perturbations are not appreciated or accounted for. The authors hypothesize that a more rotationally symmetric modality such as helical tomotherapy will permit a systematic mediation of these dose perturbations. This investigation offers a unique look at the dose perturbations due to a homogeneous transverse magnetic field during the delivery of Tomotherapy® Treatment System plans under varying degrees of rotational beamlet symmetry. Methods: The authors accurately reproduced treatment plan beamlet and patient configurations using the Monte Carlo code geant4. This code has a thoroughly benchmarked electromagnetic particle transport physics package well-suited for the radiotherapy energy regime. The three approved clinical treatment plans for this study were for a prostate, head and neck, and lung treatment. The dose heterogeneity index metric was used to quantify the effect of the dose perturbations to the target volumes. Results: The authors demonstrate the ability to reproduce the clinical dose–volume histograms (DVH) to within 4% dose agreement at each DVH point for the target volumes and most planning structures, and therefore, are able to confidently examine the effects of transverse magnetic fields on the plans. The authors investigated field strengths of 0.35, 0.7, 1, 1.5, and 3 T. Changes to the dose heterogeneity index of 0.1% were seen in the prostate and head and neck cases, reflecting negligible dose perturbations to the target volumes; a change from 5.5% to 20.1% was observed for the lung case. Conclusions: This study demonstrated that the effect of external magnetic fields can be mitigated by exploiting a more rotationally symmetric treatment modality. PMID:25652485
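
    For reference, the cumulative dose-volume histogram (DVH) that the authors compare against is straightforward to compute from voxel doses within a structure. A minimal sketch with placeholder data:

    ```python
    import numpy as np

    def cumulative_dvh(dose_in_structure, bins=100):
        """Return (dose levels, fraction of structure volume receiving >= each level)."""
        edges = np.linspace(0.0, dose_in_structure.max(), bins)
        volume_fraction = np.array([(dose_in_structure >= d).mean() for d in edges])
        return edges, volume_fraction

    voxel_doses = np.random.normal(60.0, 2.0, size=10000)  # placeholder target doses, Gy
    d, v = cumulative_dvh(voxel_doses)
    ```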

  14. Direct potable reuse microbial risk assessment methodology: Sensitivity analysis and application to State log credit allocations.

    PubMed

    Soller, Jeffrey A; Eftim, Sorina E; Nappier, Sharon P

    2018-01-01

    Understanding pathogen risks is a critically important consideration in the design of water treatment, particularly for potable reuse projects. As an extension to our published microbial risk assessment methodology to estimate infection risks associated with Direct Potable Reuse (DPR) treatment train unit process combinations, herein, we (1) provide an updated compilation of pathogen density data in raw wastewater and dose-response models; (2) conduct a series of sensitivity analyses to consider potential risk implications using updated data; (3) evaluate the risks associated with log credit allocations in the United States; and (4) identify reference pathogen reductions needed to consistently meet currently applied benchmark risk levels. Sensitivity analyses illustrated changes in cumulative annual risks estimates, the significance of which depends on the pathogen group driving the risk for a given treatment train. For example, updates to norovirus (NoV) raw wastewater values and use of a NoV dose-response approach, capturing the full range of uncertainty, increased risks associated with one of the treatment trains evaluated, but not the other. Additionally, compared to traditional log-credit allocation approaches, our results indicate that the risk methodology provides more nuanced information about how consistently public health benchmarks are achieved. Our results indicate that viruses need to be reduced by 14 logs or more to consistently achieve currently applied benchmark levels of protection associated with DPR. The refined methodology, updated model inputs, and log credit allocation comparisons will be useful to regulators considering DPR projects and design engineers as they consider which unit treatment processes should be employed for particular projects. Published by Elsevier Ltd.
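
    The core risk arithmetic described above can be sketched compactly: apply the treatment train's log reductions to a raw-wastewater pathogen density, convert the resulting dose to a per-exposure infection probability with a dose-response model, and compound daily exposures into an annual risk. The exponential model and all parameter values below are placeholders, not the paper's inputs.

    ```python
    import math

    def annual_risk(raw_density, log_credits, volume_l, k, days=365):
        """raw_density: organisms/L in raw wastewater; log_credits: treatment
        train log10 reduction; volume_l: daily ingestion; k: dose-response slope."""
        dose = raw_density * 10.0 ** (-log_credits) * volume_l   # organisms ingested/day
        p_daily = 1.0 - math.exp(-k * dose)                      # exponential dose-response
        return 1.0 - (1.0 - p_daily) ** days                     # compounded annual risk

    risk = annual_risk(raw_density=1e5, log_credits=14, volume_l=2.0, k=0.5)
    print(risk <= 1e-4)   # compare with a 10^-4 annual infection benchmark
    ```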

  15. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still image as query or target. To the best of our knowledge, few datasets and evaluation protocols have been established to benchmark all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and our COX Face DB is a good benchmark database for evaluation.

  16. SU-F-T-186: A Treatment Planning Study of Normal Tissue Sparing with Robustness Optimized IMPT, 4Pi IMRT, and VMAT for Head and Neck Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Li, X; Ding, X

    Purpose: We performed a retrospective dosimetric comparison study of robustness-optimized Intensity Modulated Proton Therapy (RO-IMPT), volumetric-modulated arc therapy (VMAT), and non-coplanar 4π intensity modulated radiation therapy (IMRT). These methods represent the most advanced radiation treatment methods clinically available. We compare their dosimetric performance for head and neck cancer treatments with special focus on OAR sparing near the tumor volumes. Methods: A total of 11 head and neck cases, which include 10 recurrent cases and one bilateral case, were selected for the study. Different dose levels were prescribed to the tumor targets depending on disease and location. Three treatment plans were created on commercial TPS systems for a novel non-coplanar 4π method (20 beams), VMAT, and the RO-IMPT technique (maximum 4 fields). The maximum patient positioning error was set to 3 mm and the maximum proton range uncertainty was set to 3% for the robustness optimization. Line dose profiles were investigated for OARs close to tumor volumes. Results: All three techniques achieved 98% coverage of the CTV target and most photon plans had hot spots of less than 110%. The RO-IMPT plans showed superior tumor dose homogeneity compared with the 4π and VMAT plans. Although RO-IMPT has greater R50 dose spillage to the surrounding normal tissue than 4π and VMAT, the RO-IMPT plans demonstrate better or comparable OAR sparing (parotid, mandible, carotid, oral cavity, pharynx, etc.) for structures closely abutting tumor targets. Conclusion: RO-IMPT's OAR-sparing ability is benchmarked against the C-arm linac based non-coplanar 4π technique and the standard VMAT method. RO-IMPT consistently shows better or comparable OAR sparing even for tissue structures closely abutting the treatment target volume. RO-IMPT further reduces treatment uncertainty associated with proton therapy and delivers robust treatment plans to both unilateral and bilateral head and neck cancer patients with desirable treatment times.

  17. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres.

    PubMed

    van Lent, Wineke A M; de Beer, Relinde D; van Harten, Wim H

    2010-08-31

    Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple case study, a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. We adapted and evaluated existing benchmarking processes by formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators, to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were identified, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. The improved benchmarking process and the success factors can provide relevant input for improving the operations management of specialty hospitals.

  18. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants: 1994 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  19. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Terrestrial Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II

    1993-01-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  20. The derivation of water quality criteria of copper in Biliu River

    NASA Astrophysics Data System (ADS)

    Zheng, Hongbo; Jia, Xinru

    2018-03-01

    Excessive copper in water can be detrimental to the health of humans and aquatic life. China has promulgated Environmental Quality Standards for Surface Water to control water pollution, but uniform standard values may cause under-protection or over-protection. Therefore, basic research on the water quality criteria of a water source or reservoir is urgently needed. This study derives the acute and chronic water quality criteria (WQC) for copper in the Biliu River using the species sensitivity distribution (SSD) method. The results show that BiDoseResp is the most suitable model and that the acute and chronic water quality benchmarks for copper are 10.72 µg·L⁻¹ and 5.86 µg·L⁻¹, respectively. This study provides a basis for the construction of the water quality standard of Liaoning and the environmental management of the Biliu River.
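
    A hedged sketch of the SSD approach: fit a distribution to species toxicity endpoints and take the 5th percentile (HC5) as the basis of the criterion. The paper selects a BiDoseResp model, which is not reproduced here; a log-normal SSD with placeholder data stands in below.

    ```python
    import numpy as np
    from scipy.stats import lognorm

    tox_ugL = np.array([6.5, 12.0, 18.0, 30.0, 55.0, 110.0])  # species endpoints, ug/L
    shape, loc, scale = lognorm.fit(tox_ugL, floc=0)          # log-normal SSD
    hc5 = lognorm.ppf(0.05, shape, loc=loc, scale=scale)      # hazardous conc. to 5% of species
    print("HC5 = %.2f ug/L" % hc5)
    ```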

  1. High-Accuracy Finite Element Method: Benchmark Calculations

    NASA Astrophysics Data System (ADS)

    Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel

    2018-02-01

    We describe a new high-accuracy finite element scheme with simplex elements for solving elliptic boundary-value problems and show its efficiency on benchmark solutions of the Helmholtz equation for the triangular membrane and the hypercube.

  2. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    A previously reported dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Dose calculations of the depth dose and profiles for field sizes of 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLDs). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a single methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.

  3. Benchmarking image fusion system design parameters

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that human task performance using image fusion should be benchmarked against whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters, using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment, in which human observers were asked to identify a standard set of military targets, are presented and used to demonstrate the effectiveness of the benchmarking process.
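
    A minimal sketch of the optimization loop described: a genetic algorithm searching fused-system parameters so that the modeled task-difficulty measure approaches the benchmark value. The `task_difficulty` function is a hypothetical stand-in for the NVThermIP objective, which is not shown here.

    ```python
    import random

    def task_difficulty(params):                   # hypothetical objective function
        return sum((p - 0.5) ** 2 for p in params)

    def ga(benchmark, n_params=4, pop=30, gens=50, mut=0.1):
        """Minimize |task_difficulty(x) - benchmark| over real-valued parameters."""
        P = [[random.random() for _ in range(n_params)] for _ in range(pop)]
        fitness = lambda x: abs(task_difficulty(x) - benchmark)
        for _ in range(gens):
            P.sort(key=fitness)
            elite = P[: pop // 2]                  # keep the best half
            children = []
            for _ in range(pop - len(elite)):
                a, b = random.sample(elite, 2)
                child = [random.choice(g) for g in zip(a, b)]            # crossover
                children.append([g + random.gauss(0, mut) for g in child])  # mutation
            P = elite + children
        return min(P, key=fitness)

    best_params = ga(benchmark=0.2)
    ```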

  4. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of ''contaminant screening,'' performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.

  5. The Army Pollution Prevention Program: Improving Performance Through Benchmarking.

    DTIC Science & Technology

    1995-06-01

    Washington, DC 20503. 1. AGENCY USE ONLY (Leave Blank) 2. REPORT DATE June 1995 3. REPORT TYPE AND DATES COVERED Final 4. TITLE AND SUBTITLE...unlimited 12b. DISTRIBUTION CODE 13. ABSTRACT (Maximum 200 words) This report investigates the feasibility of using benchmarking as a method for...could use to determine to what degree it should integrate benchmarking with other quality management tools to support the pollution prevention program

  6. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  7. Using Benchmarking To Strengthen the Assessment of Persistence.

    PubMed

    McLachlan, Michael S; Zou, Hongyan; Gouin, Todd

    2017-01-03

    Chemical persistence is a key property for assessing chemical risk and chemical hazard. Current methods for evaluating persistence are based on laboratory tests. The relationship between the laboratory based estimates and persistence in the environment is often unclear, in which case the current methods for evaluating persistence can be questioned. Chemical benchmarking opens new possibilities to measure persistence in the field. In this paper we explore how the benchmarking approach can be applied in both the laboratory and the field to deepen our understanding of chemical persistence in the environment and create a firmer scientific basis for laboratory to field extrapolation of persistence test results.
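
    One way the benchmarking idea can be applied in the field, sketched below under the assumption of first-order loss: regress the log concentration ratio of the test chemical to a benchmark chemical against time; the slope gives the difference in rate constants, from which the test chemical's half-life follows if the benchmark's is known. All numbers are placeholders.

    ```python
    import numpy as np

    t_days = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
    ratio = np.array([1.00, 0.90, 0.83, 0.68, 0.47])     # C_test / C_benchmark over time
    slope = np.polyfit(t_days, np.log(ratio), 1)[0]      # = -(k_test - k_bench)
    k_bench = 0.010                                      # per day, assumed known
    k_test = k_bench - slope
    print("test chemical half-life: %.1f days" % (np.log(2) / k_test))
    ```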

  8. Metastable Autoionizing States of Molecules and Radicals in Highly Energetic Environment

    DTIC Science & Technology

    2016-03-22

    electronic states. The specific aims are to develop and calibrate complex-scaled equation-of-motion coupled cluster (cs-EOM- CC ) and CAP (complex...absorbing potential) augmented EOM- CC methods. We have implemented and benchmarked cs-EOM-CCSD and CAP- augmented EOM-CCSD methods for excitation energies...motion coupled cluster (cs-EOM- CC ) and CAP (complex absorbing potential) augmented EOM- CC methods. We have implemented and benchmarked cs-EOM-CCSD and

  9. GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Blair Briggs; John D. Bess; Jim Gulliford

    2011-09-01

    Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated on the ICSBEP in 2007. Now, there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' [1] have increased from 442 evaluations (38,000 pages), containing benchmark specifications for 3,955 critical or subcritical configurations, to 516 evaluations (nearly 55,000 pages), containing benchmark specifications for 4,405 critical or subcritical configurations in the 2010 Edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' [2] have increased from 16 different experimental series that were performed at 12 different reactor facilities to 53 experimental series that were performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project), and to verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the ICSBEP and the IRPhEP will be discussed in the full paper, selected benchmarks that have been added to the ICSBEP Handbook will be highlighted, and a preview of the new benchmarks that will appear in the September 2011 edition of the Handbook will be provided. Accomplishments of the IRPhEP will also be highlighted and the future of both projects will be discussed. REFERENCES: (1) International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03/I-IX, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), September 2010 Edition, ISBN 978-92-64-99140-8. (2) International Handbook of Evaluated Reactor Physics Benchmark Experiments, NEA/NSC/DOC(2006)1, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), March 2011 Edition, ISBN 978-92-64-99141-5.

  10. Conformal image-guided microbeam radiation therapy at the ESRF biomedical beamline ID17

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donzelli, Mattia, E-mail: donzelli@esrf.fr; Bräuer-Krisch, Elke; Nemoz, Christian

    Purpose: Upcoming veterinary trials in microbeam radiation therapy (MRT) demand more advanced irradiation techniques than in preclinical research with small animals. The treatment of deep-seated tumors in cats and dogs with MRT requires sophisticated irradiation geometries from multiple ports, which impose further efforts to spare the normal tissue surrounding the target. Methods: This work presents the development and benchmarking of a precise patient alignment protocol for MRT at the biomedical beamline ID17 of the European Synchrotron Radiation Facility (ESRF). The positioning of the patient prior to irradiation is verified by taking x-ray projection images from different angles. Results: Using four external fiducial markers of 1.7 mm diameter and computed tomography-based treatment planning, a target alignment error of less than 2 mm can be achieved with an angular deviation of less than 2°. Minor improvements to the protocol and the use of smaller markers indicate that even a precision better than 1 mm is technically feasible. Detailed investigations concerning the imaging dose lead to the conclusion that doses for skull radiographs lie in the same range as dose reference levels for human head radiographs. A currently used online dose monitor for MRT has been proven to give reliable results for the imaging beam. Conclusions: The ESRF biomedical beamline ID17 is technically ready to apply conformal image-guided MRT from multiple ports to large animals during future veterinary trials.

  11. Surgeon-Specific Reports in General Surgery: Establishing Benchmarks for Peer Comparison Within a Single Hospital.

    PubMed

    Hatfield, Mark D; Ashton, Carol M; Bass, Barbara L; Shirkey, Beverly A

    2016-02-01

    Methods to assess a surgeon's individual performance based on clinically meaningful outcomes have not been fully developed, due to small numbers of adverse outcomes and wide variation in case volumes. The Achievable Benchmark of Care (ABC) method addresses these issues by identifying benchmark-setting surgeons with high levels of performance and greater case volumes. This method was used to help surgeons compare their surgical practice to that of their peers by using merged National Surgical Quality Improvement Program (NSQIP) and Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) data to generate surgeon-specific reports. A retrospective cohort study at a single institution's department of surgery was conducted involving 107 surgeons (8,660 cases) over 5.5 years. Stratification of more than 32,000 CPT codes into 16 CPT clusters served as the risk adjustment. Thirty-day outcomes of interest included surgical site infection (SSI), acute kidney injury (AKI), and mortality. Performance characteristics of the ABC method were explored by examining how many surgeons were identified as benchmark-setters in view of volume and outcome rates within CPT clusters. For the data captured, most surgeons performed cases spanning a median of 5 CPT clusters (range 1 to 15 clusters), with a median of 26 cases (range 1 to 776 cases) and a median of 2.8 years (range 0 to 5.5 years). The highest volume surgeon for that CPT cluster set the benchmark for 6 of 16 CPT clusters for SSIs, 8 of 16 CPT clusters for AKIs, and 9 of 16 CPT clusters for mortality. The ABC method appears to be a sound and useful approach to identifying benchmark-setting surgeons within a single institution. Such surgeons may be able to help their peers improve their performance. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
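
    A sketch of the ABC computation as commonly described in the literature: rank providers by an adjusted performance fraction (a shrinkage estimate that protects against tiny denominators), then pool the best performers until they cover at least 10% of all cases; the pooled rate is the benchmark. The authors' implementation details may differ.

    ```python
    def abc_benchmark(providers, coverage=0.10):
        """providers: list of (adverse_events, cases) per surgeon; lower is better."""
        total = sum(n for _, n in providers)
        apf = lambda ev, n: (ev + 1.0) / (n + 2.0)   # adjusted performance fraction
        ranked = sorted(providers, key=lambda p: apf(*p))
        pooled_ev = pooled_n = 0
        for ev, n in ranked:                         # accumulate best performers first
            pooled_ev, pooled_n = pooled_ev + ev, pooled_n + n
            if pooled_n >= coverage * total:
                break
        return pooled_ev / pooled_n                  # benchmark adverse-event rate

    print(abc_benchmark([(0, 15), (1, 120), (4, 60), (9, 200)]))
    ```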

  12. Rural and urban transit district benchmarking : effectiveness and efficiency guidance document.

    DOT National Transportation Integrated Search

    2011-05-01

    Rural and urban transit systems have sought ways to compare performance across agencies, : identifying successful service delivery strategies and applying these concepts to achieve : successful results within their agency. Benchmarking is a method us...

  13. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  14. A call for benchmarking transposable element annotation methods.

    PubMed

    Hoen, Douglas R; Hickey, Glenn; Bourque, Guillaume; Casacuberta, Josep; Cordaux, Richard; Feschotte, Cédric; Fiston-Lavier, Anna-Sophie; Hua-Van, Aurélie; Hubley, Robert; Kapusta, Aurélie; Lerat, Emmanuelle; Maumus, Florian; Pollock, David D; Quesneville, Hadi; Smit, Arian; Wheeler, Travis J; Bureau, Thomas E; Blanchette, Mathieu

    2015-01-01

    DNA derived from transposable elements (TEs) constitutes large parts of the genomes of complex eukaryotes, with major impacts not only on genomic research but also on how organisms evolve and function. Although a variety of methods and tools have been developed to detect and annotate TEs, there are as yet no standard benchmarks, that is, no standard way to measure or compare their accuracy. This lack of accuracy assessment calls into question conclusions from a wide range of research that depends explicitly or implicitly on TE annotation. In the absence of standard benchmarks, toolmakers are impeded in improving their tools, annotators cannot properly assess which tools might best suit their needs, and downstream researchers cannot judge how accuracy limitations might impact their studies. We therefore propose that the TE research community create and adopt standard TE annotation benchmarks, and we call for other researchers to join the authors in making this long-overdue effort a success.

  15. SU-F-T-513: Dosimetric Validation of Spatially Fractionated Radiotherapy Using Gel Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papanikolaou, P; Watts, L; Kirby, N

    2016-06-15

    Purpose: Spatially fractionated radiation therapy, also known as GRID therapy, is used to treat large solid tumors by irradiating the target to a single dose of 10–20 Gy through spatially distributed beamlets. We have investigated the use of a 3D gel for dosimetric characterization of GRID therapy. Methods: GRID therapy is an external beam analog of volumetric brachytherapy, whereby we produce a distribution of hot and cold dose columns inside the tumor volume. Such a distribution can be produced with a block or by using a checker-like pattern with MLC. We have studied both types of GRID delivery. A cube-shaped acrylic phantom was filled with polymer gel and served as a 3D dosimeter. The phantom was scanned and the CT images were used to produce two plans in Pinnacle, one with the grid block and one with the MLC-defined grid. A 6 MV beam was used for the plan with a prescription of 1500 cGy at dmax. The irradiated phantom was scanned in a 3T MRI scanner. Results: 3D dose maps were derived from the MR scans of the gel dosimeter and were found to be in good agreement with the predicted dose distribution from the RTP system. Gamma analysis showed a passing rate of 93% for 5% dose and 2 mm DTA scoring criteria. Both relative and absolute dose profiles are in good agreement, except in the peripheral beamlets where the gel measured slightly higher dose, possibly because of the changing head scatter conditions that the RTP is not fully accounting for. Our results have also been benchmarked against ionization chamber measurements. Conclusion: We have investigated the use of a polymer gel for the 3D dosimetric characterization and evaluation of GRID therapy. Our results demonstrated that the planning system can predict fairly accurately the dose distribution for GRID type therapy.
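
    As a reading aid for the 93% pass rate quoted above, here is a simplified one-dimensional gamma-index sketch using the same 5%/2 mm criteria. Clinical gamma analysis is done in 3D with interpolation; the profiles below are invented, so this only illustrates the pass-rate bookkeeping:

        import numpy as np

        def gamma_1d(ref, meas, x, dose_tol=0.05, dta_mm=2.0):
            dmax = ref.max()
            gammas = np.empty(len(meas))
            for i, (xi, di) in enumerate(zip(x, meas)):
                dd = (di - ref) / (dose_tol * dmax)         # dose-difference term
                dist = (xi - x) / dta_mm                    # distance-to-agreement term
                gammas[i] = np.sqrt(dd**2 + dist**2).min()  # min over reference points
            return gammas

        x = np.linspace(0.0, 50.0, 251)                     # position, mm
        ref = np.exp(-((x - 25.0) / 8.0) ** 2)              # toy reference profile
        meas = 1.02 * np.exp(-((x - 25.5) / 8.0) ** 2)      # toy measured profile
        g = gamma_1d(ref, meas, x)
        print(f"gamma passing rate: {100 * (g <= 1).mean():.1f}%")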

  16. Poster — Thur Eve — 36: Implementation of constant dose rate and gantry speed arc therapy(CDR-CAS-IMAT) for thoracic esophageal carcinoma on Varian 23EX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Ruohui; Department of Medical Physics, Medical Faculty Mannheim, Heidelberg University; Fan, Xiaomei

    2014-08-15

    Objective: The purpose of this study is to propose an alternative planning approach for VMAT using constant dose rate and constant gantry speed arc therapy (CDR-CAS-IMAT) implemented on a conventional Varian 23EX linac, with IMRT used as a benchmark to evaluate performance. Methods and materials: Eighteen patients with thoracic esophageal carcinoma who were previously treated with IMRT on the Varian 23EX were retrospectively re-planned with CDR-CAS-IMAT. The dose prescription was 60 Gy to the PTVs in 30 fractions. The planning objectives for the PTVs and OARs matched those of the IMRT plans. Doses to the PTVs and OARs were compared to IMRT with respect to plan quality, MU, treatment time, and delivery accuracy. Results: CDR-CAS-IMAT plans provided plan quality equivalent or superior to IMRT; the PTV conformity index showed a relative increase of 16.2%, while small deviations were observed in the PTV minimum dose. The cord volume receiving 40 Gy increased from 3.6% with IMRT to 7.0%. Treatment times were reduced significantly with CDR-CAS-IMAT (mean 85.7 s vs. 232.1 s, p < .05); however, MU increased by a factor of 1.3, and lung V10/V5/V3.5/mean dose showed relative increases of 6.7%, 12%, 17.9%, and 4.2%, respectively. The E-P low-dose volume increased while the high-dose volume decreased. There was no significant difference in Delta4 measurement results between the two planning techniques. Conclusion: CDR-CAS-IMAT can be implemented smoothly and quickly in a busy cancer center; it improved the PTV conformity index and reduced treatment time, but increased MU and the low-dose irradiated volume. Weight loss should be evaluated during treatment for CDR-CAS-IMAT patients.

  17. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    PubMed

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

    The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a 30 cm diameter and 20 cm length cylinder. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst +/- 5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately +/- 2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.

  18. Benchmark results for few-body hypernuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruffino, Fabrizio Ferrari; Lonardoni, Diego; Barnea, Nir

    2017-03-16

    Here, the Non-Symmetrized Hyperspherical Harmonics method (NSHH) is introduced in the hypernuclear sector and benchmarked with three different ab-initio methods, namely the Auxiliary Field Diffusion Monte Carlo method, the Faddeev–Yakubovsky approach and the Gaussian Expansion Method. Binding energies and hyperon separation energies of three- to five-body hypernuclei are calculated by employing the two-body ΛN component of the phenomenological Bodmer–Usmani potential, and a hyperon-nucleon interaction simulating the scattering phase shifts given by NSC97f. The range of applicability of the NSHH method is briefly discussed.

  19. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    PubMed

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.

  1. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    NASA Astrophysics Data System (ADS)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.

  2. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrie, Michael; Shadwick, B. A.

    2016-01-04

    Here, we present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviors that do not exist in the non-relativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  3. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrié, Michael, E-mail: mcarrie2@unl.edu; Shadwick, B. A., E-mail: shadwick@mailaps.org

    2016-01-15

    We present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  4. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
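
    As a minimal illustration of the two statistics advocated above, computed on an invented set of absolute errors (the paper's reference datasets are not reproduced here):

        import numpy as np

        # Invented absolute errors standing in for a benchmark's |calc - ref| values.
        abs_err = np.abs(np.random.default_rng(0).normal(0.5, 1.5, size=200))

        threshold = 1.0                            # chosen accuracy target, same units
        p_below = (abs_err <= threshold).mean()    # (1) P(|error| <= threshold)
        q95 = np.quantile(abs_err, 0.95)           # (2) error amplitude at 95% confidence

        # The standard error of the benchmark statistic shrinks with dataset size N,
        # which is why the authors ask for it to be published.
        se_p = np.sqrt(p_below * (1 - p_below) / len(abs_err))
        print(f"P(|err| <= {threshold}) = {p_below:.2f} +/- {se_p:.2f}, Q95 = {q95:.2f}")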

  5. How to benchmark methods for structure-based virtual screening of large compound libraries.

    PubMed

    Christofferson, Andrew J; Huang, Niu

    2012-01-01

    Structure-based virtual screening is a useful computational technique for ligand discovery. To systematically evaluate different docking approaches, it is important to have a consistent benchmarking protocol that is both relevant and unbiased. Here, we describe the design of a benchmarking data set for docking screen assessment, a standard docking screening process, and the analysis and presentation of the enrichment of annotated ligands against a background decoy database.
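
    A sketch of the enrichment bookkeeping such a protocol implies, assuming lower docking scores are better; the function and all scores below are hypothetical:

        # Enrichment factor at a given fraction of the ranked database: how much
        # more often annotated ligands appear near the top than expected by chance.
        def enrichment_factor(ligand_scores, decoy_scores, top_frac=0.01):
            ranked = sorted([(s, 1) for s in ligand_scores] +
                            [(s, 0) for s in decoy_scores])    # lower score = better
            n_top = max(1, int(top_frac * len(ranked)))
            hits = sum(label for _, label in ranked[:n_top])
            return (hits / n_top) / (len(ligand_scores) / len(ranked))

        print(enrichment_factor([-12.1, -10.5, -9.8],
                                [-11.0, -8.0, -7.5, -7.0, -6.0]))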

  6. Using chemical benchmarking to determine the persistence of chemicals in a Swedish lake.

    PubMed

    Zou, Hongyan; Radke, Michael; Kierkegaard, Amelie; MacLeod, Matthew; McLachlan, Michael S

    2015-02-03

    It is challenging to measure the persistence of chemicals under field conditions. In this work, two approaches for measuring persistence in the field were compared: the chemical mass balance approach, and a novel chemical benchmarking approach. Ten pharmaceuticals, an X-ray contrast agent, and an artificial sweetener were studied in a Swedish lake. Acesulfame K was selected as a benchmark to quantify persistence using the chemical benchmarking approach. The 95% confidence intervals of the half-life for transformation in the lake system ranged from 780-5700 days for carbamazepine to <1-2 days for ketoprofen. The persistence estimates obtained using the benchmarking approach agreed well with those from the mass balance approach (1-21% difference), indicating that chemical benchmarking can be a valid and useful method to measure the persistence of chemicals under field conditions. Compared to the mass balance approach, the benchmarking approach partially or completely eliminates the need to quantify mass flow of chemicals, so it is particularly advantageous when the quantification of mass flow of chemicals is difficult. Furthermore, the benchmarking approach allows for ready comparison and ranking of the persistence of different chemicals.
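
    A strongly simplified sketch of the benchmarking idea: if the benchmark (acesulfame K) is conserved, the inlet-to-outlet decline of the test-to-benchmark concentration ratio over the water residence time yields a first-order transformation rate constant. This illustrates the principle only, omits the paper's full mass-balance machinery, and uses invented numbers:

        import math

        ratio_in, ratio_out = 0.80, 0.35   # C_test / C_benchmark at inlet and outlet
        residence_days = 60.0              # hydraulic residence time of the lake

        k = math.log(ratio_in / ratio_out) / residence_days
        print(f"half-life ~ {math.log(2) / k:.0f} days")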

  7. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  8. Inverse treatment planning for spinal robotic radiosurgery: an international multi-institutional benchmark trial.

    PubMed

    Blanck, Oliver; Wang, Lei; Baus, Wolfgang; Grimm, Jimm; Lacornerie, Thomas; Nilsson, Joakim; Luchkovskyi, Sergii; Cano, Isabel Palazon; Shou, Zhenyu; Ayadi, Myriam; Treuer, Harald; Viard, Romain; Siebert, Frank-Andre; Chan, Mark K H; Hildebrandt, Guido; Dunst, Jürgen; Imhoff, Detlef; Wurster, Stefan; Wolff, Robert; Romanelli, Pantaleo; Lartigau, Eric; Semrau, Robert; Soltys, Scott G; Schweikard, Achim

    2016-05-08

    Stereotactic radiosurgery (SRS) is the accurate, conformal delivery of high-dose radiation to well-defined targets while minimizing normal structure doses via steep dose gradients. While inverse treatment planning (ITP) with computerized optimization algorithms is routine, many aspects of the planning process remain user-dependent. We performed an international, multi-institutional benchmark trial to study planning variability and to analyze preferable ITP practice for spinal robotic radiosurgery. Ten SRS treatment plans were generated for a complex-shaped spinal metastasis with 21 Gy in 3 fractions and tight constraints for spinal cord (V14Gy < 2 cc, V18Gy < 0.1 cc) and target (coverage > 95%). The resulting plans were rated on a scale from 1 to 4 (excellent-poor) in five categories (constraint compliance, optimization goals, low-dose regions, ITP complexity, and clinical acceptability) by a blinded review panel. Additionally, the plans were mathematically rated based on plan indices (critical structure and target doses, conformity, monitor units, normal tissue complication probability, and treatment time) and compared to the human rankings. The treatment plans and the reviewers' rankings varied substantially among the participating centers. The average mean overall rank was 2.4 (1.2-4.0) and 8/10 plans were rated excellent in at least one category by at least one reviewer. The mathematical rankings agreed with the mean overall human rankings in 9/10 cases, pointing toward the possibility of sole mathematical plan quality comparison. The final rankings revealed that a plan with a well-balanced trade-off among all planning objectives was preferred for treatment by most participants, reviewers, and the mathematical ranking system. Furthermore, this plan was generated with simple planning techniques. Our multi-institutional planning study found wide variability in ITP approaches for spinal robotic radiosurgery. The agreement among participants, reviewers, and the mathematical ranking on preferable treatment plans and ITP techniques indicates that consensus on treatment planning and plan quality can be reached for spinal robotic radiosurgery.

  9. Derivation of a no-significant-risk-level for tetrabromobisphenol A based on a threshold non-mutagenic cancer mode of action.

    PubMed

    Pecquet, Alison M; Martinez, Jeanelle M; Vincent, Melissa; Erraguntla, Neeraja; Dourson, Michael

    2018-06-01

    A no-significant-risk-level of 20 mg/day was derived for tetrabromobisphenol A (TBBPA). Uterine tumors (adenomas, adenocarcinomas, and malignant mixed Müllerian) observed in female Wistar Han rats from a National Toxicology Program 2-year cancer bioassay were identified as the critical effect. Studies suggest that TBBPA is acting through a non-mutagenic mode of action. Thus, the most appropriate approach to derivation of a cancer risk value based on US Environmental Protection Agency guidelines is a threshold approach, akin to a cancer safe dose (RfDcancer). Using the National Toxicology Program data, we utilized Benchmark dose software to derive a benchmark dose lower limit (BMDL10) as the point of departure (POD) of 103 mg/kg-day. The POD was adjusted to a human equivalent dose of 25.6 mg/kg-day using allometric scaling. We applied a composite adjustment factor of 100 to the POD to derive an RfDcancer of 0.26 mg/kg-day. Based on a human body weight of 70 kg, the RfDcancer was adjusted to a no-significant-risk-level of 20 mg/day. This was compared to other available non-cancer and cancer risk values, and aligns well with our understanding of the underlying biology based on the toxicology data. Overall, the weight of evidence from animal studies indicates that TBBPA has low toxicity and suggests that high doses over long exposure durations are needed to induce uterine tumor formation. Future research needs include a thorough and detailed vetting of the proposed adverse outcome pathway, including further support for key events leading to uterine tumor formation and a quantitative weight of evidence analysis. Copyright © 2018 John Wiley & Sons, Ltd.
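
    The chain of numbers in this abstract reduces to simple arithmetic; a sketch reproducing it (the allometric factor is implied by the 103 to 25.6 step, and the 17.9 mg/day result is evidently rounded to 20):

        # Reproducing the abstract's derivation chain (values from the abstract).
        pod_bmdl10 = 103.0            # mg/kg-day, rat BMDL10 point of departure
        daf = 25.6 / 103.0            # allometric (BW^3/4) scaling factor, ~0.25
        hed = pod_bmdl10 * daf        # 25.6 mg/kg-day human equivalent dose
        rfd_cancer = hed / 100.0      # composite adjustment factor of 100
        nsrl = rfd_cancer * 70.0      # 70 kg body weight -> ~17.9, rounded to 20 mg/day
        print(hed, rfd_cancer, nsrl)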

  10. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Jinghao; Kim, Sung; Jabbour, Salma

    2010-03-15

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidean distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to 6.54 mm for ASM. The volume overlap ratio ranged from 79% to 91% for ACRASM and from 44% to 80% for ASM. These data demonstrated that the segmentation results of ACRASM were in better agreement with the corresponding benchmarks than those of ASM. The developed registration algorithm was quantitatively evaluated by comparing the registered target volumes from the pCT to the benchmarks on the CBCT. The mean distance and the root mean square error ranged from 0.38 to 2.2 mm and from 0.45 to 2.36 mm, respectively, between the CBCT images and the registered pCT. The mean overlap ratio of the prostate volumes ranged from 85.2% to 95% after registration. The average time of the ACRASM-based segmentation was under 1 min. The average time of the global transformation was from 2 to 4 min on two 3D volumes and the average time of the local transformation was from 20 to 34 s on two deformable superquadrics mesh models. Conclusions: A novel and fast segmentation and deformable registration method was developed to capture the transformation between the planning and treatment images for external beam radiotherapy of prostate cancers. This method increases the computational efficiency and may provide a foundation to achieve real time adaptive radiotherapy.
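
    For orientation, one common definition of a volume overlap ratio is the Dice coefficient (whether this study used Dice or another overlap measure is not stated in the abstract); a toy sketch on invented binary masks:

        import numpy as np

        # Dice overlap between two binary volumes (e.g., registered vs. benchmark).
        def dice(mask_a, mask_b):
            inter = np.logical_and(mask_a, mask_b).sum()
            return 2.0 * inter / (mask_a.sum() + mask_b.sum())

        a = np.zeros((32, 32, 32), bool); a[8:24, 8:24, 8:24] = True
        b = np.zeros((32, 32, 32), bool); b[10:26, 9:25, 8:24] = True
        print(f"overlap: {100 * dice(a, b):.1f}%")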

  11. Using Spoken Language Benchmarks to Characterize the Expressive Language Skills of Young Children With Autism Spectrum Disorders

    PubMed Central

    Weismer, Susan Ellis

    2015-01-01

    Purpose Spoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate if there were differences in variables hypothesized to influence language development at different benchmark levels. Method The communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years. Results The majority of children in the sample presented with uneven communication profiles with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not cognition or restricted and repetitive behaviors. Conclusion The spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth. PMID:26254475

  12. Toxicological benchmarks for screening potential contaminants of concern for effects on soil and litter invertebrates and heterotrophic process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessments for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as "contaminants of potential concern." This process is termed "contaminant screening." It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to soil- and litter-dwelling invertebrates, including earthworms, other micro- and macroinvertebrates, or heterotrophic bacteria and fungi. This report presents a standard method for deriving benchmarks for this purpose, sets of data concerning effects of chemicals in soil on invertebrates and soil microbial processes, and benchmarks for chemicals potentially associated with United States Department of Energy sites. In addition, it reviews the literature describing the experiments from which data were drawn for benchmark derivation. Chemicals that are found in soil at concentrations exceeding both the benchmarks and the background concentration for the soil type should be considered contaminants of potential concern.

  13. Implementing benchmarking recommendations in the Offices of Construction for the Iowa DOT

    DOT National Transportation Integrated Search

    1998-01-01

    The Iowa DOT's Offices of Construction are seeking ways to use benchmarking, the concepts of quality management, and outside facilitation to improve their methods and processes. Iowa State University researchers and the Offices of Construction Benchm...

  14. Assessing human health response in life cycle assessment using ED10s and DALYs: part 2--Noncancer effects.

    PubMed

    Pennington, David; Crettaz, Pierre; Tauxe, Annick; Rhomberg, Lorenz; Brand, Kevin; Jolliet, Olivier

    2002-10-01

    In Part 1 of this article we developed an approach for the calculation of cancer effect measures for life cycle assessment (LCA). In this article, we propose and evaluate the method for the screening of noncancer toxicological health effects. This approach draws on the noncancer health risk assessment concept of benchmark dose, while noting important differences with regulatory applications in the objectives of an LCA study. We adopt the central-tendency estimate of the toxicological effect dose inducing a 10% response over background, ED10, to provide a consistent point of departure for default linear low-dose response estimates (βED10). This explicit estimation of low-dose risks, while necessary in LCA, is in marked contrast to many traditional procedures for noncancer assessments. For pragmatic reasons, mechanistic thresholds and nonlinear low-dose response curves were not implemented in the presented framework. In essence, for the comparative needs of LCA, we propose that one initially screens alternative activities or products on the degree to which the associated chemical emissions erode their margins of exposure, which may or may not be manifested as increases in disease incidence. We illustrate the method here by deriving the βED10 slope factors from bioassay data for 12 chemicals and outline some of the possibilities for extrapolation from other more readily available measures, such as the no-observed-adverse-effect levels (NOAEL), avoiding uncertainty factors that lead to inconsistent degrees of conservatism from chemical to chemical. These extrapolations facilitated the initial calculation of slope factors for an additional 403 compounds, ranging from 10^-6 to 10^3 (risk per mg/kg-day dose). The potential consequences of the effects are taken into account in a preliminary approach by combining the βED10 with the severity measure disability-adjusted life years (DALY), providing a screening-level estimate of the potential consequences associated with exposures, integrated over time and space, to a given mass of chemical released into the environment for use in LCA.
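
    A plausible reading of the slope-factor construction, not given in formula form in this abstract: the 10% response at the ED10 is spread linearly down to zero dose, and the resulting risk is weighted by severity:

        \beta_{ED10} = \frac{0.1}{ED_{10}}, \qquad
        \mathrm{risk}(d) \approx \beta_{ED10} \, d, \qquad
        \mathrm{burden}(d) \approx \beta_{ED10} \, d \times \mathrm{DALY}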

  15. Medicare Part D Roulette: Potential Implications of Random Assignment and Plan Restrictions

    PubMed Central

    Patel, Rajul A.; Walberg, Mark P.; Woelfel, Joseph A.; Amaral, Michelle M.; Varu, Paresh

    2013-01-01

    Background Dual-eligible (Medicare/Medicaid) beneficiaries are randomly assigned to a benchmark plan, which provides prescription drug coverage under the Part D benefit without consideration of their prescription drug profile. To date, the potential for beneficiary assignment to a plan with poor formulary coverage has been minimally studied and the resultant financial impact to beneficiaries unknown. Objective We sought to determine cost variability and drug use restrictions under each available 2010 California benchmark plan. Methods Dual-eligible beneficiaries were provided Part D plan assistance during the 2010 annual election period. The Medicare Web site was used to determine benchmark plan costs and prescription utilization restrictions for each of the six California benchmark plans available for random assignment in 2010. A standardized survey was used to record all de-identified beneficiary demographic and plan specific data. For each low-income subsidy-recipient (n = 113), cost, rank, number of non-formulary medications, and prescription utilization restrictions were recorded for each available 2010 California benchmark plan. Formulary matching rates (percent of beneficiary's medications on plan formulary) were calculated for each benchmark plan. Results Auto-assigned beneficiaries had only a 34% chance of being assigned to the lowest cost plan; the remainder faced potentially significant avoidable out-of-pocket costs. Wide variations between benchmark plans were observed for plan cost, formulary coverage, formulary matching rates, and prescription utilization restrictions. Conclusions Beneficiaries had a 66% chance of being assigned to a sub-optimal plan; thereby, they faced significant avoidable out-of-pocket costs. Alternative methods of beneficiary assignment could decrease beneficiary and Medicare costs while also reducing medication non-compliance. PMID:24753963

  16. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  17. Creation of problem-dependent Doppler-broadened cross sections in the KENO Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Shane W. D.; Celik, Cihangir; Maldonado, G. Ivan

    2015-11-06

    In this paper, we introduce a quick method for improving the accuracy of Monte Carlo simulations by generating one- and two-dimensional cross sections at a user-defined temperature before performing transport calculations. A finite difference method is used to Doppler-broaden cross sections to the desired temperature, and unit-base interpolation is done to generate the probability distributions for double differential two-dimensional thermal moderator cross sections at any arbitrarily user-defined temperature. The accuracy of these methods is tested using a variety of contrived problems. In addition, various benchmarks at elevated temperatures are modeled, and results are compared with benchmark results. Lastly, the problem-dependent cross sections are observed to produce eigenvalue estimates that are closer to the benchmark results than those without the problem-dependent cross sections.

  18. A comparison of five benchmarks

    NASA Technical Reports Server (NTRS)

    Huss, Janice E.; Pennline, James A.

    1987-01-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  19. SU-C-202-05: Pilot Study of Online Treatment Evaluation and Adaptive Re-Planning for Laryngeal SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, W; Henry Ford Health System, Detroit, MI; Liu, C

    Purpose: We have initiated a phase I trial of 5-fraction stereotactic body radiotherapy (SBRT) for advanced-stage laryngeal cancer. We conducted this pilot dosimetric study to confirm the potential utility of online adaptive re-planning to preserve treatment quality. Methods: Ten cases of larynx cancer were evaluated. Baseline and daily SBRT treatment plans were generated per trial protocol. Daily volumetric images were acquired prior to every fraction of treatment. Reference simulation CT images were deformably registered to daily volumetric images using Eclipse. Planning contours were then deformably propagated to daily images. Reference SBRT plans were directly copied to calculate delivered dose distributions on deformed reference CT images. An in-house software platform was developed to calculate cumulative dose over a course of treatment in four steps: 1) deforming the delivered dose grid to reference CT images using deformation information exported from Eclipse; 2) generating tetrahedrons using the deformed dose grid as vertices; 3) resampling dose to a high resolution within every tetrahedron; 4) calculating dose-volume histograms. Our in-house software was benchmarked against a commercial software package, Mirada. Results: In all ten cases, comprising 49 fractions of treatment, delivered daily doses were completely evaluated and treatment could be re-planned within 10 minutes. Prescription dose coverage of the PTV was less than intended in 53% of treatment fractions (mean: 94%, range: 84%–98%), while minimum coverage of the CTV and GTV was 94% and 97%, respectively. Maximum bystander point dose limits to arytenoids, parotids, and spinal cord remained respected in all cases, although variances in carotid artery doses were observed in a minority of cases. Conclusion: Although GTV and CTV coverage is preserved by in-room 3D image guidance of larynx SBRT, PTV coverage can vary significantly from intended plans. Online adaptive treatment evaluation and re-planning is potentially necessary, and our procedure is clinically applicable to fully preserve treatment quality. This project is supported by CPRIT Individual Investigator Research Award RP150386.
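
    Step 4 of the pipeline above is straightforward once per-voxel doses exist; a minimal cumulative-DVH sketch on invented structure doses:

        import numpy as np

        # Cumulative DVH: fraction of structure volume receiving at least each dose.
        def cumulative_dvh(doses_gy, bin_gy=0.1):
            edges = np.arange(0.0, doses_gy.max() + bin_gy, bin_gy)
            frac = np.array([(doses_gy >= e).mean() for e in edges])
            return edges, frac

        doses = np.random.default_rng(1).normal(20.0, 1.5, size=50_000).clip(0)
        edges, frac = cumulative_dvh(doses)
        d95 = edges[np.searchsorted(-frac, -0.95)]   # dose covering 95% of the volume
        print(f"D95 = {d95:.1f} Gy")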

  20. MO-F-16A-06: Implementation of a Radiation Exposure Monitoring System for Surveillance of Multi-Modality Radiation Dose Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, B; Kanal, K; Dickinson, R

    2014-06-15

    Purpose: We have implemented a commercially available Radiation Exposure Monitoring System (REMS) to enhance the processes of radiation dose data collection, analysis and alerting developed over the past decade at our sites of practice. REMS allows for consolidation of multiple radiation dose information sources and quicker alerting than previously developed processes. Methods: Thirty-nine x-ray producing imaging modalities were interfaced with the REMS: thirteen computed tomography scanners, sixteen angiography/interventional systems, nine digital radiography systems and one mammography system. A number of methodologies were used to provide dose data to the REMS: Modality Performed Procedure Step (MPPS) messages, DICOM Radiation Dose Structured Reports (RDSR), and DICOM header information. Once interfaced, the dosimetry information from each device underwent validation (first 15–20 exams) before release for viewing by end-users: physicians, medical physicists, technologists and administrators. Results: Before REMS, our diagnostic physics group pulled dosimetry data from seven disparate databases throughout the radiology, radiation oncology, cardiology, electrophysiology, anesthesiology/pain management and vascular surgery departments at two major medical centers and four associated outpatient clinics. With the REMS implementation, we now have one authoritative source of dose information for alerting, longitudinal analysis, dashboard/graphics generation and benchmarking. REMS provides immediate automatic dose alerts utilizing thresholds calculated through daily statistical analysis. This has streamlined our Closing the Loop process for estimated skin exposures in excess of our institutional specific substantial radiation dose level which relied on technologist notification of the diagnostic physics group and daily report from the radiology information system (RIS). REMS also automatically calculates the CT size-specific dose estimate (SSDE) as well as provides two-dimensional angulation dose maps for angiography/interventional procedures. Conclusion: REMS implementation has streamlined and consolidated the dosimetry data collection and analysis process at our institutions while eliminating manual entry error and providing immediate alerting and access to dosimetry data to both physicists and physicians. Brent Stewart has funded research through GE Healthcare.
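
    As a rough illustration of the SSDE calculation REMS automates, in the spirit of AAPM Report 204: CTDIvol is scaled by a factor that depends on effective patient diameter. The exponential fit and its coefficients below are an assumed approximation of the published 32 cm phantom table, not vendor code:

        import math

        # Size-specific dose estimate from CTDIvol and patient AP/LAT dimensions.
        def ssde(ctdi_vol_mgy, ap_cm, lat_cm):
            d_eff = math.sqrt(ap_cm * lat_cm)               # effective diameter, cm
            f = 3.704369 * math.exp(-0.03671937 * d_eff)    # assumed 32 cm phantom fit
            return f * ctdi_vol_mgy

        print(f"SSDE = {ssde(12.0, ap_cm=22.0, lat_cm=33.0):.1f} mGy")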

  1. Initial characterization, dosimetric benchmark and performance validation of Dynamic Wave Arc.

    PubMed

    Burghelea, Manuela; Verellen, Dirk; Poels, Kenneth; Hung, Cecilia; Nakamura, Mitsuhiro; Dhont, Jennifer; Gevaert, Thierry; Van den Begin, Robbe; Collen, Christine; Matsuo, Yukinori; Kishi, Takahiro; Simon, Viorica; Hiraoka, Masahiro; de Ridder, Mark

    2016-04-29

    Dynamic Wave Arc (DWA) is a clinical approach designed to maximize the versatility of the Vero SBRT system by synchronizing the gantry-ring noncoplanar movement with D-MLC optimization. The purpose of this study was to verify the delivery accuracy of the DWA approach and to evaluate the potential dosimetric benefits. DWA is an extended form of VMAT with a continuously varying ring position. The main difference between the optimization modules of VMAT and DWA is in the angular spacing, where the DWA algorithm does not consider the gantry spacing alone, but the Euclidean norm of the ring and gantry angles. A preclinical version of RayStation v4.6 (RaySearch Laboratories, Sweden) was used to create patient-specific wave arc trajectories for 31 patients with various anatomical tumor regions (prostate, oligometastatic cases, centrally located non-small cell lung cancer (NSCLC), and locally advanced pancreatic cancer (LAPC)). DWA was benchmarked against the current clinical approaches and coplanar VMAT. Each plan was evaluated with regard to dose distribution, modulation complexity (MCS), monitor units, and treatment time efficiency. The delivery accuracy was evaluated using a 2D diode array that takes into consideration the multi-dimensionality of DWA during dose reconstruction. In centrally located NSCLC cases, DWA improved the low-dose spillage by 20%, while target coverage increased by 17% compared to 3D CRT. The structures that benefited significantly from DWA were the proximal bronchus and esophagus, with the maximal dose being reduced by 17% and 24%, respectively. For prostate and LAPC, neither technique seemed clearly superior to the other; however, DWA reduced delivery time by more than 65% compared to IMRT. A steeper dose gradient outside the target was observed for all treatment sites (p < 0.01) with DWA. Except for the oligometastatic cases, where the DWA MCS values indicate higher modulation, the DWA and VMAT modalities provide plans of similar complexity. The average γ (3%/3 mm) passing rate for DWA plans was 99.2 ± 1% (range 96.8 to 100%). DWA proved to be a fully functional treatment technique, allowing additional flexibility in dose shaping while preserving dosimetrically robust delivery and treatment times comparable to coplanar VMAT.

  2. Dose and Effect Thresholds for Early Key Events in a Mode of ...

    EPA Pesticide Factsheets

    ABSTRACT Strategies for predicting adverse health outcomes of environmental chemicals are centered on early key events in toxicity pathways. However, quantitative relationships between early molecular changes in a given pathway and later health effects are often poorly defined. The goal of this study was to evaluate short-term key event indicators using qualitative and quantitative methods in an established pathway of mouse liver tumorigenesis mediated by peroxisome proliferator-activated receptor-alpha (PPARα). Male B6C3F1 mice were exposed for 7 days to di(2-ethylhexyl) phthalate (DEHP), di-n-octyl phthalate (DNOP), and n-butyl benzyl phthalate (BBP), which vary in PPARα activity and liver tumorigenicity. Each phthalate increased expression of select PPARα target genes at 7 days, while only DEHP significantly increased liver cell proliferation labeling index (LI). Transcriptional benchmark dose (BMDT) estimates for dose-related genomic markers stratified phthalates according to hypothetical tumorigenic potencies, unlike BMDs for non-genomic endpoints (liver weights or proliferation). The 7-day BMDT values for Acot1 as a surrogate measure for PPARα activation were 29, 370, and 676 mg/kg-d for DEHP, DNOP, and BBP, respectively, distinguishing DEHP (liver tumor BMD of 35 mg/kg-d) from non-tumorigenic DNOP and BBP. Effect thresholds were generated using linear regression of DEHP effects at 7 days and 2-year tumor incidence values to anchor early response molec
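
    For readers new to the benchmark-dose machinery invoked here, a generic quantal BMD sketch (not the study's software): fit a logistic dose-response by maximum likelihood and invert it at 10% extra risk. Doses and responses below are invented:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import binom

        doses = np.array([0.0, 30.0, 100.0, 300.0])    # mg/kg-day (invented)
        n = np.array([50, 50, 50, 50])                 # animals per dose group
        events = np.array([1, 4, 12, 30])              # responders per group

        def nll(theta):                                # negative log-likelihood
            a, b = theta
            p = 1.0 / (1.0 + np.exp(-(a + b * doses)))
            p = np.clip(p, 1e-12, 1 - 1e-12)           # guard the extremes
            return -binom.logpmf(events, n, p).sum()

        a, b = minimize(nll, x0=(-3.0, 0.01), method="Nelder-Mead").x
        p0 = 1.0 / (1.0 + np.exp(-a))
        p_bmr = p0 + 0.10 * (1.0 - p0)                 # 10% extra risk over background
        bmd10 = (np.log(p_bmr / (1.0 - p_bmr)) - a) / b
        print(f"BMD10 ~ {bmd10:.0f} mg/kg-day")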

  3. Monte Carlo simulations for angular and spatial distributions in therapeutic-energy proton beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Pan, C. Y.; Chiang, K. J.; Yuan, M. C.; Chu, C. H.; Tsai, Y. W.; Teng, P. K.; Lin, C. H.; Chao, T. C.; Lee, C. C.; Tung, C. J.; Chen, A. E.

    2017-11-01

    The purpose of this study is to compare the angular and spatial distributions of therapeutic-energy proton beams obtained from the FLUKA, GEANT4 and MCNP6 Monte Carlo codes. The Monte Carlo simulations of proton beams passing through two thin targets and a water phantom were investigated to compare the primary and secondary proton fluence distributions and dosimetric differences among these codes. The angular fluence distributions, central axis depth-dose profiles, and lateral distributions of the Bragg peak cross-field were calculated to compare the proton angular and spatial distributions and energy deposition. Benchmark verifications from three different Monte Carlo simulations could be used to evaluate the residual proton fluence for the mean range and to estimate the depth and lateral dose distributions and the characteristic depths and lengths along the central axis as the physical indices corresponding to the evaluation of treatment effectiveness. The results showed a general agreement among codes, except that some deviations were found in the penumbra region. These calculated results are also particularly helpful for understanding primary and secondary proton components for stray radiation calculation and reference proton standard determination, as well as for determining lateral dose distribution performance in proton small-field dosimetry. By demonstrating these calculations, this work could serve as a guide to the recent field of Monte Carlo methods for therapeutic-energy protons.

  4. Interim methods for development of inhalation reference concentrations. Draft report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blackburn, K.; Dourson, M.; Erdreich, L.

    1990-08-01

    An inhalation reference concentration (RfC) is an estimate of continuous inhalation exposure over a human lifetime that is unlikely to pose significant risk of adverse noncancer health effects and serves as a benchmark value for assisting in risk management decisions. Derivation of an RfC involves dose-response assessment of animal data to determine the exposure levels at which no significant increase in the frequency or severity of adverse effects exists between the exposed population and its appropriate control. The assessment requires an interspecies dose extrapolation from a no-observed-adverse-effect level (NOAEL) exposure concentration in an animal to a human equivalent NOAEL (NOAEL(HBC)). The RfC is derived from the NOAEL(HBC) by the application of uncertainty factors, generally of order-of-magnitude size. Intermittent exposure scenarios in animals are extrapolated to chronic continuous human exposures. Relationships between external exposures and internal doses depend upon complex simultaneous and consecutive processes of absorption, distribution, metabolism, storage, detoxification, and elimination. To estimate NOAEL(HBC)s when chemical-specific physiologically based pharmacokinetic (PB-PK) models are not available, a dosimetric extrapolation procedure based on anatomical and physiological parameters of the exposed human and animal and the physical parameters of the toxic chemical has been developed, which gives equivalent or more conservative exposure concentration values than those that would be obtained with a PB-PK model.
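
    The outlined derivation reduces to simple arithmetic; a sketch with invented numbers (the duration adjustment and factor values are illustrative assumptions, not from this report):

        # Duration-adjust an intermittent animal NOAEL, convert to a human
        # equivalent concentration, then divide by uncertainty factors.
        noael = 50.0                              # mg/m3, animal exposure level
        noael_adj = noael * (6 / 24) * (5 / 7)    # 6 h/day, 5 days/week -> continuous
        dosimetric_factor = 1.2                   # assumed animal-to-human adjustment
        noael_hec = noael_adj * dosimetric_factor
        uf = 10 * 10                              # interspecies x intraspecies
        print(f"RfC = {noael_hec / uf:.3f} mg/m3")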

  5. Evaluation of metrics for benchmarking antimicrobial use in the UK dairy industry.

    PubMed

    Mills, Harriet L; Turner, Andrea; Morgans, Lisa; Massey, Jonathan; Schubert, Hannah; Rees, Gwen; Barrett, David; Dowsey, Andrew; Reyher, Kristen K

    2018-03-31

    The issue of antimicrobial resistance is of global concern across human and animal health. In 2016, the UK government committed to new targets for reducing antimicrobial use (AMU) in livestock. Although a number of metrics for quantifying AMU are defined in the literature, all give slightly different interpretations. This paper evaluates a selection of metrics for AMU in the dairy industry: total mg, total mg/kg, daily dose and daily course metrics. Although the focus is on their application to the dairy industry, the metrics and issues discussed are relevant across livestock sectors. In order to be used widely, a metric should be understandable and relevant to the veterinarians and farmers who are prescribing and using antimicrobials. This means that clear methods, assumptions (and possible biases), standardised values and exceptions should be published for all metrics. Particularly relevant are assumptions around the number and weight of cattle at risk of treatment and definitions of dose rates and course lengths; incorrect assumptions can mean metrics over-represent or under-represent AMU. The authors recommend that the UK dairy industry work towards the UK-specific metrics using the UK-specific medicine dose and course regimens as well as cattle weights in order to monitor trends nationally. © British Veterinary Association (unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
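
    A toy computation of two of the metrics above (all values invented); the assumed standard weight is exactly the kind of assumption the authors say must be published alongside a metric:

        # (product, number of treatments, mg of active ingredient per treatment)
        treatments = [("intramammary_tube_a", 3, 2600.0),
                      ("injectable_b", 10, 500.0)]
        total_mg = sum(count * mg for _, count, mg in treatments)

        n_cattle, std_weight_kg = 120, 425.0      # herd at risk, assumed adult weight
        mg_per_kg = total_mg / (n_cattle * std_weight_kg)
        print(f"total mg = {total_mg:.0f}, mg/kg = {mg_per_kg:.3f}")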

  6. The philosophy of benchmark testing a standards-based picture archiving and communications system.

    PubMed

    Richardson, N E; Thomas, J A; Lyche, D K; Romlein, J; Norton, G S; Dolecek, Q E

    1999-05-01

    The Department of Defense issued its requirements for a Digital Imaging Network-Picture Archiving and Communications System (DIN-PACS) in a Request for Proposals (RFP) to industry in January 1997, with subsequent contracts being awarded in November 1997 to the Agfa Division of Bayer and IBM Global Government Industry. The Government's technical evaluation process consisted of evaluating a written technical proposal as well as conducting a benchmark test of each proposed system at the vendor's test facility. The purpose of benchmark testing was to evaluate the performance of the fully integrated system in a simulated operational environment. The benchmark test procedures and test equipment were developed through a joint effort between the Government, academic institutions, and private consultants. Herein the authors discuss the resources required and the methods used to benchmark test a standards-based PACS.

  7. Toward multimodal signal detection of adverse drug reactions.

    PubMed

    Harpaz, Rave; DuMouchel, William; Schuemie, Martijn; Bodenreider, Olivier; Friedman, Carol; Horvitz, Eric; Ripple, Anna; Sorbello, Alfred; White, Ryen W; Winnenburg, Rainer; Shah, Nigam H

    2017-12-01

    Improving mechanisms to detect adverse drug reactions (ADRs) is key to strengthening post-marketing drug safety surveillance. Signal detection is presently unimodal, relying on a single information source. Multimodal signal detection is based on jointly analyzing multiple information sources. Building on and expanding the work done in prior studies, the aim of this article is to further research on multimodal signal detection, explore its potential benefits, and propose methods for its construction and evaluation. Four data sources are investigated: FDA's adverse event reporting system, insurance claims, the MEDLINE citation database, and the logs of major Web search engines. Published methods are used to generate and combine signals from each data source. Two distinct reference benchmarks, corresponding to well-established and recently labeled ADRs respectively, are used to evaluate the performance of multimodal signal detection in terms of area under the ROC curve (AUC) and lead time to detection, the latter relative to labeling revision dates. Limited to our reference benchmarks, multimodal signal detection provides AUC improvements ranging from 0.04 to 0.09 based on a widely used evaluation benchmark, and a comparative added lead time of 7-22 months relative to labeling revision dates from a time-indexed benchmark. The results support the notion that utilizing and jointly analyzing multiple data sources may lead to improved signal detection. Given certain data and benchmark limitations, the early stage of development, and the complexity of ADRs, it is currently not possible to make definitive statements about the ultimate utility of the concept. Continued development of multimodal signal detection requires a deeper understanding of the data sources used, additional benchmarks, and further research on methods to generate and synthesize signals. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. The demographic impact and development benefits of meeting demand for family planning with modern contraceptive methods.

    PubMed

    Goodkind, Daniel; Lollock, Lisa; Choi, Yoonjoung; McDevitt, Thomas; West, Loraine

    2018-01-01

    Meeting demand for family planning can facilitate progress towards all major themes of the United Nations Sustainable Development Goals (SDGs): people, planet, prosperity, peace, and partnership. Many policymakers have embraced a benchmark goal that at least 75% of the demand for family planning in all countries be satisfied with modern contraceptive methods by the year 2030. This study examines the demographic impact (and development implications) of achieving the 75% benchmark in 13 developing countries that are expected to be the furthest from achieving that benchmark. Estimation of the demographic impact of achieving the 75% benchmark requires three steps in each country: 1) translate contraceptive prevalence assumptions (with and without intervention) into future fertility levels based on biometric models, 2) incorporate each pair of fertility assumptions into separate population projections, and 3) compare the demographic differences between the two population projections. Data are drawn from the United Nations, the US Census Bureau, and Demographic and Health Surveys. The demographic impact of meeting the 75% benchmark is examined via projected differences in fertility rates (average expected births per woman's reproductive lifetime), total population, growth rates, age structure, and youth dependency. On average, meeting the benchmark would imply a 16 percentage point increase in modern contraceptive prevalence by 2030 and a 20% decline in youth dependency, which portends a potential demographic dividend to spur economic growth. Improvements in meeting the demand for family planning with modern contraceptive methods can bring substantial benefits to developing countries. To our knowledge, this is the first study to show formally how such improvements can alter population size and age structure. Declines in youth dependency portend a demographic dividend, an added bonus to the already well-known benefits of meeting existing demands for family planning.

  9. An automated benchmarking platform for MHC class II binding prediction methods.

    PubMed

    Andreatta, Massimo; Trolle, Thomas; Yan, Zhen; Greenbaum, Jason A; Peters, Bjoern; Nielsen, Morten

    2018-05-01

    Computational methods for the prediction of peptide-MHC binding have become an integral and essential component of candidate selection in experimental T cell epitope discovery studies. The sheer number of published prediction methods, and the often discordant reports on their performance, pose a considerable quandary to the experimentalist who needs to choose the best tool for their research. With the goal of providing an unbiased, transparent evaluation of the state-of-the-art in the field, we created an automated platform to benchmark peptide-MHC class II binding prediction tools. The platform evaluates the absolute and relative predictive performance of all participating tools on data newly entered into the Immune Epitope Database (IEDB) before they are made public, thereby providing a frequent, unbiased assessment of available prediction tools. The benchmark runs on a weekly basis, is fully automated, and displays up-to-date results on a publicly accessible website. The initial benchmark described here included six commonly used prediction servers, but other tools are encouraged to join via a simple sign-up procedure. Performance evaluation on 59 data sets composed of over 10 000 binding affinity measurements suggested that NetMHCIIpan is currently the most accurate tool, followed by NN-align and the IEDB consensus method. Weekly reports on the participating methods can be found online at: http://tools.iedb.org/auto_bench/mhcii/weekly/. Contact: mniel@bioinformatics.dtu.dk. Supplementary data are available at Bioinformatics online.

  10. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    EPA Pesticide Factsheets

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.

  11. Paradoxical ventilator associated pneumonia incidences among selective digestive decontamination studies versus other studies of mechanically ventilated patients: benchmarking the evidence base

    PubMed Central

    2011-01-01

    Introduction: Selective digestive decontamination (SDD) appears to have a more compelling evidence base than non-antimicrobial methods for the prevention of ventilator-associated pneumonia (VAP). However, the striking variability in the ventilator-associated pneumonia incidence proportion (VAP-IP) among the SDD studies remains unexplained, and a postulated contextual effect remains untested. Methods: Nine reviews were used to source 45 observational (benchmark) groups and 137 component (control and intervention) groups from studies of SDD and studies of three non-antimicrobial methods of VAP prevention. The logit VAP-IP data were summarized by meta-analysis using random effects methods and the associated heterogeneity (tau²) was measured. As group-level predictors of logit VAP-IP, the mode of VAP diagnosis, the proportion of trauma admissions, the proportion receiving prolonged ventilation and the intervention method under study were examined in meta-regression models containing the benchmark groups together with either the control (models 1 to 3) or intervention (models 4 to 6) groups of the prevention studies. Results: The VAP-IP benchmark derived here is 22.1% (95% confidence interval [CI] 19.2 to 25.5; tau² 0.34), whereas the mean VAP-IP of control groups from studies of SDD and of non-antimicrobial methods is 35.7 (29.7 to 41.8; tau² 0.63) versus 20.4 (17.2 to 24.0; tau² 0.41), respectively (P < 0.001). The disparity between the benchmark groups and the control groups of the SDD studies, which was most apparent for the highest-quality studies, could not be explained in the meta-regression models after adjusting for various group-level factors. The mean VAP-IP (95% CI) of intervention groups is 16.0 (12.6 to 20.3; tau² 0.59) versus 17.1 (14.2 to 20.3; tau² 0.35) for SDD studies versus studies of non-antimicrobial methods, respectively. Conclusions: The VAP-IP among the intervention groups within the SDD evidence base is less variable and more similar to the benchmark than among the control groups. These paradoxical observations cannot readily be explained. The interpretation of the SDD evidence base cannot proceed without further consideration of this contextual effect. PMID:21214897
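
    The random-effects summary statistics quoted above (a pooled proportion with a tau² heterogeneity estimate) can be reproduced in outline with a standard DerSimonian-Laird calculation; the group-level data below are invented for illustration and are not the study's.

    ```python
    # Illustrative sketch (not the authors' code): DerSimonian-Laird random-effects
    # pooling of logit-transformed VAP incidence proportions on invented data.
    import numpy as np

    events = np.array([12, 25, 18, 30, 9])    # hypothetical VAP cases per group
    totals = np.array([60, 80, 70, 90, 50])   # hypothetical patients per group

    p = events / totals
    y = np.log(p / (1 - p))                   # logit incidence proportions
    v = 1 / events + 1 / (totals - events)    # approximate within-group variances

    # DerSimonian-Laird estimate of between-group heterogeneity tau^2
    w = 1 / v
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

    # Random-effects pooled logit, back-transformed to a summary proportion
    w_star = 1 / (v + tau2)
    y_pool = np.sum(w_star * y) / w_star.sum()
    se = np.sqrt(1 / w_star.sum())
    lo, hi = y_pool - 1.96 * se, y_pool + 1.96 * se
    expit = lambda x: 1 / (1 + np.exp(-x))
    print(f"tau^2 = {tau2:.2f}; pooled VAP-IP = {expit(y_pool):.1%} "
          f"(95% CI {expit(lo):.1%} to {expit(hi):.1%})")
    ```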

  12. multiDE: a dimension reduced model based statistical method for differential expression analysis using RNA-sequencing data with multiple treatment conditions.

    PubMed

    Kang, Guangliang; Du, Li; Zhang, Hong

    2016-06-22

    The growing complexity of biological experiment design based on high-throughput RNA sequencing (RNA-seq) is calling for more accommodative statistical tools. We focus on differential expression (DE) analysis using RNA-seq data in the presence of multiple treatment conditions. We propose a novel method, multiDE, for facilitating DE analysis using RNA-seq read count data with multiple treatment conditions. The read count is assumed to follow a log-linear model incorporating two factors (i.e., condition and gene), where an interaction term is used to quantify the association between gene and condition. The number of degrees of freedom is reduced to one through a first-order decomposition of the interaction, leading to a dramatic power improvement in testing DE genes when the number of conditions is greater than two. In our simulations, multiDE outperformed the benchmark methods (i.e., edgeR and DESeq2) even when the underlying model was severely misspecified, and the power gain increased with the number of conditions. In the application to two real datasets, multiDE identified more biologically meaningful DE genes than the benchmark methods. An R package implementing multiDE is available publicly at http://homepage.fudan.edu.cn/zhangh/softwares/multiDE . When the number of conditions is two, multiDE performs comparably with the benchmark methods. When the number of conditions is greater than two, multiDE outperforms the benchmark methods.
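
    The abstract does not state the model explicitly; one plausible reading of the rank-one ("first order") interaction decomposition, in notation we are assuming purely for illustration, is the following.

    ```latex
    % Assumed notation, not taken from the paper: \mu_{gc} is the expected
    % read count for gene g under condition c.
    \[
      \log \mu_{gc} \;=\; \alpha_g + \beta_c + \gamma_{gc},
      \qquad \gamma_{gc} \approx \delta_g\, u_c ,
    \]
    % so the DE test for gene g across C conditions reduces from
    %   H_0 : \gamma_{g1} = \cdots = \gamma_{gC} = 0   (C - 1 degrees of freedom)
    % to the single-parameter hypothesis
    %   H_0 : \delta_g = 0                             (1 degree of freedom).
    ```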

  13. [Using fractional polynomials to estimate the safety threshold of fluoride in drinking water].

    PubMed

    Pan, Shenling; An, Wei; Li, Hongyan; Yang, Min

    2014-01-01

    To study the dose-response relationship between fluoride content in drinking water and the prevalence of dental fluorosis on a national scale, and thereby determine the safety threshold of fluoride in drinking water. Meta-regression analysis was applied to the 2001-2002 national endemic fluorosis survey data of key wards. First, a fractional polynomial (FP) was adopted to establish a fixed-effect model and determine the best FP structure; restricted maximum likelihood (REML) was then adopted to estimate the between-study variance, and the best random-effect model was established. The best FP structure was a first-order logarithmic transformation. Based on the best random-effect model, the benchmark dose (BMD) of fluoride in drinking water and its lower limit (BMDL) were calculated as 0.98 mg/L and 0.78 mg/L, respectively. Fluoride in drinking water explained only 35.8% of the variability in prevalence; among the other influencing factors, ward type was significant, while temperature condition and altitude were not. The fractional polynomial-based meta-regression method is simple and practical and provides a good fit; based on it, the safety threshold of fluoride in drinking water of our country is determined to be 0.8 mg/L.
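
    As a worked illustration of the BMD/BMDL step, the sketch below inverts an assumed first-order log-dose logit model at a benchmark response; the coefficients, standard error, and benchmark response are invented and are not the paper's estimates.

    ```python
    # Hedged sketch: back-calculating a BMD and a crude BMDL from a fitted
    # first-order log-dose model. All numbers are invented for illustration.
    import numpy as np

    b0, b1 = -2.2, 1.5      # hypothetical fit: logit(prevalence) = b0 + b1*ln(dose)
    se_b0 = 0.35            # hypothetical standard error of the intercept
    bmr = 0.10              # assumed benchmark response: 10% fluorosis prevalence

    logit = lambda p: np.log(p / (1 - p))

    # Invert the dose-response model at the benchmark response
    bmd = np.exp((logit(bmr) - b0) / b1)

    # Crude one-sided 95% lower bound, shifting the intercept by 1.645 SE
    # (a real analysis would profile the full covariance of the fit)
    bmdl = np.exp((logit(bmr) - (b0 + 1.645 * se_b0)) / b1)

    print(f"BMD  = {bmd:.2f} mg/L fluoride")
    print(f"BMDL = {bmdl:.2f} mg/L fluoride")
    ```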

  14. A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger

    2017-09-01

    Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. The ITER Organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', but for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and the strong neutron flux attenuation, ranging from 10¹⁴ down to 10⁸ n·cm⁻²·s⁻¹. Such a code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving confidence in the neutronics results.

  15. A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams (Final Report)

    EPA Science Inventory

    EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate g...

  16. Testing and Benchmarking a 2014 GM Silverado 6L80 Six Speed Automatic Transmission

    EPA Science Inventory

    This document describes the method and test results of EPA's partial transmission benchmarking process, which involves installing both the engine and transmission in an engine dynamometer test cell with the engine wire harness tethered to its vehicle parked outside the test cell.

  17. Benchmarking initiatives in the water industry.

    PubMed

    Parena, R; Smeets, E

    2001-01-01

    Customer satisfaction and service care are pushing professionals in the water industry, every day, to seek to improve their performance by lowering costs and increasing the level of service provided. Process Benchmarking is generally recognised as a systematic mechanism of comparing one's own utility with other utilities or businesses with the intent of self-improvement by adopting structures or methods used elsewhere. The IWA Task Force on Benchmarking, operating inside the Statistics and Economics Committee, has been committed to developing a generally accepted concept of Process Benchmarking to support water decision-makers in addressing issues of efficiency. As a first step, the Task Force disseminated among the Committee members a questionnaire eliciting suggestions about the kinds, degree of evolution and main concepts of benchmarking adopted in the countries represented. A comparison of the guidelines adopted in The Netherlands and Scandinavia has recently challenged the Task Force to draft a methodology for worldwide process benchmarking in the water industry. The paper provides a framework of the most interesting benchmarking experiences in the water sector and describes in detail both the final results of the survey and the methodology, focused on the identification of possible improvement areas.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Jeffrey F.

    This paper briefly reviews the evolution of brachytherapy dosimetry from 1900 to the present. Dosimetric practices in brachytherapy fall into three distinct eras: During the era of biological dosimetry (1900-1938), radium pioneers could only specify Ra-226 and Rn-222 implants in terms of the mass of radium encapsulated within the implanted sources. Due to the high energy of its emitted gamma rays and the long range of its secondary electrons in air, free-air chambers could not be used to quantify the output of Ra-226 sources in terms of exposure. Biological dosimetry, most prominently the threshold erythema dose, gained currency as a means of intercomparing radium treatments with exposure-calibrated orthovoltage x-ray units. The classical dosimetry era (1940-1980) began with successful exposure standardization of Ra-226 sources by Bragg-Gray cavity chambers. Classical dose-computation algorithms, based upon 1-D buildup factor measurements and point-source superposition computational algorithms, were able to accommodate artificial radionuclides such as Co-60, Ir-192, and Cs-137. The quantitative dosimetry era (1980-present) arose in response to the increasing utilization of low-energy K-capture radionuclides such as I-125 and Pd-103, for which classical approaches could not be expected to estimate accurate doses. This led to intensive development of both experimental (largely TLD-100 dosimetry) and Monte Carlo dosimetry techniques, along with more accurate air-kerma strength standards. As a result of extensive benchmarking and intercomparison of these different methods, single-seed low-energy radionuclide dose distributions are now known with a total uncertainty of 3%-5%.

  19. SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.

    PubMed

    Liu, T; Ding, A; Xu, X

    2012-06-01

    To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used, containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the MC GPU code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J; Hu, W; Chen, X

    Purpose: The aim of this study is to investigate the feasibility of using RapidPlan for breast cancer radiotherapy and to evaluate its performance for planners with different levels of planning experience. Methods: A training database was collected with 80 expert plan datasets from patients who previously received left breast-conserving surgery and IMRT with simultaneously integrated boost radiotherapy. The models were created in RapidPlan. Five patients from the training database and 5 external patients were used for internal and external validation, respectively. Three planners with different levels of planning experience (beginner, junior, senior) designed manual and RapidPlan-based plans for an additional ten patients, and the qualities of the manual and RapidPlan-based plans were compared. Results: For the internal and external validations, there were no significant dose differences in target coverage between RapidPlan and manual plans. Also, no difference was found in the mean doses to the contralateral breast and lung. RapidPlan improved heart (V5, V10, V20, V30, and mean dose) and ipsilateral lung (V5, V10, V20, V30, and mean dose) sparing for the beginner and junior planners. Compared to the plans from the senior planner, 6 out of 16 clinically checked parameters were improved in RapidPlan, and the remaining parameters were similar. Conclusion: It is feasible to generate clinically acceptable plans using RapidPlan for breast cancer radiotherapy. RapidPlan helps to systematically improve the quality of IMRT plans against the benchmark of clinically accepted plans, and shows promise for homogenizing plan quality by transferring planning expertise from more experienced to less experienced planners.

  1. Reliable B Cell Epitope Predictions: Impacts of Method Development and Improved Benchmarking

    PubMed Central

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2012-01-01

    The interaction between antibodies and antigens is one of the most important immune system mechanisms for clearing infectious organisms from the host. Antibodies bind to antigens at sites referred to as B-cell epitopes. Identification of the exact location of B-cell epitopes is essential in several biomedical applications such as rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping of B-cell epitopes has been moderate. Several issues regarding the evaluation data sets may, however, have led to the performance values being underestimated: rarely have all potential epitopes on an antigen been mapped, and antibodies are generally raised against the antigen in a given biological context, not against the antigen monomer. Improper handling of these aspects leads to many artificial false positive predictions and hence to incorrectly low performance values. To demonstrate the impact of proper benchmark definitions, we here present an updated version of the DiscoTope method incorporating a novel spatial neighborhood definition and half-sphere exposure as surface measure. Compared to other state-of-the-art prediction methods, DiscoTope-2.0 displayed improved performance both in cross-validation and in independent evaluations. Using DiscoTope-2.0, we assessed the impact on performance when using proper benchmark definitions. For 13 proteins in the training data set where sufficient biological information was available to make a proper benchmark redefinition, the average AUC performance was improved from 0.791 to 0.824. Similarly, the average AUC performance on an independent evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that, given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performances, suggesting these tools to be a powerful asset in rational epitope discovery. The updated version of DiscoTope is available at www.cbs.dtu.dk/services/DiscoTope-2.0. PMID:23300419

  2. Comparison of normal tissue dose calculation methods for epidemiological studies of radiotherapy patients.

    PubMed

    Mille, Matthew M; Jung, Jae Won; Lee, Choonik; Kuzmin, Gleb A; Lee, Choonsik

    2018-06-01

    Radiation dosimetry is an essential input for epidemiological studies of radiotherapy patients aimed at quantifying the dose-response relationship of late-term morbidity and mortality. Individualised organ dose must be estimated for all tissues of interest located in-field, near-field, or out-of-field. Whereas conventional measurement approaches are limited to points in water or anthropomorphic phantoms, computational approaches using patient images or human phantoms offer greater flexibility and can provide more detailed three-dimensional dose information. In the current study, we systematically compared four different dose calculation algorithms so that dosimetrists and epidemiologists can better understand the advantages and limitations of the various approaches at their disposal. The four dose calculation algorithms considered were as follows: the (1) Analytical Anisotropic Algorithm (AAA) and (2) Acuros XB algorithm (Acuros XB), as implemented in the Eclipse treatment planning system (TPS); (3) a Monte Carlo radiation transport code, EGSnrc; and (4) an accelerated Monte Carlo code, the x-ray Voxel Monte Carlo (XVMC). The four algorithms were compared in terms of their accuracy and appropriateness in the context of dose reconstruction for epidemiological investigations. Accuracy in peripheral dose was evaluated first by benchmarking the calculated dose profiles against measurements in a homogeneous water phantom. Additional simulations in a heterogeneous cylinder phantom evaluated the performance of the algorithms in the presence of tissue heterogeneity. In general, we found that the algorithms contained within the commercial TPS (AAA and Acuros XB) were fast and accurate in-field or near-field, but not acceptable out-of-field. Therefore, the TPS is best suited for epidemiological studies involving large cohorts and where the organs of interest are located in-field or partially in-field. The EGSnrc and XVMC codes showed excellent agreement with measurements both in-field and out-of-field. The EGSnrc code was the most accurate dosimetry approach, but was too slow to be used for large-scale epidemiological cohorts. The XVMC code showed similar accuracy to EGSnrc, but was significantly faster, and thus epidemiological applications seem feasible, especially when the organs of interest reside far away from the field edge.

  3. Demonstration of a software design and statistical analysis methodology with application to patient outcomes data sets.

    PubMed

    Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard

    2013-11-01

    With the emergence of clinical outcomes databases as tools utilized routinely within institutions comes the need for software tools to support automated statistical analysis of these large data sets and interinstitutional exchange between independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code were evaluated using benchmark data sets. The approach provides data needed to evaluate combinations of statistical measurements for their ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operating characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. The work demonstrates the viability of the design approach and the software tool for analysis of large data sets.
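
    A minimal sketch of that filtering pipeline, assuming a per-patient dose and a binary outcome: a threshold is chosen from an empirical ROC curve (here via the Youden index) and the dose-response signal is then probed with Fisher exact, Welch t, and Kolmogorov-Smirnov tests. The data and decision rules are illustrative assumptions, not the paper's implementation.

    ```python
    # Illustrative sketch of ROC-threshold selection plus statistical filtering.
    import numpy as np
    from scipy.stats import fisher_exact, ttest_ind, ks_2samp

    rng = np.random.default_rng(1)
    dose = rng.uniform(0, 60, 300)                    # Gy, simulated
    event = (rng.random(300) < 1 / (1 + np.exp(-(dose - 30) / 5))).astype(int)

    # Empirical ROC: pick the threshold maximizing sensitivity + specificity - 1
    def sens_spec(t):
        pred = dose >= t
        sens = (pred & (event == 1)).sum() / max((event == 1).sum(), 1)
        spec = (~pred & (event == 0)).sum() / max((event == 0).sum(), 1)
        return sens, spec

    thresholds = np.unique(dose)
    youden = [sum(sens_spec(t)) - 1 for t in thresholds]
    t_star = thresholds[int(np.argmax(youden))]

    # Tests for dose-response around the selected threshold
    above, below = event[dose >= t_star], event[dose < t_star]
    table = [[(above == 1).sum(), (above == 0).sum()],
             [(below == 1).sum(), (below == 0).sum()]]
    _, p_fisher = fisher_exact(table)
    _, p_welch = ttest_ind(dose[event == 1], dose[event == 0], equal_var=False)
    _, p_ks = ks_2samp(dose[event == 1], dose[event == 0])
    print(f"threshold ~ {t_star:.1f} Gy; Fisher p={p_fisher:.3g}, "
          f"Welch p={p_welch:.3g}, KS p={p_ks:.3g}")
    ```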

  4. Key Performance Indicators in the Evaluation of the Quality of Radiation Safety Programs.

    PubMed

    Schultz, Cheryl Culver; Shaffer, Sheila; Fink-Bennett, Darlene; Winokur, Kay

    2016-08-01

    Beaumont is a multi-hospital health care system with a centralized radiation safety department. The health system operates under a broad-scope Nuclear Regulatory Commission (NRC) license but also maintains several other limited-use NRC licenses in off-site facilities and clinics. The hospital-based program is expansive, including diagnostic radiology and nuclear medicine (molecular imaging), interventional radiology, a comprehensive cardiovascular program, multiple forms of radiation therapy (low dose rate brachytherapy, high dose rate brachytherapy, external beam radiotherapy, and gamma knife), and the Research Institute (including basic bench-top, human and animal). Each year, in the annual report, data are analyzed and then tracked and trended. While any summary report will, by nature, include items such as the number of pieces of equipment, inspections performed, staff monitored and educated, and other similar parameters, not all include an objective review of the quality and effectiveness of the program. Using objective numerical data, Beaumont adopted seven key performance indicators. The assertion made is that key performance indicators can be used to establish benchmarks for evaluation and comparison of the effectiveness and quality of radiation safety programs. Based on over a decade of data collection, and the adoption of key performance indicators, this paper demonstrates one way to establish objective benchmarking for radiation safety programs in the health care environment.

  5. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. As a result, it also discusses opportunities and challenges for future developments in these fields.

  6. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE PAGES

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.; ...

    2016-03-07

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. As a result, it also discusses opportunities and challenges for future developments in these fields.

  7. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g. critical experiments, flow loops, etc.) and there is a lack of relevant multi-physics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  8. Multiple exposures to indoor contaminants: Derivation of benchmark doses and relative potency factors based on male reprotoxic effects.

    PubMed

    Fournier, K; Tebby, C; Zeman, F; Glorennec, P; Zmirou-Navier, D; Bonvallot, N

    2016-02-01

    Semi-Volatile Organic Compounds (SVOCs) are commonly present in dwellings and several are suspected of having effects on male reproductive function mediated by an endocrine disruption mode of action. To improve knowledge of the health impact of these compounds, cumulative toxicity indicators are needed. This work derives Benchmark Doses (BMD) and Relative Potency Factors (RPF) for SVOCs acting on the male reproductive system through the same mode of action. We included SVOCs fulfilling the following conditions: detection frequency (>10%) in French dwellings, availability of data on the mechanism/mode of action for male reproductive toxicity, and availability of comparable dose-response relationships. Of 58 SVOCs selected, 18 induce a decrease in serum testosterone levels. Six have sufficient and comparable data to derive BMDs based on a 10% or 50% response. The SVOCs inducing the largest decrease in serum testosterone concentration are: for 10%, bisphenol A (BMD10 = 7.72E-07 mg/kg bw/d; RPF10 = 7,033,679); for 50%, benzo[a]pyrene (BMD50 = 0.030 mg/kg bw/d; RPF50 = 1630); the one inducing the smallest decrease is benzyl butyl phthalate (RPF10 and RPF50 = 0.095). This approach encompasses contaminants from diverse chemical families acting through similar modes of action, and makes possible a cumulative risk assessment in indoor environments. The main limitation remains the lack of comparable toxicological data. Copyright © 2015 Elsevier Inc. All rights reserved.
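
    The RPF arithmetic works as sketched below. The index-compound BMD10 is back-calculated from the bisphenol A and benzyl butyl phthalate RPF10 values reported above, while the exposure estimates are invented placeholders, so the cumulative dose is purely illustrative.

    ```python
    # Sketch of the RPF logic: potencies are expressed relative to an index
    # compound, and co-exposures are summed in index-equivalent doses.
    BMD10 = {                     # mg/kg bw/d at a 10% testosterone decrease
        "index_compound": 5.43,   # back-calculated from the reported RPF10 values
        "bisphenol_A": 7.72e-07,  # reported above
        "benzyl_butyl_phthalate": 57.2,
    }
    exposure = {                  # mg/kg bw/d indoor exposures (invented)
        "bisphenol_A": 1.0e-06,
        "benzyl_butyl_phthalate": 2.0e-03,
    }

    # RPF_i = BMD(index) / BMD(i)
    rpf = {c: BMD10["index_compound"] / b
           for c, b in BMD10.items() if c != "index_compound"}

    # Cumulative exposure in index-compound equivalents
    cumulative = sum(rpf[c] * exposure[c] for c in exposure)
    for c in exposure:
        print(f"{c}: RPF10 = {rpf[c]:.3g}")
    print(f"cumulative index-equivalent dose = {cumulative:.3g} mg/kg bw/d")
    ```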

  9. Alcohol calibration of tests measuring skills related to car driving.

    PubMed

    Jongen, Stefan; Vuurman, Eric; Ramaekers, Jan; Vermeeren, Annemiek

    2014-06-01

    Medication and illicit drugs can have detrimental side effects which impair driving performance. A drug's impairing potential should be determined by well-validated, reliable, and sensitive tests and ideally be calibrated against benchmark drugs and doses. To date, no consensus has been reached on the issue of which psychometric tests are best suited for initial screening of a drug's driving impairment potential. The aim of this alcohol calibration study is to determine which performance tests are useful for measuring drug-induced impairment. The effects of alcohol are used to compare the psychometric quality of the tests and as a benchmark to quantify performance changes in each test associated with potentially impairing drug effects. Twenty-four healthy volunteers participated in a double-blind, four-way crossover study. Treatments were placebo and three different doses of alcohol leading to blood alcohol concentrations (BACs) of 0.2, 0.5, and 0.8 g/L. Main effects of alcohol were found in most tests. Compared with placebo, performance in the Divided Attention Test (DAT) was significantly impaired after all alcohol doses, and performance in the Psychomotor Vigilance Test (PVT) and the Balance Test was impaired at BACs of 0.5 and 0.8 g/L. The largest effect sizes were found on postural balance with eyes open and on mean reaction time in the divided attention and psychomotor vigilance tests. The preferable tests for initial screening are the DAT and the PVT, as these tests were the most sensitive to the impairing effects of alcohol and reasonably valid for assessing potential driving impairment.

  10. Benchmarking the MCNP Monte Carlo code with a photon skyshine experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsher, R.H.; Hsu, Hsiao Hua; Harvey, W.F.

    1993-07-01

    The MCNP Monte Carlo transport code is used by the Los Alamos National Laboratory Health and Safety Division for a broad spectrum of radiation shielding calculations. One such application involves the determination of skyshine dose for a variety of photon sources. To verify the accuracy of the code, it was benchmarked with the Kansas State Univ. (KSU) photon skyshine experiment of 1977. The KSU experiment for the unshielded source geometry was simulated in great detail to include the contribution of groundshine, in-silo photon scatter, and the effect of spectral degradation in the source capsule. The standard deviation of the KSU experimental data was stated to be 7%, while the statistical uncertainty of the simulation was kept at or under 1%. The results of the simulation agreed closely with the experimental data, generally to within 6%. At distances of under 100 m from the silo, the modeling of the in-silo scatter was crucial to achieving close agreement with the experiment. Specifically, scatter off the top layer of the source cask accounted for approximately 12% of the dose at 50 m. At distances >300 m, using the 60Co line spectrum led to a dose overresponse as great as 19% at 700 m. It was necessary to use the actual source spectrum, which includes a Compton tail from photon collisions in the source capsule, to achieve close agreement with the experimental data. These results highlight the importance of using Monte Carlo transport techniques to account for the nonideal features of even simple experiments.

  11. Monte Carlo dose calculations in homogeneous media and at interfaces: a comparison between GEPTS, EGSnrc, MCNP, and measurements.

    PubMed

    Chibani, Omar; Li, X Allen

    2002-05-01

    Three Monte Carlo photon/electron transport codes (GEPTS, EGSnrc, and MCNP) are benchmarked against dose measurements in homogeneous (both low- and high-Z) media as well as at interfaces. A brief overview of the physical models used by each code for photon and electron (positron) transport is given. Absolute calorimetric dose measurements for 0.5 and 1 MeV electron beams incident on homogeneous and multilayer media are compared with the predictions of the three codes. Comparison with dose measurements in two-layer media exposed to a 60Co gamma source is also performed. In addition, comparisons between the codes (including the EGS4 code) are done for (a) 0.05 to 10 MeV electron beams and positron point sources in lead, (b) high-energy photons (10 and 20 MeV) irradiating a multilayer phantom (water/steel/air), and (c) simulation of a 90Sr/90Y brachytherapy source. Good agreement is observed between the calorimetric electron dose measurements and the predictions of GEPTS and EGSnrc in both homogeneous and multilayer media. MCNP outputs are found to be dependent on the energy-indexing method (Default/ITS style). This dependence is significant in homogeneous media as well as at interfaces. MCNP(ITS) fits the experimental data more closely than MCNP(DEF), except for the case of Be. At low energy (0.05 and 0.1 MeV), MCNP(ITS) dose distributions in lead show higher maxima in comparison with GEPTS and EGSnrc. EGS4 produces too-penetrating electron-dose distributions in high-Z media, especially at low energy (<0.1 MeV). For positrons, differences between GEPTS and EGSnrc are observed in lead because GEPTS distinguishes positrons from electrons in both its elastic multiple scattering and bremsstrahlung emission models. For the 60Co source, quite good agreement between calculations and measurements is observed with regard to the experimental uncertainty. For the other cases (10 and 20 MeV photon sources and the 90Sr/90Y beta source), good agreement is found between the three codes. In conclusion, differences between GEPTS and EGSnrc results are found to be very small for almost all media and energies studied. MCNP results depend significantly on the electron energy-indexing method.

  12. Population modelling to compare chronic external radiotoxicity between individual and population endpoints in four taxonomic groups.

    PubMed

    Alonzo, Frédéric; Hertel-Aas, Turid; Real, Almudena; Lance, Emilie; Garcia-Sanchez, Laurent; Bradshaw, Clare; Vives I Batlle, Jordi; Oughton, Deborah H; Garnier-Laplace, Jacqueline

    2016-02-01

    In this study, we modelled population responses to chronic external gamma radiation in 12 laboratory species (including aquatic and soil invertebrates, fish and terrestrial mammals). Our aim was to compare radiosensitivity between individual and population endpoints and to examine how internationally proposed benchmarks for environmental radioprotection protected species against various risks at the population level. To do so, we used population matrix models, combining life history and chronic radiotoxicity data (derived from laboratory experiments and described in the literature and the FREDERICA database) to simulate changes in population endpoints (net reproductive rate R0, asymptotic population growth rate λ, equilibrium population size Neq) for a range of dose rates. Elasticity analyses of the models showed that population responses differed depending on the affected individual endpoint (juvenile or adult survival, delay in maturity or reduction in fecundity), the considered population endpoint (R0, λ or Neq) and the life history of the studied species. Among population endpoints, the net reproductive rate R0 showed the lowest EDR10 (effective dose rate inducing a 10% effect) in all species, with values ranging from 26 μGy h⁻¹ in the mouse Mus musculus to 38,000 μGy h⁻¹ in the fish Oryzias latipes. For several species, EDR10 values for population endpoints were lower than the lowest EDR10 for individual endpoints. Various population-level risks, differing in severity for the population, were investigated. Population extinction (predicted when radiation effects cause the population growth rate λ to decrease below 1, indicating no population growth in the long term) was predicted for dose rates ranging from 2700 μGy h⁻¹ in fish to 12,000 μGy h⁻¹ in soil invertebrates. A milder risk, that population growth rate λ will be reduced by 10% of the reduction causing extinction, was predicted for dose rates ranging from 24 μGy h⁻¹ in mammals to 1800 μGy h⁻¹ in soil invertebrates. These predictions suggested that proposed reference benchmarks from the literature for different taxonomic groups protected all simulated species against population extinction. A generic reference benchmark of 10 μGy h⁻¹ protected all simulated species against 10% of the effect causing population extinction. Finally, a risk of pseudo-extinction, representing a slight but statistically significant population decline whose importance remains to be evaluated in natural settings, was predicted from 2.0 μGy h⁻¹ in mammals to 970 μGy h⁻¹ in soil invertebrates. Copyright © 2015 Elsevier Ltd. All rights reserved.
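
    A toy version of the matrix-model machinery, under assumptions we are making purely for illustration (a two-stage life cycle and a Hill-type dose-rate effect acting on fecundity only; all parameter values are invented):

    ```python
    # Illustrative sketch, not the authors' model: a two-stage matrix model
    # linking a chronic dose-rate effect on an individual endpoint (fecundity)
    # to the population endpoints lambda and R0 discussed above.
    import numpy as np

    def pop_endpoints(dose_rate, ec50=1000.0, hill=1.5):
        """Return (lambda, R0) for an assumed two-stage life cycle."""
        s_j, s_a, f0 = 0.3, 0.7, 4.0                 # baseline life history (invented)
        f = f0 / (1 + (dose_rate / ec50) ** hill)    # assumed concentration-response
        A = np.array([[0.0, f],                      # offspring produced by adults
                      [s_j, s_a]])                   # juvenile -> adult survival
        lam = max(np.real(np.linalg.eigvals(A)))     # asymptotic growth rate
        r0 = f * s_j / (1 - s_a)                     # lifetime offspring per newborn
        return lam, r0

    for d in [0, 10, 100, 1000, 10000]:              # uGy/h
        lam, r0 = pop_endpoints(d)
        print(f"{d:>6} uGy/h: lambda = {lam:.3f}, R0 = {r0:.2f}")
    ```

    Scanning dose rates this way is how an EDR10 or an extinction threshold (the dose rate driving λ below 1) would be read off the response curve.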

  13. Establishing Language Benchmarks for Children with Typically Developing Language and Children with Language Impairment

    ERIC Educational Resources Information Center

    Schmitt, Mary Beth; Logan, Jessica A. R.; Tambyraja, Sherine R.; Farquharson, Kelly; Justice, Laura M.

    2017-01-01

    Purpose: Practitioners, researchers, and policymakers (i.e., stakeholders) have vested interests in children's language growth yet currently do not have empirically driven methods for measuring such outcomes. The present study established language benchmarks for children with typically developing language (TDL) and children with language…

  14. Children's Services Statistical Neighbour Benchmarking Tool. Practitioner User Guide

    ERIC Educational Resources Information Center

    National Foundation for Educational Research, 2007

    2007-01-01

    Statistical neighbour models provide one method for benchmarking progress. For each local authority (LA), these models designate a number of other LAs deemed to have similar characteristics. These designated LAs are known as statistical neighbours. Any LA may compare its performance (as measured by various indicators) against its statistical…

  15. International E-Benchmarking: Flexible Peer Development of Authentic Learning Principles in Higher Education

    ERIC Educational Resources Information Center

    Leppisaari, Irja; Vainio, Leena; Herrington, Jan; Im, Yeonwook

    2011-01-01

    More and more, social technologies and virtual work methods are facilitating new ways of crossing boundaries in professional development and international collaborations. This paper examines the peer development of higher education teachers through the experiences of the IVBM project (International Virtual Benchmarking, 2009-2010). The…

  16. Identifying key genes in glaucoma based on a benchmarked dataset and the gene regulatory network.

    PubMed

    Chen, Xi; Wang, Qiao-Ling; Zhang, Meng-Hui

    2017-10-01

    The current study aimed to identify key genes in glaucoma based on a benchmarked dataset and a gene regulatory network (GRN). Local and global noise was added to the gene expression dataset to produce the benchmarked dataset. Differentially-expressed genes (DEGs) between patients with glaucoma and normal controls were identified utilizing the Linear Models for Microarray Data (Limma) package, based on the benchmarked dataset. A total of 5 GRN inference methods, including Zscore, GeneNet, the context likelihood of relatedness (CLR) algorithm, Partial Correlation coefficient with Information Theory (PCIT) and GEne Network Inference with Ensemble of Trees (Genie3), were evaluated using receiver operating characteristic (ROC) and precision and recall (PR) curves. The inference method with the best performance was selected to construct the GRN. Subsequently, topological centrality analysis (degree, closeness and betweenness) was conducted to identify key genes in the GRN of glaucoma. Finally, the key genes were validated by performing reverse transcription-quantitative polymerase chain reaction (RT-qPCR). A total of 176 DEGs were detected from the benchmarked dataset. The ROC and PR curves of the 5 methods were analyzed and it was determined that Genie3 had a clear advantage over the other methods; thus, Genie3 was used to construct the GRN. Following topological centrality analysis, 14 key genes for glaucoma were identified, including IL6, EPHA2 and GSTT1; 5 of these 14 key genes were validated by RT-qPCR. Therefore, the current study identified 14 key genes in glaucoma, which may be potential biomarkers for use in the diagnosis of glaucoma and aid in identifying the molecular mechanism of this disease.

  17. Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images.

    PubMed

    Karim, Rashed; Bhagirath, Pranav; Claus, Piet; James Housden, R; Chen, Zhong; Karimaghaloo, Zahra; Sohn, Hyon-Mok; Lara Rodríguez, Laura; Vera, Sergio; Albà, Xènia; Hennemuth, Anja; Peitgen, Heinz-Otto; Arbel, Tal; Gonzàlez Ballester, Miguel A; Frangi, Alejandro F; Götte, Marco; Razavi, Reza; Schaeffter, Tobias; Rhode, Kawal

    2016-05-01

    Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae of myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed-thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
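
    For orientation, a minimal sketch of two ingredients of such an evaluation: one common variant of the FWHM fixed-thresholding rule (implementations differ in how the remote region is chosen) and the Dice overlap against a ground truth, both on synthetic intensities.

    ```python
    # Hedged sketch on synthetic data: FWHM thresholding and Dice overlap.
    import numpy as np

    rng = np.random.default_rng(2)
    myocardium = rng.normal(300, 40, (64, 64))             # fake LGE intensities
    truth = np.zeros((64, 64), dtype=bool)
    truth[20:35, 20:40] = True
    myocardium[truth] += rng.normal(600, 60, truth.sum())  # enhanced infarct core

    # FWHM rule: threshold halfway between remote-myocardium intensity and the
    # maximum enhanced intensity (remote region here taken, for simplicity,
    # as all non-infarct pixels; a real pipeline would define it manually)
    remote = myocardium[~truth].mean()
    threshold = remote + 0.5 * (myocardium.max() - remote)
    segmentation = myocardium >= threshold

    dice = 2 * (segmentation & truth).sum() / (segmentation.sum() + truth.sum())
    print(f"FWHM threshold = {threshold:.0f}, Dice vs ground truth = {dice:.2f}")
    ```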

  18. WWTP dynamic disturbance modelling--an essential module for long-term benchmarking development.

    PubMed

    Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    Intensive use of the benchmark simulation model No. 1 (BSM1), a protocol for objective comparison of the effectiveness of control strategies in biological nitrogen removal activated sludge plants, has also revealed a number of limitations. Preliminary definitions of the long-term benchmark simulation model No. 1 (BSM1_LT) and the benchmark simulation model No. 2 (BSM2) have been made to extend BSM1 for evaluation of process monitoring methods and plant-wide control strategies, respectively. Influent-related disturbances for BSM1_LT/BSM2 are to be generated with a model, and this paper provides a general overview of the modelling methods used. Typical influent dynamic phenomena generated with the BSM1_LT/BSM2 influent disturbance model, including diurnal, weekend, seasonal and holiday effects, as well as rainfall, are illustrated with simulation results. As a result of the work described in this paper, a proposed influent model/file has been released to the benchmark developers for evaluation purposes. Pending this evaluation, a final BSM1_LT/BSM2 influent disturbance model definition is foreseen. Preliminary simulations with dynamic influent data generated by the influent disturbance model indicate that default BSM1 activated sludge plant control strategies will need extensions for BSM1_LT/BSM2 to efficiently handle 1 year of influent dynamics.
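
    A minimal sketch of the kinds of phenomena such an influent generator superimposes, with invented magnitudes; the actual BSM1_LT/BSM2 influent disturbance model is far more detailed.

    ```python
    # Illustrative sketch, not the BSM influent model: diurnal, weekend and
    # seasonal harmonics plus random rain events on a dry-weather base flow.
    import numpy as np

    t = np.arange(0, 365 * 24, 1.0)                      # one year, hourly steps
    base = 20000.0                                       # m3/d dry-weather flow
    diurnal = 0.25 * np.sin(2 * np.pi * (t - 8) / 24)    # crude daily peak pattern
    weekend = -0.10 * (((t // 24) % 7) >= 5)             # lower weekend loading
    seasonal = 0.10 * np.sin(2 * np.pi * t / (365 * 24)) # seasonal infiltration

    rng = np.random.default_rng(3)
    rain = np.zeros_like(t)
    storm_starts = rng.choice(len(t), size=40, replace=False)  # ~40 events/year
    for s in storm_starts:
        duration = rng.integers(3, 24)                   # storm length in hours
        rain[s:s + duration] += rng.uniform(0.3, 1.5)    # storm flow multiplier

    flow = base * (1 + diurnal + weekend + seasonal + rain)
    print(f"mean flow {flow.mean():.0f} m3/d, wet-weather max {flow.max():.0f} m3/d")
    ```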

  19. Risk-based criteria to support validation of detection methods for drinking water and air.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonell, M.; Bhattacharyya, M.; Finster, M.

    2009-02-18

    This report was prepared to support the validation of analytical methods for threat contaminants under the U.S. Environmental Protection Agency (EPA) National Homeland Security Research Center (NHSRC) program. It is designed to serve as a resource for certain applications of benchmark and fate information for homeland security threat contaminants. The report identifies risk-based criteria from existing health benchmarks for drinking water and air for potential use as validation targets. The focus is on benchmarks for chronic public exposures. The priority sources are standard EPA concentration limits for drinking water and air, along with oral and inhalation toxicity values. Many contaminants identified as homeland security threats to drinking water or air would convert to other chemicals within minutes to hours of being released. For this reason, a fate analysis has been performed to identify potential transformation products and removal half-lives in air and water so appropriate forms can be targeted for detection over time. The risk-based criteria presented in this report to frame method validation are expected to be lower than actual operational targets based on realistic exposures following a release. Note that many target criteria provided in this report are taken from available benchmarks without assessing the underlying toxicological details. That is, although the relevance of the chemical form and analogues are evaluated, the toxicological interpretations and extrapolations conducted by the authoring organizations are not. It is also important to emphasize that such targets in the current analysis are not health-based advisory levels to guide homeland security responses. This integrated evaluation of chronic public benchmarks and contaminant fate has identified more than 200 risk-based criteria as method validation targets across numerous contaminants and fate products in drinking water and air combined. The gap in directly applicable values is considerable across the full set of threat contaminants, so preliminary indicators were developed from other well-documented benchmarks to serve as a starting point for validation efforts. By this approach, at least preliminary context is available for water or air, and sometimes both, for all chemicals on the NHSRC list that was provided for this evaluation. This means that a number of concentrations presented in this report represent indirect measures derived from related benchmarks or surrogate chemicals, as described within the many results tables provided in this report.

  20. Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy

    PubMed Central

    Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki

    2013-01-01

    We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of rigorous method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation, helping to collect class labels, spatial spans, and the expert's confidence for lesions, and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787

  1. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebe, A.; Leveling, A.; Lu, T.

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed a good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  2. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    NASA Astrophysics Data System (ADS)

    Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.

    2018-01-01

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  3. Using Benchmarking Techniques and the 2011 Maternity Practices Infant Nutrition and Care (mPINC) Survey to Improve Performance among Peer Groups across the United States

    PubMed Central

    Edwards, Roger A.; Dee, Deborah; Umer, Amna; Perrine, Cria G.; Shealy, Katherine R.; Grummer-Strawn, Laurence M.

    2015-01-01

    Background: A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. Objective: The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. Methods: We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking first by region (grouping states by West, Midwest, South, and Northeast) and then by size (grouping states by the number of maternity facilities and dividing each region into approximately equal halves based on the number of facilities). Results: Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to reflect the lowest score gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, the Midwest large, the Midwest small, and the South large peer groups, 4–6 benchmarks showed that fewer than 50% of hospitals had ideal practice in all states. Conclusion: The evaluation presents benchmarks for peer-group state comparisons that provide potential and feasible targets for improvement. PMID:24394963

  4. Assessment of composite motif discovery methods.

    PubMed

    Klepper, Kjetil; Sandve, Geir K; Abul, Osman; Johansen, Jostein; Drablos, Finn

    2008-02-26

    Computational discovery of regulatory elements is an important area of bioinformatics research and more than a hundred motif discovery methods have been published. Traditionally, most of these methods have addressed the problem of single motif discovery: discovering binding motifs for individual transcription factors. In higher organisms, however, transcription factors usually act in combination with nearby bound factors to induce specific regulatory behaviours. Hence, recent focus has shifted from single motifs to the discovery of sets of motifs bound by multiple cooperating transcription factors, so-called composite motifs or cis-regulatory modules. Given the large number and diversity of methods available, independent assessment of methods becomes important. Although there have been several benchmark studies of single motif discovery, no similar studies have previously been conducted concerning composite motif discovery. We have developed a benchmarking framework for composite motif discovery and used it to evaluate the performance of eight published module discovery tools. Benchmark datasets were constructed based on real genomic sequences containing experimentally verified regulatory modules, and the module discovery programs were asked both to predict the locations of these modules and to specify the single motifs involved. To aid the programs in their search, we provided position weight matrices corresponding to the binding motifs of the transcription factors involved. In addition, selections of decoy matrices were mixed with the genuine matrices on one dataset to test the response of programs to varying levels of noise. Although some of the methods tested tended to score somewhat better than others overall, there were still large variations between individual datasets and no single method performed consistently better than the rest in all situations. The variation in performance on individual datasets also shows that the new benchmark datasets represent a suitable variety of challenges to most methods for module discovery.

  5. A Signal-to-Noise Crossover Dose as the Point of Departure for Health Risk Assessment

    PubMed Central

    Portier, Christopher J.; Krewski, Daniel

    2011-01-01

    Background: The U.S. National Toxicology Program (NTP) cancer bioassay database provides an opportunity to compare both existing and new approaches to determining points of departure (PoDs) for establishing reference doses (RfDs). Objectives: The aims of this study were a) to investigate the risk associated with the traditional PoD used in human health risk assessment [the no observed adverse effect level (NOAEL)]; b) to present a new approach based on the signal-to-noise crossover dose (SNCD); and c) to compare the SNCD and SNCD-based RfD with PoDs and RfDs based on the NOAEL and benchmark dose (BMD) approaches. Methods: The complete NTP database was used as the basis for these analyses, which were performed using the Hill model. We determined NOAELs and estimated corresponding extra risks. Lower 95% confidence bounds on the BMD (BMDLs) corresponding to extra risks of 1%, 5%, and 10% (BMDL01, BMDL05, and BMDL10, respectively) were also estimated. We introduce the SNCD as a new PoD, defined as the dose where the additional risk is equal to the “background noise” (the difference between the upper and lower bounds of the two-sided 90% confidence interval on absolute risk) or a specified fraction thereof. Results: The median risk at the NOAEL was approximately 10%, and the default uncertainty factor (UF = 100) was considered most applicable to the BMDL10. Therefore, we chose a target risk of 1/1,000 (0.1/100) to derive an SNCD-based RfD by linear extrapolation. At the median, this approach provided the same RfD as the BMDL10 divided by the default UF. Conclusions: Under a standard BMD approach, the BMDL10 is considered to be the most appropriate PoD. The SNCD approach, which is based on the lowest dose at which the signal can be reliably detected, warrants further development as a PoD for human health risk assessment. PMID:21813365
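
    As a rough illustration of the SNCD definition above (the dose at which the additional risk over background equals the "background noise"), here is a numeric sketch under a Hill dose-response; the Hill parameters and the constant noise level stand in for model fits and CI widths and are not from the paper.

    ```python
    from scipy.optimize import brentq

    def hill_risk(d, bg=0.05, top=0.60, k=50.0, n=2.0):
        """Hill dose-response on the absolute-risk scale (all parameters invented)."""
        return bg + (top - bg) * d**n / (k**n + d**n)

    def additional_risk(d):
        return hill_risk(d) - hill_risk(0.0)

    # Stand-in for the "background noise": the width of the two-sided 90% CI
    # on absolute risk, treated here as a constant for simplicity.
    noise = 0.02

    # The SNCD is the dose where the additional risk first equals the noise.
    sncd = brentq(lambda d: additional_risk(d) - noise, 1e-6, 50.0)
    print(f"SNCD ~ {sncd:.2f} dose units (additional risk = {noise})")
    ```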

  6. Clinical decision-making tools for exam selection, reporting and dose tracking.

    PubMed

    Brink, James A

    2014-10-01

    Although many efforts have been made to reduce the radiation dose associated with individual medical imaging examinations to "as low as reasonably achievable," efforts to ensure such examinations are performed only when medically indicated and appropriate are equally if not more important. Variations in the use of ionizing radiation for medical imaging are concerning, regardless of whether they occur on a local, regional or national basis. Such variations among practices can be reduced with the use of decision support tools at the time of order entry, which help ensure the appropriate use of medical imaging and thereby reduce radiation exposure. Similarly, adoption of best practices among imaging facilities can be promoted by tracking the radiation exposure of imaging patients. Practices can benchmark their aggregate radiation exposures for medical imaging through the use of dose index registries. However, several variables must be considered when contemplating individual patient dose tracking. The specific dose measures, and the variation among them introduced by differences in body habitus, must be understood. Moreover, the uncertainties in risk estimation from dose metrics related to age, gender and life expectancy must also be taken into account.

  7. A diversity index for model space selection in the estimation of benchmark and infectious doses via model averaging.

    PubMed

    Kim, Steven B; Kodell, Ralph L; Moon, Hojin

    2014-03-01

    In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
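
    The model-averaging step that the diversity index builds on can be sketched as an Akaike-weighted combination of per-model effective-dose estimates; the model names, AIC values, and ED10 estimates below are invented, and the DI itself is not reproduced here.

    ```python
    import math

    # (model name, AIC from its fit, ED10 estimate from that model) - invented
    fits = [("logistic", 102.4, 3.1), ("weibull", 103.0, 2.2), ("gamma", 105.9, 4.0)]

    # Akaike weights: exp(-delta_AIC / 2), normalized over the model space
    aic_min = min(aic for _, aic, _ in fits)
    raw = [(name, math.exp(-0.5 * (aic - aic_min)), ed) for name, aic, ed in fits]
    total = sum(w for _, w, _ in raw)

    ed10_ma = 0.0
    for name, w, ed in raw:
        w /= total
        ed10_ma += w * ed
        print(f"{name}: weight {w:.3f}, ED10 {ed}")
    print(f"model-averaged ED10 ~ {ed10_ma:.2f}")
    ```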

  8. Quantitative assessment of the dose-response of alkylating agents in DNA repair proficient and deficient Ames tester strains.

    PubMed

    Tang, Leilei; Guérard, Melanie; Zeller, Andreas

    2014-01-01

    Mutagenic and clastogenic effects of some DNA damaging agents such as methyl methanesulfonate (MMS) and ethyl methanesulfonate (EMS) have been demonstrated to exhibit a nonlinear or even "thresholded" dose-response in vitro and in vivo. DNA repair seems to be mainly responsible for these thresholds. To this end, we assessed several mutagenic alkylators in the Ames test with four different strains of Salmonella typhimurium: the alkyl transferases proficient strain TA1535 (Ogt+/Ada+), as well as the alkyl transferases deficient strains YG7100 (Ogt+/Ada-), YG7104 (Ogt-/Ada+) and YG7108 (Ogt-/Ada-). The known genotoxins EMS, MMS, temozolomide (TMZ), ethylnitrosourea (ENU) and methylnitrosourea (MNU) were tested in as many as 22 concentration levels. Dose-response curves were statistically fitted by the PROAST benchmark dose model and the Lutz-Lutz "hockeystick" model. These dose-response curves suggest efficient DNA-repair for lesions inflicted by all agents in strain TA1535. In the absence of Ogt, Ada is predominantly repairing methylations but not ethylations. It is concluded that the capacity of alkyl-transferases to successfully repair DNA lesions up to certain dose levels contributes to genotoxicity thresholds. Copyright © 2013 Wiley Periodicals, Inc.

  9. Low-dose CT image reconstruction using gain intervention-based dictionary learning

    NASA Astrophysics Data System (ADS)

    Pathak, Yadunath; Arya, K. V.; Tiwari, Shailendra

    2018-05-01

    Computed tomography (CT) is extensively utilized in clinical diagnosis. However, the X-ray dose absorbed by the human body may induce somatic damage such as cancer. Owing to this radiation risk, research has focused on the radiation exposure delivered to patients through CT investigations. Therefore, low-dose CT has become a significant research area. Many researchers have proposed different low-dose CT reconstruction techniques, but these techniques suffer from issues such as over-smoothing, artifacts, and noise. Therefore, in this paper, we propose a novel integrated low-dose CT reconstruction technique. The proposed technique utilizes global dictionary-based statistical iterative reconstruction (GDSIR) and adaptive dictionary-based statistical iterative reconstruction (ADSIR). If the dictionary (D) is predetermined, GDSIR can be used; if D is defined adaptively, ADSIR is the appropriate choice. A gain intervention-based filter is also used as a post-processing technique for removing artifacts from the reconstructed low-dose CT images. Experiments were performed with the proposed and other low-dose CT reconstruction techniques on well-known benchmark CT images. Extensive experiments have shown that the proposed technique outperforms the available approaches.

  10. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. In this study, it is identified that the calculated forecast skill can vary depending on the benchmark selected, and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark that has the most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in the evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ, so that forecasters can trust their skill evaluation and have confidence that their forecasts are indeed better.
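
    The skill computation described above can be illustrated with the empirical ensemble CRPS and the usual benchmark-relative skill score (1 minus the ratio of CRPS values); the abstract gives no formulas, so this is a plausible reading with invented numbers.

    ```python
    def crps_ensemble(members, obs):
        """Empirical CRPS: E|X - obs| - 0.5 * E|X - X'| for ensemble X."""
        m = len(members)
        term1 = sum(abs(x - obs) for x in members) / m
        term2 = sum(abs(x - y) for x in members for y in members) / (2 * m * m)
        return term1 - term2

    forecast = [12.0, 14.5, 13.2, 15.1, 12.8]   # ensemble streamflow members (invented)
    benchmark = [10.0, 10.5, 9.8, 10.2, 10.1]   # e.g. a persistence-based ensemble
    obs = 13.5

    crps_f = crps_ensemble(forecast, obs)
    crps_b = crps_ensemble(benchmark, obs)
    skill = 1.0 - crps_f / crps_b  # > 0 means the system beats the benchmark
    print(f"CRPS forecast={crps_f:.3f}, benchmark={crps_b:.3f}, skill={skill:.2f}")
    ```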

  11. Monte Carlo proton dose calculations using a radiotherapy specific dual-energy CT scanner for tissue segmentation and range assessment

    NASA Astrophysics Data System (ADS)

    Almeida, Isabel P.; Schyns, Lotte E. J. R.; Vaniqui, Ana; van der Heyden, Brent; Dedes, George; Resch, Andreas F.; Kamp, Florian; Zindler, Jaap D.; Parodi, Katia; Landry, Guillaume; Verhaegen, Frank

    2018-06-01

    Proton beam ranges derived from dual-energy computed tomography (DECT) images from a dual-spiral radiotherapy (RT)-specific CT scanner were assessed using Monte Carlo (MC) dose calculations. Images from a dual-source and a twin-beam DECT scanner were also used to establish a comparison to the RT-specific scanner. Proton ranges extracted from conventional single-energy CT (SECT) were additionally computed to benchmark against literature values. Using two phantoms, a DECT methodology was tested as input for GEANT4 MC proton dose calculations. Proton ranges were calculated for different mono-energetic proton beams irradiating both phantoms; the results were compared to the ground truth based on the phantom compositions. The same methodology was applied in a head-and-neck cancer patient using both SECT and dual-spiral DECT scans from the RT-specific scanner. A pencil-beam-scanning plan was designed, which was subsequently optimized by MC dose calculations, and differences in proton range for the different image-based simulations were assessed. For the phantoms, the DECT method yielded overall better material segmentation, with >86% of the voxels correctly assigned for the dual-spiral and dual-source scanners, but only 64% for the twin-beam scanner. For the calibration phantom, the dual-spiral scanner yielded range errors below 1.2 mm (0.6% of range), similar to the errors yielded by the dual-source scanner (<1.1 mm, <0.5%). With the validation phantom, the dual-spiral scanner yielded errors below 0.8 mm (0.9%), whereas SECT yielded errors up to 1.6 mm (2%). For the patient case, where the absolute truth was missing, proton range differences between DECT and SECT were on average -1.2 ± 1.2 mm (-0.5% ± 0.5%). MC dose calculations were successfully performed on DECT images, where the dual-spiral scanner resulted in media segmentation and range accuracy as good as the dual-source CT. In the patient, the various methods showed relevant range differences.

  12. SU-E-T-587: Optimal Volumetric Modulated Arc Radiotherapy Treatment Planning Technique for Multiple Brain Metastases with Increasing Number of Arcs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keeling, V; Hossain, S; Hildebrand, K

    Purpose: To show improvements in dose conformity and normal brain tissue sparing using an optimal planning technique (OPT) against a clinically acceptable planning technique (CAP) in the treatment of multiple brain metastases. Methods: A standardized international benchmark case with 12 intracranial tumors was planned using two different VMAT optimization methods. Plans were split into four groups with 3, 6, 9, and 12 targets, each planned with 3, 5, and 7 arcs using the Eclipse TPS. The beam geometries were 1 full coplanar and half non-coplanar arcs. A prescription dose of 20 Gy was used for all targets. The following optimization criteria were used (OPT vs. CAP): (no upper limit vs. 108% upper limit for target volume), (priority 140–150 vs. 75–85 for normal-brain-tissue), and (selection of automatic sparing Normal-Tissue-Objective (NTO) vs. manual NTO). Both had priority 50 for critical structures such as the brainstem and optic chiasm, and both had an NTO priority of 150. Normal-brain-tissue doses along with the Paddick Conformity Index (PCI) were evaluated. Results: In all cases PCI was higher for OPT plans. The average PCI (OPT, CAP) for all targets was (0.81, 0.64), (0.81, 0.63), (0.79, 0.57), and (0.72, 0.55) for 3, 6, 9, and 12 target plans, respectively. The percent decrease in normal brain tissue volume (OPT/CAP*100) achieved by OPT plans was (reported as follows: V4, V8, V12, V16, V20) (184, 343, 350, 294, 371%), (192, 417, 380, 299, 360%), and (235, 390, 299, 281, 502%) for the 3, 5, 7 arc 12 target plans, respectively. The maximum brainstem dose decreased for the OPT plan by 4.93, 4.89, and 5.30 Gy for the 3, 5, 7 arc 12 target plans, respectively. Conclusion: Substantial increases in PCI and critical structure sparing, and decreases in normal brain tissue dose, were achieved by eliminating upper limits from optimization, using the automatic normal-tissue-sparing function with high priority, and assigning a high priority to normal brain tissue.

  13. TU-AB-BRC-10: Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison of GPU and MIC Computing Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Lin, H; Xu, X

    Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore software micro-optimization methods. Methods: The patient-specific source of the Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014 41(7) Medical Physics). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA's database, partial geometry information of the jaw and MLC, as well as the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by the ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, focusing on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam case, while on three 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.

  14. Low utilisation of diabetes medicines in Iran, despite their affordability (2000-2012): a time-series and benchmarking study.

    PubMed

    Sarayani, Amir; Rashidian, Arash; Gholami, Kheirollah

    2014-10-16

    Diabetes is a major public health concern worldwide, particularly in low-income and middle-income countries (LMICs). Limited data exist on the status of access to diabetes medicines in LMICs. We assessed the utilisation and affordability of diabetes medicines in Iran as a middle-income country. We used a retrospective time-series design (2000-2012) and assessed national diabetes medicines' utilisation using pharmaceutical wholesale data. We calculated the defined daily doses per 1000 inhabitants per day (DDDs/1000 inhabitants/day; DIDs) indicator. Findings were benchmarked against data from Organization for Economic Co-operation and Development (OECD) countries. We also employed the Drug Utilization-90% (DU-90) method and compared the DU-90 segment with the Essential Medicines List published by the WHO. We measured affordability as the number of minimum daily wages required to purchase a one-month treatment course. Diabetes medicines' consumption increased from 4.47 to 33.54 DIDs. The benchmarking showed that medicines' utilisation in Iran in 2011 was only 54% of the median DIDs of 22 OECD countries. Oral hypoglycaemic agents accounted for over 80% of use throughout the study period. Regular and isophane (NPH) insulin, glibenclamide, metformin and gliclazide were the DU-90 drugs in 2012. Metformin, glibenclamide and regular/NPH insulin combination therapy were affordable throughout the study period (∼0.4, ∼0.1, ∼0.3 of the minimum daily wage, respectively). While the affordability of novel insulin preparations improved over time, they were still unaffordable in 2012. The utilisation of diabetes medicines was relatively low, perhaps due to underdiagnosis and inadequate management of patients with diabetes. This occurred despite the affordability of essential diabetes medicines in Iran. Appropriate policies are required to address the underutilisation of diabetes medicines in Iran. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
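
    For illustration, the DID indicator used above can be computed as total DDDs normalized by population and days; the sales volume, DDD size, and population below are invented round figures, not the study's data.

    ```python
    def did(total_mg_sold, ddd_mg, population, days):
        """DDDs per 1000 inhabitants per day (the DID indicator)."""
        total_ddds = total_mg_sold / ddd_mg
        return total_ddds * 1000.0 / (population * days)

    # e.g. a metformin-like drug with a 2000 mg WHO DDD; all inputs invented
    print(f"{did(total_mg_sold=1.8e12, ddd_mg=2000, population=75_000_000, days=365):.1f} DIDs")
    ```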

  15. A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams (2010) (External Review Draft)

    EPA Science Inventory

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for disso...

  16. Recommendations for Benchmarking Web Site Usage among Academic Libraries.

    ERIC Educational Resources Information Center

    Hightower, Christy; Sih, Julie; Tilghman, Adam

    1998-01-01

    To help library directors and Web developers create a benchmarking program to compare statistics of academic Web sites, the authors analyzed the Web server log files of 14 university science and engineering libraries. Recommends a centralized voluntary reporting structure coordinated by the Association of Research Libraries (ARL) and a method for…

  17. Quality Enhancement on E-Learning

    ERIC Educational Resources Information Center

    Ossiannilsson, E. S. I.

    2012-01-01

    Purpose: Benchmarking, a method for quality assurance, has not been very commonly used in higher education with regard to e-learning. Today, e-learning is an integral part of higher education, and so should also be an integral part of quality assurance systems. However, quality indicators, benchmarks and critical success factors on e-learning have…

  18. Benchmarking for maximum value.

    PubMed

    Baldwin, Ed

    2009-03-01

    Speaking at the most recent Healthcare Estates conference, Ed Baldwin, of international built asset consultancy EC Harris LLP, examined the role of benchmarking and market-testing--two of the key methods used to evaluate the quality and cost-effectiveness of hard and soft FM services provided under PFI healthcare schemes to ensure they are offering maximum value for money.

  19. A generic high-dose rate ¹⁹²Ir brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballester, Facundo, E-mail: Facundo.Ballester@uv.es; Carlsson Tedgren, Åsa; Granero, Domingo

    Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) ¹⁹²Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR ¹⁹²Ir source was designed based on commercially available sources, as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic ¹⁹²Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra® Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS™]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of 201³ voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR ¹⁹²Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by different investigators. MC results were then compared against dose calculated using TG-43 and MBDCA methods. Results: TG-43 and PSS datasets were generated for the generic source, the PSS data for use with the ACE algorithm. The dose-rate constant values obtained from seven MC simulations, performed independently using different codes, were in excellent agreement, yielding an average of 1.1109 ± 0.0004 cGy/(h U) (k = 1, Type A uncertainty). MC calculated dose-rate distributions for the two plans were also found to be in excellent agreement, with differences within type A uncertainties. Differences between commercial MBDCA and MC results were test, position, and calculation parameter dependent. On average, however, these differences were within 1% for ACUROS and 2% for ACE at clinically relevant distances. Conclusions: A hypothetical, generic HDR ¹⁹²Ir source was designed and implemented in two commercially available TPSs employing different MBDCAs. Reference dose distributions for this source were benchmarked and used for the evaluation of MBDCA calculations employing a virtual, cubic water phantom in the form of a CT DICOM image series. The implementation of a generic source of identical design in all TPSs using MBDCAs is an important step toward supporting univocal commissioning procedures and direct comparisons between TPSs.

  20. Exposing Exposure: Automated Anatomy-specific CT Radiation Exposure Extraction for Quality Assurance and Radiation Monitoring

    PubMed Central

    Warden, Graham I.; Farkas, Cameron E.; Ikuta, Ichiro; Prevedello, Luciano M.; Andriole, Katherine P.; Khorasani, Ramin

    2012-01-01

    Purpose: To develop and validate an informatics toolkit that extracts anatomy-specific computed tomography (CT) radiation exposure metrics (volume CT dose index and dose-length product) from existing digital image archives through optical character recognition of CT dose report screen captures (dose screens) combined with Digital Imaging and Communications in Medicine attributes. Materials and Methods: This institutional review board–approved HIPAA-compliant study was performed in a large urban health care delivery network. Data were drawn from a random sample of CT encounters that occurred between 2000 and 2010; images from these encounters were contained within the enterprise image archive, which encompassed images obtained at an adult academic tertiary referral hospital and its affiliated sites, including a cancer center, a community hospital, and outpatient imaging centers, as well as images imported from other facilities. Software was validated by using 150 randomly selected encounters for each major CT scanner manufacturer, with outcome measures of dose screen retrieval rate (proportion of correctly located dose screens) and anatomic assignment precision (proportion of extracted exposure data with correctly assigned anatomic region, such as head, chest, or abdomen and pelvis). The 95% binomial confidence intervals (CIs) were calculated for discrete proportions, and CIs were derived from the standard error of the mean for continuous variables. After validation, the informatics toolkit was used to populate an exposure repository from a cohort of 54 549 CT encounters, of which 29 948 had available dose screens. Results: Validation yielded a dose screen retrieval rate of 99% (597 of 605 CT encounters; 95% CI: 98%, 100%) and an anatomic assignment precision of 94% (summed DLP fraction correct 563 in 600 CT encounters; 95% CI: 92%, 96%). Patient safety applications of the resulting data repository include benchmarking between institutions, CT protocol quality control and optimization, and cumulative patient- and anatomy-specific radiation exposure monitoring. Conclusion: Large-scale anatomy-specific radiation exposure data repositories can be created with high fidelity from existing digital image archives by using open-source informatics tools. ©RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12111822/-/DC1 PMID:22668563
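
    As a small worked example of the validation statistics quoted above, the retrieval-rate CI can be reproduced with a normal-approximation binomial interval (the paper does not state which binomial CI method it used, so this is an assumption):

    ```python
    import math

    def binom_ci95(successes, n):
        """Normal-approximation 95% CI for a proportion."""
        p = successes / n
        half = 1.96 * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)

    p, lo, hi = binom_ci95(597, 605)  # dose screen retrieval counts from the abstract
    print(f"retrieval rate {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
    ```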

  1. A methodology for direct quantification of over-ranging length in helical computed tomography with real-time dosimetry.

    PubMed

    Tien, Christopher J; Winslow, James F; Hintenlang, David E

    2011-01-31

    In helical computed tomography (CT), reconstruction information from volumes adjacent to the clinical volume of interest (VOI) is required for proper reconstruction. Previous studies have relied upon either operator console readings or indirect extrapolation of measurements in order to determine the over-ranging length of a scan. This paper presents a methodology for the direct quantification of over-ranging dose contributions using real-time dosimetry. A Siemens SOMATOM Sensation 16 multislice helical CT scanner is used with a novel real-time "point" fiber-optic dosimeter system with 10 ms temporal resolution to measure over-ranging length, which is also expressed as dose-length product (DLP). Film was used to benchmark the exact length of over-ranging. Over-ranging length varied from 4.38 cm at a pitch of 0.5 to 6.72 cm at a pitch of 1.5, which corresponds to DLPs of 131 to 202 mGy-cm. The dose-extrapolation method of Van der Molen et al. yielded results within 3%, while the console reading method of Tzedakis et al. yielded consistently larger over-ranging lengths. From film measurements, it was determined that Tzedakis et al. overestimated over-ranging lengths by one-half of the beam collimation width. Over-ranging length measured as a function of reconstruction slice thickness produced two linear regions, similar to previous publications. Over-ranging is quantified with both absolute length and DLP, contributing about 60 mGy-cm, or about 10% of the DLP, for a routine abdominal scan. This paper presents a direct physical measurement of over-ranging length within 10% of previous methodologies. Current uncertainties are less than 1%, in comparison with 5% in other methodologies. Clinical implementation can be simplified by using only one dosimeter if codependence with console readings is acceptable, with an uncertainty of 1.1%. This methodology will be applied to different vendors, models, and postprocessing methods--which have been shown to produce over-ranging lengths differing by 125%.
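
    A back-of-envelope sketch tying together numbers from this abstract: interpolating over-ranging length between the two measured pitch endpoints (linearity between them is an assumption, not a claim of the paper) and the share of total DLP that over-ranging represents, taking 600 mGy-cm as an assumed round total consistent with the ~10% figure.

    ```python
    def overrange_length_cm(pitch):
        """Linear interpolation between the measured endpoints (pitch 0.5 and 1.5)."""
        p0, l0, p1, l1 = 0.5, 4.38, 1.5, 6.72
        return l0 + (l1 - l0) * (pitch - p0) / (p1 - p0)

    overrange_dlp = 60.0   # mGy-cm attributed to over-ranging (from the abstract)
    total_dlp = 600.0      # assumed round total for a routine abdominal scan

    print(f"over-ranging at pitch 1.0 ~ {overrange_length_cm(1.0):.2f} cm")
    print(f"over-ranging share ~ {overrange_dlp / total_dlp:.0%} of total DLP")
    ```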

  2. Dosimetric evaluation of a Monte Carlo IMRT treatment planning system incorporating the MIMiC

    NASA Astrophysics Data System (ADS)

    Rassiah-Szegedi, P.; Fuss, M.; Sheikh-Bagheri, D.; Szegedi, M.; Stathakis, S.; Lancaster, J.; Papanikolaou, N.; Salter, B.

    2007-12-01

    The high dose per fraction delivered to lung lesions in stereotactic body radiation therapy (SBRT) demands high dose calculation and delivery accuracy. The inhomogeneous density in the thoracic region along with the small fields used typically in intensity-modulated radiation therapy (IMRT) treatments poses a challenge in the accuracy of dose calculation. In this study we dosimetrically evaluated a pre-release version of a Monte Carlo planning system (PEREGRINE 1.6b, NOMOS Corp., Cranberry Township, PA), which incorporates the modeling of serial tomotherapy IMRT treatments with the binary multileaf intensity modulating collimator (MIMiC). The aim of this study is to show the validation process of PEREGRINE 1.6b since it was used as a benchmark to investigate the accuracy of doses calculated by a finite size pencil beam (FSPB) algorithm for lung lesions treated on the SBRT dose regime via serial tomotherapy in our previous study. Doses calculated by PEREGRINE were compared against measurements in homogeneous and inhomogeneous materials carried out on a Varian 600C with a 6 MV photon beam. Phantom studies simulating various sized lesions were also carried out to explain some of the large dose discrepancies seen in the dose calculations with small lesions. Doses calculated by PEREGRINE agreed to within 2% in water and up to 3% for measurements in an inhomogeneous phantom containing lung, bone and unit density tissue.

  3. Exposure-response relationship and risk assessment for cognitive deficits in early welding-induced manganism.

    PubMed

    Park, Robert M; Bowler, Rosemarie M; Roels, Harry A

    2009-10-01

    The exposure-response relationship for manganese (Mn)-induced adverse nervous system effects is not well described. Symptoms and neuropsychological deficits associated with early manganism were previously reported for welders constructing bridge piers during 2003 to 2004. A reanalysis using improved exposure and work history information and diverse exposure metrics is presented here. Ten neuropsychological performance measures were examined, including working memory index (WMI), verbal intelligence quotient, design fluency, Stroop color word test, Rey-Osterrieth Complex Figure, and Auditory Consonant Trigram tests. Mn blood levels and air sampling data in the form of both personal and area samples were available. The exposure metrics used were cumulative exposure to Mn, body burden assuming simple first-order kinetics for Mn elimination, and cumulative burden (effective dose). Benchmark doses were calculated. Burden with a half-life of about 150 days was the best predictor of blood Mn. WMI performance declined by 3.6 (normal = 100, SD = 15) for each 1.0 mg/m3 × month of exposure (P = 0.02, one-tailed). At the group mean exposure metric (burden; half-life = 275 days), WMI performance was at the lowest 17th percentile of normal, and at the maximum observed metric, performance was at the lowest 2.5th percentile. Four other outcomes also exhibited statistically significant associations (verbal intelligence quotient, verbal comprehension index, design fluency, Stroop color word test); no dose-rate effect was observed for three of the five outcomes. A risk assessment performed for the five stronger effects, choosing various percentiles of normal performance to represent impairment, identified benchmark doses for a 2-year exposure leading to 5% excess impairment prevalence in the range of 0.03 to 0.15 mg/m3 (30 to 150 µg/m3) total Mn in air, levels that are far below those permitted by current occupational standards. More than one-third of workers would be impaired after working 2 years at 0.2 mg/m3 Mn (the current threshold limit value).
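
    The "burden" metric described above (body burden with simple first-order elimination) can be sketched as a running sum with exponential decay; the half-life conversion and the monthly exposure history below are invented for illustration.

    ```python
    import math

    def burden_series(monthly_exposures, half_life_months):
        """Running body burden with first-order elimination (per-month steps)."""
        retention = math.exp(-math.log(2) / half_life_months)
        burden, series = 0.0, []
        for x in monthly_exposures:
            burden = burden * retention + x  # decay the stock, add this month's intake
            series.append(round(burden, 3))
        return series

    # a ~150-day half-life is roughly 5 months; exposure history in mg/m3-months
    print(burden_series([0.1, 0.1, 0.2, 0.2, 0.15, 0.15], half_life_months=5.0))
    ```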

  4. Inverse treatment planning for spinal robotic radiosurgery: an international multi‐institutional benchmark trial

    PubMed Central

    Wang, Lei; Baus, Wolfgang; Grimm, Jimm; Lacornerie, Thomas; Nilsson, Joakim; Luchkovskyi, Sergii; Cano, Isabel Palazon; Shou, Zhenyu; Ayadi, Myriam; Treuer, Harald; Viard, Romain; Siebert, Frank‐Andre; Chan, Mark K.H.; Hildebrandt, Guido; Dunst, Jürgen; Imhoff, Detlef; Wurster, Stefan; Wolff, Robert; Romanelli, Pantaleo; Lartigau, Eric; Semrau, Robert; Soltys, Scott G.; Schweikard, Achim

    2016-01-01

    Stereotactic radiosurgery (SRS) is the accurate, conformal delivery of high-dose radiation to well-defined targets while minimizing normal structure doses via steep dose gradients. While inverse treatment planning (ITP) with computerized optimization algorithms is routine, many aspects of the planning process remain user-dependent. We performed an international, multi-institutional benchmark trial to study planning variability and to analyze preferable ITP practice for spinal robotic radiosurgery. Ten SRS treatment plans were generated for a complex-shaped spinal metastasis with 21 Gy in 3 fractions and tight constraints for spinal cord (V14Gy<2 cc, V18Gy<0.1 cc) and target (coverage >95%). The resulting plans were rated on a scale from 1 to 4 (excellent-poor) in five categories (constraint compliance, optimization goals, low-dose regions, ITP complexity, and clinical acceptability) by a blinded review panel. Additionally, the plans were mathematically rated based on plan indices (critical structure and target doses, conformity, monitor units, normal tissue complication probability, and treatment time) and compared to the human rankings. The treatment plans and the reviewers' rankings varied substantially among the participating centers. The average mean overall rank was 2.4 (1.2-4.0) and 8/10 plans were rated excellent in at least one category by at least one reviewer. The mathematical rankings agreed with the mean overall human rankings in 9/10 cases, pointing toward the possibility of purely mathematical plan quality comparison. The final rankings revealed that a plan with a well-balanced trade-off among all planning objectives was preferred for treatment by most participants, reviewers, and the mathematical ranking system. Furthermore, this plan was generated with simple planning techniques. Our multi-institutional planning study found wide variability in ITP approaches for spinal robotic radiosurgery. The participants', reviewers', and mathematical match on preferable treatment plans and ITP techniques indicates that agreement on treatment planning and plan quality can be reached for spinal robotic radiosurgery. PACS number(s): 87.55.de PMID:27167291

  5. Antibody-protein interactions: benchmark datasets and prediction tools evaluation

    PubMed Central

    Ponomarenko, Julia V; Bourne, Philip E

    2007-01-01

    Background: The ability to predict antibody binding sites (aka antigenic determinants or B-cell epitopes) for a given protein is a precursor to new vaccine design and diagnostics. Among the various methods of B-cell epitope identification, X-ray crystallography is one of the most reliable. Using these experimental data, computational methods exist for B-cell epitope prediction. As the number of structures of antibody-protein complexes grows, further interest in prediction methods using 3D structure is anticipated. This work aims to establish a benchmark for 3D structure-based epitope prediction methods. Results: Two B-cell epitope benchmark datasets inferred from the 3D structures of antibody-protein complexes were defined. The first is a dataset of 62 representative 3D structures of protein antigens with inferred structural epitopes. The second is a dataset of 82 structures of antibody-protein complexes containing different structural epitopes. Using these datasets, eight web-servers developed for antibody and protein binding site prediction have been evaluated. In no method did performance exceed 40% precision and 46% recall. The values of the area under the receiver operating characteristic curve for the evaluated methods were about 0.6 for the ConSurf, DiscoTope, and PPI-PRED methods and above 0.65 but not exceeding 0.70 for protein-protein docking methods when the best of the top ten models for the bound docking were considered; the remaining methods performed close to random. The benchmark datasets are included as a supplement to this paper. Conclusion: It may be possible to improve epitope prediction methods through training on datasets which include only immune epitopes and through utilizing more features characterizing epitopes, for example, the evolutionary conservation score. Notwithstanding, the overall poor performance may reflect the generality of antigenicity and hence the inability to decipher B-cell epitopes as an intrinsic feature of the protein. It is an open question as to whether ultimately discriminatory features can be found. PMID:17910770
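
    For reference, the precision and recall figures quoted above follow the usual definitions over per-residue predictions; a toy example with invented prediction and label vectors (not the benchmark data):

    ```python
    predicted = [1, 1, 0, 1, 0, 0, 1, 0]  # 1 = residue predicted to be epitopic
    actual    = [1, 0, 0, 1, 1, 0, 0, 0]  # 1 = residue in the structural epitope

    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)

    print(f"precision = {tp / (tp + fp):.2f}, recall = {tp / (tp + fn):.2f}")
    ```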

  6. Automatic Keyword Extraction from Individual Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.

    2010-05-03

    This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
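
    A minimal sketch of the kind of scoring the paper describes (candidate keywords as runs of content words between stop words, scored by degree/frequency word statistics); the stop list and input sentence are toy stand-ins, not the benchmark corpus.

    ```python
    import re
    from collections import defaultdict

    STOP = {"of", "for", "and", "the", "a", "in", "is", "are", "from", "to"}

    def candidate_phrases(text):
        """Split on stop words; runs of content words become candidate keywords."""
        phrases, current = [], []
        for w in re.findall(r"[a-z0-9]+", text.lower()):
            if w in STOP:
                if current:
                    phrases.append(tuple(current))
                    current = []
            else:
                current.append(w)
        if current:
            phrases.append(tuple(current))
        return phrases

    def keyword_scores(text):
        freq, degree = defaultdict(int), defaultdict(int)
        phrases = candidate_phrases(text)
        for ph in phrases:
            for w in ph:
                freq[w] += 1
                degree[w] += len(ph)  # degree counts co-occurring words incl. itself
        word_score = {w: degree[w] / freq[w] for w in freq}
        return {" ".join(ph): sum(word_score[w] for w in ph) for ph in set(phrases)}

    print(keyword_scores("automatic extraction of keywords from individual documents"))
    ```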

  7. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    NASA Astrophysics Data System (ADS)

    Jacques, Diederik

    2017-04-01

    As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). Mathematical and numerical complexity, for both the tool itself and the specific conceptual model, can increase rapidly. Therefore, numerical verification of such models is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues. Benchmarks defined for purely mathematical reasons were excluded. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different subproblems. The latter typically benchmarked individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, at least three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. Furthermore, it illustrates the use of these types of models for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.

  8. Probabilistic risk assessment of exposure to leucomalachite green residues from fish products.

    PubMed

    Chu, Yung-Lin; Chimeddulam, Dalaijamts; Sheen, Lee-Yan; Wu, Kuen-Yuh

    2013-12-01

    To assess the potential risk of human exposure to carcinogenic leucomalachite green (LMG) due to fish consumption, a probabilistic risk assessment was conducted for adolescent, adult and senior adult consumers in Taiwan. The residues of LMG, with a mean concentration of 13.378±20.56 μg kg⁻¹ (BFDA, 2009) in fish, were converted into dose, considering the fish intake reported for three consumer groups by NAHSIT (1993-1996) and the body weight of an average individual of each group. The lifetime average and high 95th percentile dietary intakes of LMG from fish consumption for Taiwanese consumers were estimated at up to 0.0135 and 0.0451 μg kg-bw⁻¹ day⁻¹, respectively. A human equivalent dose (HED) of 2.875 mg kg-bw⁻¹ day⁻¹, obtained from a lower-bound benchmark dose (BMDL10) in mice by interspecies extrapolation, was linearly extrapolated to an oral cancer slope factor (CSF) of 0.035 (mg kg-bw⁻¹ day⁻¹)⁻¹ for humans. Although the assumptions and methods differ, the lifetime cancer risks, varying from 3×10⁻⁷ to 1.6×10⁻⁶, were comparable to the margins of exposure (MOEs), which varied from 410,000 to 4,800,000. In conclusion, Taiwanese fish consumers with the 95th percentile LADD of LMG have a greater risk of liver cancer, and risk management action is needed in Taiwan. Copyright © 2013 Elsevier Ltd. All rights reserved.
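
    The exposure-to-risk arithmetic in this abstract can be reproduced approximately as follows; the fish intake and body weight are assumed placeholder values, while the concentration, CSF, and HED come from the abstract.

    ```python
    conc_ug_per_kg = 13.378     # mean LMG residue in fish (abstract)
    intake_kg_per_day = 0.030   # assumed daily fish intake
    body_weight_kg = 60.0       # assumed body weight

    ladd_ug = conc_ug_per_kg * intake_kg_per_day / body_weight_kg  # ug/kg-bw/day
    ladd_mg = ladd_ug / 1000.0

    csf = 0.035                 # (mg/kg-bw/day)^-1, from the abstract
    hed = 2.875                 # mg/kg-bw/day, from the abstract

    print(f"LADD ~ {ladd_ug:.4f} ug/kg-bw/day")
    print(f"excess lifetime cancer risk ~ {csf * ladd_mg:.1e}")
    print(f"margin of exposure ~ {hed / ladd_mg:,.0f}")
    ```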

  9. Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.

    NASA Astrophysics Data System (ADS)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, seven in total. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  10. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus the development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)

  11. Benchmarking of a treatment planning system for spot scanning proton therapy: Comparison and analysis of robustness to setup errors of photon IMRT and proton SFUD treatment plans of base of skull meningioma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.

    Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.

  12. 77 FR 15969 - Waybill Data Released in Three-Benchmark Rail Rate Proceedings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-19

    ... confidentiality of the contract rates, as required by 49 U.S.C. 11904. Background In Simplified Standards for Rail Rate Cases (Simplified Standards), EP 646 (Sub-No. 1) (STB served Sept. 5, 2007), aff'd sub nom. CSX...\\ Under the Three-Benchmark method as revised in Simplified Standards, each party creates and proffers to...

  13. Generation of openEHR Test Datasets for Benchmarking.

    PubMed

    El Helou, Samar; Karvonen, Tuukka; Yamamoto, Goshiro; Kume, Naoto; Kobayashi, Shinji; Kondo, Eiji; Hiragi, Shusuke; Okamoto, Kazuya; Tamura, Hiroshi; Kuroda, Tomohiro

    2017-01-01

    openEHR is a widely used EHR specification. Given its technology-independent nature, different approaches for implementing openEHR data repositories exist. Public openEHR datasets are needed to conduct benchmark analyses over different implementations. To address their current unavailability, we propose a method for generating openEHR test datasets that can be publicly shared and used.

  14. The Medical Library Association Benchmarking Network: development and implementation*

    PubMed Central

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. Methods: The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. Results: The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. Conclusions: The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program. PMID:16636702

  15. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.

  16. Correlation of In Vivo Versus In Vitro Benchmark Doses (BMDs) Derived From Micronucleus Test Data: A Proof of Concept Study.

    PubMed

    Soeteman-Hernández, Lya G; Fellows, Mick D; Johnson, George E; Slob, Wout

    2015-12-01

    In this study, we explored the applicability of using in vitro micronucleus (MN) data from human lymphoblastoid TK6 cells to derive in vivo genotoxicity potency information. Nineteen chemicals covering a broad spectrum of genotoxic modes of action were tested in an in vitro MN test in TK6 cells under the same study protocol. Several of these chemicals were considered to need metabolic activation, and these were administered in the presence of S9. The benchmark dose (BMD) approach was applied using the dose-response modeling program PROAST to estimate the genotoxic potency from the in vitro data. The resulting in vitro BMDs were compared with previously derived BMDs from in vivo MN and carcinogenicity studies. A proportional correlation was observed between the BMDs from the in vitro MN and the BMDs from the in vivo MN assays. Further, a clear correlation was found between the BMDs from in vitro MN and the associated BMDs for malignant tumors. Although these results are based on only 19 compounds, they show that genotoxicity potencies estimated from in vitro tests may result in useful information regarding in vivo genotoxic potency, as well as expected cancer potency. Extension of the number of compounds and further investigation of metabolic activation (S9) and of other toxicokinetic factors would be needed to validate our initial conclusions. However, this initial work suggests that this approach could be used for in vitro to in vivo extrapolations, which would support the reduction of animals used in research (3Rs: replacement, reduction, and refinement).

  17. Hepatic transcriptomic alterations for N,N-dimethyl-p-toluidine (DMPT) and p-toluidine after 5-day exposure in rats.

    PubMed

    Dunnick, June K; Shockley, Keith R; Morgan, Daniel L; Brix, Amy; Travlos, Gregory S; Gerrish, Kevin; Michael Sanders, J; Ton, T V; Pandiri, Arun R

    2017-04-01

    N,N-dimethyl-p-toluidine (DMPT), an accelerant for methyl methacrylate monomers in medical devices, was a liver carcinogen in male and female F344/N rats and B6C3F1 mice in a 2-year oral exposure study. p-Toluidine, a structurally related chemical, was a liver carcinogen in mice but not in rats in an 18-month feed exposure study. In the current study, liver transcriptomic data were used to characterize mechanisms in DMPT and p-toluidine liver toxicity and for conducting benchmark dose (BMD) analysis. Male F344/N rats were exposed orally to DMPT or p-toluidine (0, 1, 6, 20, 60 or 120 mg/kg/day) for 5 days. The liver was examined for lesions and transcriptomic alterations. Both chemicals caused mild hepatic toxicity at 60 and 120 mg/kg/day and dose-related transcriptomic alterations in the liver. There were 511 liver transcripts differentially expressed for DMPT and 354 for p-toluidine at 120 mg/kg/day (false discovery rate threshold of 5%). The liver transcriptomic alterations were characteristic of an anti-oxidative damage response (activation of the Nrf2 pathway) and hepatic toxicity. The top cellular processes in gene ontology (GO) categories altered in livers exposed to DMPT or p-toluidine were used for BMD calculations. The benchmark dose lower confidence bounds (BMDLs) for these chemicals were 2 mg/kg/day for DMPT and 7 mg/kg/day for p-toluidine. These studies show the promise of using 5-day target organ transcriptomic data to identify chemical-induced molecular changes that can serve as markers for preliminary toxicity risk assessment.

  18. Use of benchmarking techniques to justify the evolution of antibiotic management programs in healthcare systems.

    PubMed

    Schentag, J J; Paladino, J A; Birmingham, M C; Zimmer, G; Carr, J R; Hanson, S C

    1995-01-01

    To apply basic benchmarking techniques to hospital antibiotic expenditures and clinical pharmacy personnel and their duties, and to identify cost-saving strategies for clinical pharmacy services, a prospective survey of 18 hospitals ranging in size from 201 to 942 beds was conducted. Each hospital was asked to provide its antibiotic expenditures and an overview of its clinical pharmacy services, and to describe the duties of clinical pharmacists involved in antibiotic management activities. Specific information was sought on the use of pharmacokinetic dosing services, antibiotic streamlining, and oral switch in each of the hospitals. Most smaller hospitals (< 300 beds) did not employ clinical pharmacists with the specific duties of antibiotic management or streamlining. At these institutions, antibiotic management services consisted of formulary enforcement and aminoglycoside and/or vancomycin dosing services. The larger hospitals we surveyed employed clinical pharmacists designated as antibiotic management specialists, but their usual activities were aminoglycoside and/or vancomycin dosing services and formulary enforcement. In virtually all hospitals, the yearly expenses for antibiotics exceeded those of Millard Fillmore Hospitals by $2,000-3,000 per occupied bed. In a 500-bed hospital, this difference in expenditures would exceed $1.5 million yearly. Millard Fillmore Health System has similar types of patients, but employs clinical pharmacists to perform streamlining and/or oral switch functions at days 2-4, when cultures come back from the laboratory. The antibiotic streamlining and oral switch duties of clinical pharmacy specialists are associated with the majority of cost savings in hospital antibiotic management programs. The savings are considerable; most hospitals with 200-300 beds could readily cost-justify a full-time clinical pharmacist to perform these activities on a daily basis. Expenses of the program would be offset entirely by the reduction in the actual pharmacy expenditures on antibiotics.

  19. A method to improve the nutritional quality of foods and beverages based on dietary recommendations.

    PubMed

    Nijman, C A J; Zijp, I M; Sierksma, A; Roodenburg, A J C; Leenen, R; van den Kerkhoff, C; Weststrate, J A; Meijer, G W

    2007-04-01

    The increasing consumer interest in health prompted Unilever to develop a globally applicable method (Nutrition Score) to evaluate and improve the nutritional composition of its foods and beverages portfolio. Based on (inter)national dietary recommendations, generic benchmarks were developed to evaluate foods and beverages on their content of trans fatty acids, saturated fatty acids, sodium and sugars. High intakes of these key nutrients are associated with undesirable health effects. In principle, the developed generic benchmarks can be applied globally for any food and beverage product. Product category-specific benchmarks were developed when it was not feasible to meet generic benchmarks because of technological and/or taste factors. The whole Unilever global foods and beverages portfolio has been evaluated and actions have been taken to improve the nutritional quality. The advantages of this method over other initiatives to assess the nutritional quality of foods are its basis in the latest nutritional scientific insights and its global applicability. The Nutrition Score is the first simple, transparent and straightforward method that can be applied globally and across all food and beverage categories to evaluate the nutritional composition. It can help food manufacturers to improve the nutritional value of their products. In addition, the Nutrition Score can be a starting point for a powerful front-of-pack health indicator. This can have a significant positive impact on public health, especially when implemented by all food manufacturers.

  20. Benchmarking the MCNP code for Monte Carlo modelling of an in vivo neutron activation analysis system.

    PubMed

    Natto, S A; Lewis, D G; Ryde, S J

    1998-01-01

    The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.

  1. Understanding and benchmarking health service achievement of policy goals for chronic disease

    PubMed Central

    2012-01-01

    Background Key challenges in benchmarking health service achievement of policy goals in areas such as chronic disease are: 1) developing indicators and understanding how policy goals might work as indicators of service performance; 2) developing methods for economically collecting and reporting stakeholder perceptions; 3) combining and sharing data about the performance of organizations; 4) interpreting outcome measures; 5) obtaining actionable benchmarking information. This study aimed to explore how a new Boolean-based small-N method from the social sciences—Qualitative Comparative Analysis or QCA—could contribute to meeting these internationally shared challenges. Methods A ‘multi-value QCA’ (MVQCA) analysis was conducted of data from 24 senior staff at 17 randomly selected services for chronic disease, who provided perceptions of 1) whether government health services were improving their achievement of a set of statewide policy goals for chronic disease and 2) the efficacy of state health office actions in influencing this improvement. The analysis produced summaries of configurations of perceived service improvements. Results Most respondents observed improvements in most areas but uniformly good improvements across services were not perceived as happening (regardless of whether respondents identified a state health office contribution to that improvement). The sentinel policy goal of using evidence to develop service practice was not achieved at all in four services and appears to be reliant on other kinds of service improvements happening. Conclusions The QCA method suggested theoretically plausible findings and an approach that, with further development, could help meet the five benchmarking challenges. In particular, it suggests that achievement of one policy goal may be reliant on achievement of another goal in complex ways that the literature has not yet fully accommodated but which could help prioritize policy goals. QCA's weaknesses lie wherever traditional large-N statistical methods are needed and feasible, and in its more complex findings, which are harder to validate empirically. It should be considered a potentially valuable adjunct method for benchmarking complex health policy goals such as those for chronic disease. PMID:23020943
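
    For readers unfamiliar with QCA, the sketch below shows the truth-table step at the core of a (multi-value) QCA: cases are grouped by their configuration of condition values, and each configuration's consistency with the outcome is tallied. The conditions, values, and data are hypothetical, not the study's.

    ```python
    import pandas as pd

    # Minimal truth-table step of a (multi-value) QCA: group cases by their
    # configuration of condition values and see how consistently each
    # configuration is associated with the outcome.  Data are hypothetical.
    cases = pd.DataFrame({
        "evidence_use":  [0, 1, 1, 2, 2, 1, 0, 2],   # multi-value condition (0/1/2)
        "state_support": [0, 0, 1, 1, 1, 0, 1, 0],   # crisp (binary) condition
        "improved":      [0, 0, 1, 1, 1, 0, 0, 1],   # outcome
    })

    truth_table = (cases
                   .groupby(["evidence_use", "state_support"])["improved"]
                   .agg(n="size", consistency="mean")
                   .reset_index())
    print(truth_table)
    ```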

  2. Effects of benchmarking on the quality of type 2 diabetes care: results of the OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study in Greece

    PubMed Central

    Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.

    2015-01-01

    Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking compared with the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking group (from 5.9% to 15.0%) than in the control group (from 2.7% to 8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline (p < 0.001 for all comparisons). Conclusion: Benchmarking may be a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating there are still unmet needs in the management of T2DM. PMID:26445642

  3. Validation of electronic structure methods for isomerization reactions of large organic molecules.

    PubMed

    Luo, Sijie; Zhao, Yan; Truhlar, Donald G

    2011-08-14

    In this work the ISOL24 database of isomerization energies of large organic molecules presented by Huenerbein et al. [Phys. Chem. Chem. Phys., 2010, 12, 6940] is updated, resulting in the new benchmark database called ISOL24/11, and this database is used to test 50 electronic model chemistries. To accomplish the update, the very expensive and highly accurate CCSD(T)-F12a/aug-cc-pVDZ method is first exploited to investigate a six-reaction subset of the 24 reactions, and by comparison of various methods with the benchmark, MCQCISD-MPW is confirmed to be of high accuracy. The final ISOL24/11 database is composed of six reaction energies calculated by CCSD(T)-F12a/aug-cc-pVDZ and 18 calculated by MCQCISD-MPW. We then tested 40 single-component density functionals (both local and hybrid), eight doubly hybrid functionals, and two other methods against ISOL24/11. It is found that the SCS-MP3/CBS method, which was used as the benchmark for the original ISOL24, has a mean unsigned error (MUE) of 1.68 kcal mol⁻¹, which is close to, or larger than, that of some of the best density functionals tested. Using the new benchmark, we find ωB97X-D and MC3MPWB to be the best single-component and doubly hybrid functionals, respectively, with PBE0-D3 and MC3MPW performing almost as well. The best single-component density functionals without molecular mechanics dispersion-like terms are M08-SO, M08-HX, M05-2X, and M06-2X. The best single-component density functionals without Hartree-Fock exchange are M06-L-D3 when MM terms are included and M06-L when they are not.
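
    The ranking metric here is the mean unsigned error against the benchmark database; a minimal sketch of that computation follows, with hypothetical reaction energies rather than ISOL24/11 values.

    ```python
    import numpy as np

    # Mean unsigned error (MUE) of a method's reaction energies against a
    # benchmark database -- the metric used to rank methods here.  Values
    # are hypothetical, not ISOL24/11 entries (kcal/mol).
    benchmark = np.array([14.2, 9.8, 21.5, 6.3, 11.7])
    method    = np.array([15.0, 8.9, 22.8, 6.0, 13.1])

    mue = np.mean(np.abs(method - benchmark))
    print(f"MUE = {mue:.2f} kcal/mol")
    ```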

  4. Radioactive impacts on nekton species in the Northwest Pacific and humans more than one year after the Fukushima nuclear accident.

    PubMed

    Men, Wu; Deng, Fangfang; He, Jianhua; Yu, Wen; Wang, Fenfen; Li, Yiliang; Lin, Feng; Lin, Jing; Lin, Longshan; Zhang, Yusheng; Yu, Xingguang

    2017-10-01

    This study investigated the radioactive impacts on 10 nekton species in the Northwest Pacific more than one year after the Fukushima Nuclear Accident (FNA) from the two perspectives of contamination and harm. Squids in particular were used for spatial and temporal comparisons to demonstrate the impacts of the FNA. The radiation doses to nekton species and humans were assessed to link this radioactive contamination to possible harm. The total dose rates to nekton species were lower than the ERICA ecosystem screening benchmark of 10 μGy/h. Further dose-contribution analysis showed that the internal doses from the naturally occurring nuclide (210)Po were the main dose contributor. The dose rates from (134)Cs, (137)Cs, (90)Sr and (110m)Ag were approximately three or four orders of magnitude lower than those from naturally occurring radionuclides. The (210)Po-derived dose was also the main contributor to the total human dose from immersion in the seawater and the ingestion of nekton species. The human doses from anthropogenic radionuclides were ~100 to ~10,000 times lower than the doses from naturally occurring radionuclides. A morbidity assessment based on the Linear No Threshold assumption of exposure showed 7 additional cancer cases per 100,000,000 similarly exposed people. Taken together, there is no need for concern regarding radioactive harm in the open ocean area of the Northwest Pacific.

  5. Decreasing unnecessary utilization in acute bronchiolitis care: results from the value in inpatient pediatrics network.

    PubMed

    Ralston, Shawn; Garber, Matthew; Narang, Steve; Shen, Mark; Pate, Brian; Pope, John; Lossius, Michele; Croland, Trina; Bennett, Jeff; Jewell, Jennifer; Krugman, Scott; Robbins, Elizabeth; Nazif, Joanne; Liewehr, Sheila; Miller, Ansley; Marks, Michelle; Pappas, Rita; Pardue, Jeanann; Quinonez, Ricardo; Fine, Bryan R; Ryan, Michael

    2013-01-01

    Acute viral bronchiolitis is the most common diagnosis resulting in hospital admission in pediatrics. Utilization of non-evidence-based therapies and testing remains common despite a large volume of evidence to guide quality improvement efforts. Our objective was to reduce utilization of unnecessary therapies in the inpatient care of bronchiolitis across a diverse network of clinical sites. We formed a voluntary quality improvement collaborative of pediatric hospitalists for the purpose of benchmarking the use of bronchodilators, steroids, chest radiography, chest physiotherapy, and viral testing in bronchiolitis using hospital administrative data. We shared resources within the network, including protocols, scores, order sets, and key bibliographies, and established group norms for decreasing utilization. Aggregate data on 11,568 hospitalizations for bronchiolitis from 17 centers were analyzed for this report. The network was organized in 2008. By 2010, we saw a 46% reduction in overall volume of bronchodilators used, an absolute decrease of 3.4 doses per patient (95% confidence interval [CI] 1.4-5.8). Overall exposure to any dose of bronchodilator decreased by 12 percentage points as well (95% CI 5%-25%). There was also a statistically significant decline in chest physiotherapy usage, but not for steroids, chest radiography, or viral testing. Benchmarking within a voluntary pediatric hospitalist collaborative facilitated decreased utilization of bronchodilators and chest physiotherapy in bronchiolitis.

  6. Benchmark dose for cadmium exposure and elevated N-acetyl-β-D-glucosaminidase: a meta-analysis.

    PubMed

    Liu, CuiXia; Li, YuBiao; Zhu, ChunShui; Dong, ZhaoMin; Zhang, Kun; Zhao, YanBin; Xu, YiLu

    2016-10-01

    Cadmium (Cd) is a well-known nephrotoxic contaminant, and N-acetyl-β-D-glucosaminidase (NAG) is considered to be an early and sensitive marker of tubular dysfunction. The link between Cd exposure and NAG level enables us to derive the benchmark dose (BMD) of Cd. Although several reports have already documented urinary Cd (UCd)-NAG relationships and BMD estimations, high heterogeneity arises from the sub-populations (age, gender, and ethnicity) and BMD methodologies employed. To clarify the influences that these variables exert, a random effects meta-analysis was first performed in this study to correlate UCd and NAG based on 92 datasets collected from 30 publications. This established correlation (Ln(NAG) = 0.51 × Ln(UCd) + 0.83) was then applied to derive a UCd BMD5 of 1.76 μg/g creatinine and a 95% lower confidence limit (BMDL5) of 1.67 μg/g creatinine. While the regressions for different age groups and genders differed slightly, it is age and not gender that significantly affects BMD estimations. Ethnic differences may require further investigation given that limited data are currently available. Based on a comprehensive and systematic literature review, this study is a new attempt to quantify the UCd-NAG link and estimate the BMD.
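
    Because the abstract states the fitted relation explicitly, it can be inverted to map between UCd and predicted NAG. The sketch below does only that inversion; the paper's BMD5/BMDL5 values come from a formal benchmark-response definition and confidence-limit calculation, not from the illustrative cutoff used here.

    ```python
    import math

    # Fitted meta-analysis relation from the abstract:
    #   ln(NAG) = 0.51 * ln(UCd) + 0.83,
    # with UCd in micrograms per gram creatinine.
    SLOPE, INTERCEPT = 0.51, 0.83

    def nag_from_ucd(ucd):
        """Predicted NAG level for a given urinary cadmium concentration."""
        return math.exp(SLOPE * math.log(ucd) + INTERCEPT)

    def ucd_from_nag(nag):
        """Invert the relation: urinary Cd giving a target NAG level."""
        return math.exp((math.log(nag) - INTERCEPT) / SLOPE)

    # Hypothetical cutoff chosen only for illustration.
    print(nag_from_ucd(1.76))   # predicted NAG at the reported BMD5
    print(ucd_from_nag(3.0))    # UCd at which predicted NAG reaches 3.0
    ```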

  7. [Benchmarking and other functions of ROM: back to basics].

    PubMed

    Barendregt, M

    2015-01-01

    Since 2011 outcome data in the Dutch mental health care have been collected on a national scale. This has led to confusion about the position of benchmarking in the system known as routine outcome monitoring (ROM). To provide insight into the various objectives and uses of aggregated outcome data, a qualitative review was performed and the findings were analysed. Benchmarking is a strategy for finding best practices and for improving efficacy, and it belongs to the domain of quality management. Benchmarking involves comparing outcome data by means of instrumentation and is relatively tolerant with regard to the validity of the data. Although benchmarking is a function of ROM, it must be differentiated from other functions of ROM. Clinical management, public accountability, research, payment for performance and information for patients are all functions of ROM which require different ways of data feedback and which make different demands on the validity of the underlying data. Benchmarking is often wrongly regarded as being simply a synonym for 'comparing institutions'. It is, however, a method which includes many more factors; it can be used to improve quality, has a more flexible approach to the validity of outcome data, and is less concerned than other ROM functions with funding and the amount of information given to patients. Benchmarking can make good use of currently available outcome data.

  8. Dual linear structured support vector machine tracking method via scale correlation filter

    NASA Astrophysics Data System (ADS)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on structured support vector machines (SVMs) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits their overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, composed of a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
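
    The role of the fast Fourier transform in correlation-filter detection is that correlation becomes elementwise multiplication in the Fourier domain. The sketch below illustrates that generic mechanism only; it is not the DLSSVM tracker or its scale filter.

    ```python
    import numpy as np

    # Core of a correlation-filter detector: responses are computed in the
    # Fourier domain, where cross-correlation becomes elementwise
    # multiplication.  A sketch of the generic idea, not the DLSSVM tracker.
    def response_map(template_f, patch):
        patch_f = np.fft.fft2(patch)
        return np.real(np.fft.ifft2(np.conj(template_f) * patch_f))

    rng = np.random.default_rng(0)
    target = rng.random((32, 32))
    template_f = np.fft.fft2(target)

    resp = response_map(template_f, target)  # peak should be at zero shift
    peak = np.unravel_index(np.argmax(resp), resp.shape)
    print("peak location:", peak)
    ```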

  9. Automated benchmarking of peptide-MHC class I binding predictions.

    PubMed

    Trolle, Thomas; Metushi, Imir G; Greenbaum, Jason A; Kim, Yohan; Sidney, John; Lund, Ole; Sette, Alessandro; Peters, Bjoern; Nielsen, Morten

    2015-07-01

    Numerous in silico methods predicting peptide binding to major histocompatibility complex (MHC) class I molecules have been developed over the last decades. However, the multitude of available prediction tools makes it non-trivial for the end-user to select which tool to use for a given task. To provide a solid basis on which to compare different prediction tools, we here describe a framework for the automated benchmarking of peptide-MHC class I binding prediction tools. The framework runs weekly benchmarks on data that are newly entered into the Immune Epitope Database (IEDB), giving the public access to frequent, up-to-date performance evaluations of all participating tools. To overcome potential selection bias in the data included in the IEDB, a strategy was implemented that suggests a set of peptides for which different prediction methods give divergent predictions as to their binding capability. Upon experimental binding validation, these peptides entered the benchmark study. The benchmark has run for 15 weeks and includes evaluation of 44 datasets covering 17 MHC alleles and more than 4000 peptide-MHC binding measurements. Inspection of the results allows the end-user to make educated selections between participating tools. Of the four participating servers, NetMHCpan performed the best, followed by ANN, SMM and finally ARB. Up-to-date performance evaluations of each server can be found online at http://tools.iedb.org/auto_bench/mhci/weekly. All prediction tool developers are invited to participate in the benchmark. Sign-up instructions are available at http://tools.iedb.org/auto_bench/mhci/join. mniel@cbs.dtu.dk or bpeters@liai.org Supplementary data are available at Bioinformatics online.

  10. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for evaluation of spike-sorting algorithms.

    PubMed

    Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T

    2015-04-30

    New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca; Kong, Weidong; Brundage, Michael

    Purpose: Estimates of the appropriate rate of use of radiation therapy (RT) are required for planning and monitoring access to RT. Our objective was to compare estimates of the appropriate rate of use of RT derived from mathematical models with the rate observed in a population of patients with optimal access to RT. Methods and Materials: The rate of use of RT within 1 year of diagnosis (RT_1Y) was measured in the 134,541 cases diagnosed in Ontario between November 2009 and October 2011. The lifetime rate of use of RT (RT_LIFETIME) was estimated by the multicohort utilization table method. Poisson regression was used to evaluate potential barriers to access to RT and to identify a benchmark subpopulation with unimpeded access to RT. Rates of use of RT were measured in the benchmark subpopulation and compared with published evidence-based estimates of the appropriate rates. Results: The benchmark rate for RT_1Y, observed under conditions of optimal access, was 33.6% (95% confidence interval [CI], 33.0%-34.1%), and the benchmark for RT_LIFETIME was 41.5% (95% CI, 41.2%-42.0%). Benchmarks for RT_LIFETIME for 4 of 5 selected sites and for all cancers combined were significantly lower than the corresponding evidence-based estimates. Australian and Canadian evidence-based estimates of RT_LIFETIME for 5 selected sites differed widely. RT_LIFETIME in the overall population of Ontario was just 7.9% short of the benchmark but 20.9% short of the Australian evidence-based estimate of the appropriate rate. Conclusions: Evidence-based estimates of the appropriate lifetime rate of use of RT may overestimate the need for RT in Ontario.

  12. A publicly available benchmark for biomedical dataset retrieval: the reference standard for the 2016 bioCADDIE dataset retrieval challenge

    PubMed Central

    Gururaj, Anupama E.; Chen, Xiaoling; Pournejati, Saeid; Alter, George; Hersh, William R.; Demner-Fushman, Dina; Ohno-Machado, Lucila

    2017-01-01

    The rapid proliferation of publicly available biomedical datasets has provided abundant resources that are potentially of value as a means to reproduce prior experiments, and to generate and explore novel hypotheses. However, there are a number of barriers to the re-use of such datasets, which are distributed across a broad array of dataset repositories, focusing on different data types and indexed using different terminologies. New methods are needed to enable biomedical researchers to locate datasets of interest within this rapidly expanding information ecosystem, and new resources are needed for the formal evaluation of these methods as they emerge. In this paper, we describe the design and generation of a benchmark for information retrieval of biomedical datasets, which was developed and used for the 2016 bioCADDIE Dataset Retrieval Challenge. In the tradition of the seminal Cranfield experiments, and as exemplified by the Text Retrieval Conference (TREC), this benchmark includes a corpus (biomedical datasets), a set of queries, and relevance judgments relating these queries to elements of the corpus. This paper describes the process through which each of these elements was derived, with a focus on those aspects that distinguish this benchmark from typical information retrieval reference sets. Specifically, we discuss the origin of our queries in the context of a larger collaborative effort, the biomedical and healthCAre Data Discovery Index Ecosystem (bioCADDIE) consortium, and the distinguishing features of biomedical dataset retrieval as a task. The resulting benchmark set has been made publicly available to advance research in the area of biomedical dataset retrieval. Database URL: https://biocaddie.org/benchmark-data PMID:29220453

  13. Automated benchmarking of peptide-MHC class I binding predictions

    PubMed Central

    Trolle, Thomas; Metushi, Imir G.; Greenbaum, Jason A.; Kim, Yohan; Sidney, John; Lund, Ole; Sette, Alessandro; Peters, Bjoern; Nielsen, Morten

    2015-01-01

    Motivation: Numerous in silico methods predicting peptide binding to major histocompatibility complex (MHC) class I molecules have been developed over the last decades. However, the multitude of available prediction tools makes it non-trivial for the end-user to select which tool to use for a given task. To provide a solid basis on which to compare different prediction tools, we here describe a framework for the automated benchmarking of peptide-MHC class I binding prediction tools. The framework runs weekly benchmarks on data that are newly entered into the Immune Epitope Database (IEDB), giving the public access to frequent, up-to-date performance evaluations of all participating tools. To overcome potential selection bias in the data included in the IEDB, a strategy was implemented that suggests a set of peptides for which different prediction methods give divergent predictions as to their binding capability. Upon experimental binding validation, these peptides entered the benchmark study. Results: The benchmark has run for 15 weeks and includes evaluation of 44 datasets covering 17 MHC alleles and more than 4000 peptide-MHC binding measurements. Inspection of the results allows the end-user to make educated selections between participating tools. Of the four participating servers, NetMHCpan performed the best, followed by ANN, SMM and finally ARB. Availability and implementation: Up-to-date performance evaluations of each server can be found online at http://tools.iedb.org/auto_bench/mhci/weekly. All prediction tool developers are invited to participate in the benchmark. Sign-up instructions are available at http://tools.iedb.org/auto_bench/mhci/join. Contact: mniel@cbs.dtu.dk or bpeters@liai.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25717196

  14. Scalable randomized benchmarking of non-Clifford gates

    NASA Astrophysics Data System (ADS)

    Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay

    Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.
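
    As context, the sketch below fits the standard zeroth-order randomized-benchmarking decay model F(m) = A·p^m + B to hypothetical survival data and converts the decay parameter to an average error rate; it illustrates conventional Clifford RB, not the non-Clifford procedure defined in the abstract.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Standard randomized-benchmarking decay model: F(m) = A * p**m + B,
    # where m is the sequence length and p encodes the average error per gate.
    def rb_model(m, A, p, B):
        return A * p**m + B

    # Hypothetical survival-probability data (sequence length, mean fidelity).
    m = np.array([1, 5, 10, 20, 50, 100, 200])
    F = np.array([0.98, 0.95, 0.91, 0.85, 0.70, 0.58, 0.52])

    (A, p, B), _ = curve_fit(rb_model, m, F, p0=[0.5, 0.99, 0.5])

    d = 2**1                    # Hilbert-space dimension for a single qubit
    r = (1 - p) * (d - 1) / d   # average error rate per operation
    print(f"decay p = {p:.4f}, average error rate r = {r:.2e}")
    ```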

  15. Risk assessment of skin lightening cosmetics containing hydroquinone.

    PubMed

    Matsumoto, Mariko; Todo, Hiroaki; Akiyama, Takumi; Hirata-Koizumi, Mutsuko; Sugibayashi, Kenji; Ikarashi, Yoshiaki; Ono, Atsushi; Hirose, Akihiko; Yokoyama, Kazuhito

    2016-11-01

    Following reports on potential risks of hydroquinone (HQ), HQ for skin lightening has been banned or restricted in Europe and the US. In contrast, HQ is not listed as a prohibited or restricted ingredient for cosmetic use in Japan, and many HQ cosmetics are sold without restriction. To assess the risk of systemic effects of HQ, we examined the rat skin permeation rates of four HQ cosmetics (0.3%, 1.0%, 2.6%, and 3.3%). The permeation coefficients ranged from 1.2 × 10⁻⁹ to 3.1 × 10⁻⁷ cm/s, with the highest value exceeding that of the HQ aqueous solution (1.6 × 10⁻⁷ cm/s). After dermal application of the HQ cosmetics to rats, HQ was detected in plasma only after treatment with the cosmetic having the highest permeation coefficient. Absorbed HQ levels in humans treated with this cosmetic were estimated by numerical methods, and we calculated the margin of exposure (MOE) for the estimated dose (0.017 mg/kg-bw/day in proper use) relative to a benchmark dose for rat renal tubule adenomas. The MOE of 559 is judged to be in a range safe for the consumer. However, further consideration may be required for the regulation of cosmetic ingredients.
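
    The margin-of-exposure arithmetic is simple enough to verify directly: MOE = point of departure / estimated dose. The sketch below back-calculates the point of departure implied by the reported MOE and dose; the implied value (~9.5 mg/kg-bw/day) is an inference, not a number quoted from the paper.

    ```python
    # Margin of exposure: MOE = point of departure / estimated human dose.
    # The dose (0.017 mg/kg-bw/day) and MOE (559) are from the abstract; the
    # implied point of departure is back-calculated here, not quoted.

    estimated_dose = 0.017          # mg/kg-bw/day, proper-use scenario
    moe = 559
    pod = moe * estimated_dose      # implied benchmark dose, ~9.5 mg/kg-bw/day
    print(f"implied POD = {pod:.1f} mg/kg-bw/day")

    # An MOE is compared against a target (e.g., 100 for interspecies and
    # intraspecies uncertainty); larger MOEs indicate larger safety margins.
    print("MOE acceptable" if moe >= 100 else "MOE below target")
    ```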

  16. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

    The paired ionization chamber (IC) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends strongly on the accuracy of the accompanying high-energy photon dose. During dose derivation, it is important to evaluate the photon and electron response functions of the two commercially available ionization chambers, denoted TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination, and many treatment planning systems, are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4, verified against one another and against carefully measured values, for a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Energy-dependent response functions of the two chambers were also calculated in a parallel beam with mono-energetic photons and electrons from 20 keV to 20 MeV, using both simple spherical and detailed IC models. The measurements were performed in well-defined fields: (a) four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) a primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron hospital LINAC beams, and (e) a BNCT clinical trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed a lower response than the other codes for photon energies below 0.1 MeV and a similar response above 0.2 MeV (agreement within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams, but for the Mg(Ar) chamber the deviations reached 7.8-16.5% for X-ray beams below 120 kVp. For the BNCT doses of particular interest in this study, where the low-energy photon contribution is harder to ignore, the MCNP model is recognized as the most suitable for simulating the broadly distributed photon-electron and neutron energy responses of the paired ICs. MCNP also provides the best prediction for BNCT source adjustment based on the detector's neutron and photon responses.

  17. Use of computer code for dose distribution studies in A 60CO industrial irradiator

    NASA Astrophysics Data System (ADS)

    Piña-Villalpando, G.; Sloan, D. P.

    1995-09-01

    This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes with an apparent density of 0.13 g/cm3; this product was chosen for its uniform size, large quantity and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique; build-up factors are fitted by geometric progression, and combinatorial geometry is used for the system description. The main modifications to the code concerned the source simulation: point sources were used instead of pencil sources, and energy and anisotropic emission spectra were included. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was 3% lower than the experimental average (14.3 kGy).
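
    A minimal sketch of the point-kernel idea behind QAD-CGGP follows: the dose rate from an isotropic point source falls off with attenuation, build-up, and inverse-square distance. A crude linear build-up factor stands in for the geometric-progression fit, and all parameter values are illustrative approximations, not the paper's.

    ```python
    import math

    # Point-kernel dose rate from an isotropic point source in a medium:
    #   D(r) = S * E * (mu_en/rho) * B(mu*r) * exp(-mu*r) / (4*pi*r^2)
    # A crude linear build-up factor stands in for the geometric-progression
    # fit used by QAD-CGGP; all numbers are illustrative, not the paper's.

    MEV_PER_G_TO_GY = 1.602e-10     # 1 MeV/g deposited = 1.602e-10 Gy

    def dose_rate(S, E, mu, mu_en_rho, r, a=1.0):
        """Dose rate (Gy/s) at distance r (cm) from a point source.

        S: photon emission rate (photons/s), E: photon energy (MeV),
        mu: linear attenuation coefficient (1/cm),
        mu_en_rho: mass energy-absorption coefficient (cm^2/g),
        a: slope of the assumed linear build-up factor B = 1 + a*mu*r.
        """
        mur = mu * r
        buildup = 1.0 + a * mur
        fluence_rate = S * math.exp(-mur) / (4.0 * math.pi * r**2)
        return fluence_rate * E * mu_en_rho * buildup * MEV_PER_G_TO_GY

    # Roughly 60Co-like photons in water (values approximate).
    print(dose_rate(S=3.7e14, E=1.25, mu=0.0632, mu_en_rho=0.0296, r=50.0))
    ```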

  18. Benchmark On Sensitivity Calculation (Phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  19. A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems

    NASA Astrophysics Data System (ADS)

    Abtahi, Amir-Reza; Bijari, Afsane

    2017-03-01

    In this paper, a hybrid meta-heuristic algorithm based on the imperialist competitive algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the harmony-creation process of the HS algorithm to improve the exploitation phase of ICA. In addition, the proposed hybrid algorithm uses SA to balance the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. The comprehensive experiments and statistical analysis on standard benchmark functions certify the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising, and it can be used in several real-life engineering and management problems.
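
    Of the three components, the SA acceptance rule is the easiest to show compactly. The sketch below runs a bare-bones simulated annealing loop on the Rastrigin benchmark function; it illustrates only the generic SA ingredient, not the authors' ICA-HS-SA hybrid.

    ```python
    import math
    import random

    # Rastrigin function: a standard multimodal benchmark, global minimum 0 at x = 0.
    def rastrigin(x):
        return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

    # Minimal simulated annealing, illustrating only the SA component of the hybrid.
    def anneal(f, dim=2, t0=10.0, cooling=0.995, steps=20000, step_size=0.5):
        x = [random.uniform(-5.12, 5.12) for _ in range(dim)]
        fx, t = f(x), t0
        best, fbest = x[:], fx
        for _ in range(steps):
            cand = [xi + random.gauss(0, step_size) for xi in x]
            fc = f(cand)
            # Accept improvements always; accept uphill moves with Boltzmann probability.
            if fc < fx or random.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x[:], fx
            t *= cooling
        return best, fbest

    random.seed(0)
    print(anneal(rastrigin))
    ```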

  20. Benchmarking the efficiency of the Chilean water and sewerage companies: a double-bootstrap approach.

    PubMed

    Molinos-Senante, María; Donoso, Guillermo; Sala-Garrido, Ramon; Villegas, Andrés

    2018-03-01

    Benchmarking the efficiency of water companies is essential for setting water tariffs and promoting their sustainability. In doing so, most previous studies have applied conventional data envelopment analysis (DEA) models. However, DEA is a deterministic method that does not allow identification of the environmental factors influencing efficiency scores. To overcome this limitation, this paper evaluates the efficiency of a sample of Chilean water and sewerage companies by applying a double-bootstrap DEA model. Results evidenced that the ranking of water and sewerage companies changes notably depending on whether efficiency scores are computed with conventional or double-bootstrap DEA models. Moreover, it was found that the percentage of non-revenue water and customer density are factors influencing the efficiency of Chilean water and sewerage companies. This paper illustrates the importance of using a robust and reliable method to increase the relevance of benchmarking tools.
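
    A compact sketch of the underlying pieces follows: an input-oriented CCR DEA efficiency score solved as a linear program, plus a naive bootstrap for illustration. The actual study uses the smoothed double-bootstrap of Simar and Wilson, which additionally corrects bias and regresses scores on environmental variables; the data here are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_efficiency(X, Y, o):
        """Input-oriented CCR DEA efficiency of unit o.

        X: (n_units, n_inputs), Y: (n_units, n_outputs).
        Solves: min theta s.t. sum_j lam_j*x_j <= theta*x_o,
                               sum_j lam_j*y_j >= y_o, lam >= 0.
        """
        n, m = X.shape
        s = Y.shape[1]
        c = np.zeros(n + 1)
        c[0] = 1.0  # minimize theta; variables are [theta, lam_1..lam_n]
        A_ub, b_ub = [], []
        for i in range(m):                  # inputs: X.T @ lam - theta*x_oi <= 0
            A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
            b_ub.append(0.0)
        for r in range(s):                  # outputs: -Y.T @ lam <= -y_or
            A_ub.append(np.concatenate(([0.0], -Y[:, r])))
            b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.x[0]

    # Hypothetical data: 8 utilities, 2 inputs (staff, network km), 1 output (volume).
    rng = np.random.default_rng(1)
    X = rng.uniform(1, 10, size=(8, 2))
    Y = rng.uniform(1, 10, size=(8, 1))
    scores = np.array([dea_efficiency(X, Y, o) for o in range(len(X))])

    # Naive bootstrap of the reference set -- only a sketch; the paper uses
    # the smoothed double-bootstrap of Simar and Wilson.
    boot = []
    for _ in range(200):
        idx = rng.integers(0, len(X), len(X))
        Xb, Yb = np.vstack([X, X[idx]]), np.vstack([Y, Y[idx]])
        boot.append([dea_efficiency(Xb, Yb, o) for o in range(len(X))])
    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
    print(np.round(scores, 3), np.round(lo, 3), np.round(hi, 3))
    ```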

  1. Oral toxicity of 3-nitro-1,2,4-triazol-5-one in rats.

    PubMed

    Crouse, Lee C B; Lent, Emily May; Leach, Glenn J

    2015-01-01

    3-Nitro-1,2,4-triazol-5-one (NTO), an insensitive explosive, was evaluated to assess potential environmental and human health effects. A 14-day oral toxicity study in Sprague-Dawley rats was conducted with NTO in polyethylene glycol-200 by gavage at doses of 0, 250, 500, 1000, 1500, or 2000 mg/kg-d. Body mass and food consumption decreased in males (2000 mg/kg-d), and testes mass was reduced at doses of 500 mg/kg-d and greater. Based on the findings in the 14-day study, a 90-day study was conducted at doses of 0, 30, 100, 315, or 1000 mg/kg-d NTO. There was no effect on food consumption, body mass, or neurobehavioral parameters. Males in the 315 and 1000 mg/kg-d groups had reduced testes mass with associated tubular degeneration and atrophy. The testicular effects were the most sensitive adverse effect and were used to derive a benchmark dose (BMD) of 70 mg/kg-d for a 10% effect level, with a corresponding lower confidence limit (BMDL10) of 40 mg/kg-d.
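
    To make the BMD terminology concrete, the sketch below fits an exponential dose-response model to hypothetical continuous data and solves for the dose giving a 10% change from control (BMD10). Deriving the BMDL additionally requires a confidence-limit method (e.g., profile likelihood), omitted here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Exponential dose-response for a continuous endpoint (relative testes
    # mass), m(d) = a * exp(-b*d).  With a 10% benchmark response, the BMD
    # solves m(BMD) = 0.9*a, i.e. BMD10 = ln(1/0.9) / b.  Data are hypothetical.
    def model(d, a, b):
        return a * np.exp(-b * d)

    dose = np.array([0.0, 30.0, 100.0, 315.0, 1000.0])   # mg/kg-d design points
    mass = np.array([1.00, 0.99, 0.95, 0.72, 0.35])      # hypothetical relative means

    (a, b), _ = curve_fit(model, dose, mass, p0=[1.0, 0.001])
    bmd10 = np.log(1 / 0.9) / b
    print(f"fitted a={a:.3f}, b={b:.5f}, BMD10 = {bmd10:.0f} mg/kg-d")
    ```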

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua Chiaho, E-mail: Chia-Ho.Hua@stjude.org; Merchant, Thomas E.; Gajjar, Amar

    Purpose: To characterize therapy-induced changes in normal-appearing brainstems of childhood brain tumor patients by serial diffusion tensor imaging (DTI). Methods and Materials: We analyzed 109 DTI studies from 20 brain tumor patients, aged 4 to 23 years, with normal-appearing brainstems included in the treatment fields. Those with medulloblastomas, supratentorial primitive neuroectodermal tumors, and atypical teratoid rhabdoid tumors (n = 10) received postoperative craniospinal irradiation (23.4-39.6 Gy) and a cumulative dose of 55.8 Gy to the primary site, followed by four cycles of high-dose chemotherapy. Patients with high-grade gliomas (n = 10) received erlotinib during and after irradiation (54-59.4 Gy). Parametric maps of fractional anisotropy (FA) and apparent diffusion coefficient (ADC) were computed and spatially registered to three-dimensional radiation dose data. Volumes of interest included corticospinal tracts, medial lemnisci, and the pons. Serving as an age-related benchmark for comparison, 37 DTI studies from 20 healthy volunteers, aged 6 to 25 years, were included in the analysis. Results: The median DTI follow-up time was 3.5 years (range, 1.6-5.0 years). The median mean dose to the pons was 56 Gy (range, 7-59 Gy). Three patterns were seen in longitudinal FA and ADC changes: (1) a stable or normal developing time trend, (2) initial deviation from normal with subsequent recovery, and (3) progressive deviation without evidence of complete recovery. The maximal decline in FA often occurred 1.5 to 3.5 years after the start of radiation therapy. A full recovery time trend could be observed within 4 years. Patients with incomplete recovery often had a larger decline in FA within the first year. Radiation dose alone did not predict long-term recovery patterns. Conclusions: Variations existed among individual patients after therapy in the longitudinal evolution of brainstem white matter injury and recovery. Early response in brainstem anisotropy may serve as an indicator of the recovery time trend over 5 years after radiation therapy.

  3. Determining a pre-mining radiological baseline from historic airborne gamma surveys: a case study.

    PubMed

    Bollhöfer, Andreas; Beraldo, Annamarie; Pfitzner, Kirrilly; Esparon, Andrew; Doering, Che

    2014-01-15

    Knowing the baseline level of radioactivity in areas naturally enriched in radionuclides is important in the uranium mining context to assess radiation doses to humans and the environment both during and after mining. This information is particularly useful in rehabilitation planning and developing closure criteria for uranium mines as only radiation doses additional to the natural background are usually considered 'controllable' for radiation protection purposes. In this case study we have tested whether the method of contemporary groundtruthing of a historic airborne gamma survey could be used to determine the pre-mining radiological conditions at the Ranger mine in northern Australia. The airborne gamma survey was flown in 1976 before mining started and groundtruthed using ground gamma dose rate measurements made between 2007 and 2009 at an undisturbed area naturally enriched in uranium (Anomaly 2) located near the Ranger mine. Measurements of (226)Ra soil activity concentration and (222)Rn exhalation flux density at Anomaly 2 were made concurrently with the ground gamma dose rate measurements. Algorithms were developed to upscale the ground gamma data to the same spatial resolution as the historic airborne gamma survey data using a geographic information system, allowing comparison of the datasets. Linear correlation models were developed to estimate the pre-mining gamma dose rates, (226)Ra soil activity concentrations, and (222)Rn exhalation flux densities at selected areas in the greater Ranger region. The modelled levels agreed with measurements made at the Ranger Orebodies 1 and 3 before mining started, and at environmental sites in the region. The conclusion is that our approach can be used to determine baseline radiation levels, and provide a benchmark for rehabilitation of uranium mines or industrial sites where historical airborne gamma survey data are available and an undisturbed radiological analogue exists to groundtruth the data.

  4. Screening-Level Risk Assessment for Styrene-Acrylonitrile (SAN) Trimer Detected in Soil and Groundwater

    PubMed Central

    Kirman, C. R.; Gargas, M. L.; Collins, J. J.; Rowlands, J. C.

    2012-01-01

    A screening-level risk assessment was conducted for styrene-acrylonitrile (SAN) Trimer detected at the Reich Farm Superfund site in Toms River, NJ. Consistent with a screening-level approach, on-site and off-site exposure scenarios were evaluated using assumptions that are expected to overestimate actual exposures and hazards at the site. Environmental sampling data collected for soil and groundwater were used to estimate exposure point concentrations. Several exposure scenarios were evaluated to assess potential on-site and off-site exposures, using parameter values for exposures to soil (oral, inhalation of particulates, and dermal contact) and groundwater (oral, dermal contact) to reflect central tendency exposure (CTE) and reasonable maximum exposure (RME) conditions. Three reference dose (RfD) values were derived for SAN Trimer for short-term, subchronic, and chronic exposures, based upon its effects on the liver in exposed rats. Benchmark dose (BMD) methods were used to assess the relationship between exposure and response, and to characterize appropriate points of departure (POD) for each RfD. An uncertainty factor of 300 was applied to each POD to yield RfD values of 0.1, 0.04, and 0.03 mg/kg-d for short-term, subchronic, and chronic exposures, respectively. Because a chronic cancer bioassay for SAN Trimer in rats (NTP 2011a) does not provide evidence of carcinogenicity, a cancer risk assessment is not appropriate for this chemical. Potential health hazards to human health were assessed using a hazard index (HI) approach, which considers the ratio of exposure dose (i.e., average daily dose, mg/kg-d) to toxicity dose (RfD, mg/kg-d) for each scenario. All CTE and RME HI values are well below 1 (where the average daily dose is equivalent to the RfD), indicating that there is no concern for potential noncancer effects in exposed populations even under the conservative assumptions of this screening-level assessment. PMID:23030654

  5. Towards evidence-based computational statistics: lessons from clinical research on the role and design of real-data benchmark studies.

    PubMed

    Boulesteix, Anne-Laure; Wilson, Rory; Hapfelmeier, Alexander

    2017-09-09

    The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly "evidence-based". Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of "evidence-based" statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. We suggest that benchmark studies, in which statistical methods are assessed on real-world datasets, might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.

  6. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Sheng, K.; Yu, V.

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters, making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma (γ) analysis method was used to compare the dose distributions, with 2% dose-difference and 2 mm distance-to-agreement criteria. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy, with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
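
    The gamma test referenced above combines a dose-difference and a distance-to-agreement penalty; a point passes when the combined value is at most 1. A one-dimensional sketch with hypothetical profiles follows (the paper applies the 3D analogue with 2%/2 mm criteria).

    ```python
    import numpy as np

    # 1D global gamma analysis: for each reference point, search the evaluated
    # profile for the minimum combined dose-difference / distance penalty.
    def gamma_index(x, d_ref, d_eval, dose_crit=0.02, dist_crit=2.0):
        """dose_crit: fraction of global max dose (2%); dist_crit: mm (2 mm)."""
        norm = dose_crit * d_ref.max()
        gammas = np.empty_like(d_ref)
        for i, (xi, di) in enumerate(zip(x, d_ref)):
            dd = (d_eval - di) / norm           # dose-difference term
            dx = (x - xi) / dist_crit           # distance-to-agreement term
            gammas[i] = np.sqrt(dd**2 + dx**2).min()
        return gammas

    # Hypothetical profiles on a 1 mm grid with a small dose perturbation.
    x = np.arange(0.0, 100.0, 1.0)                     # mm
    d_ref = np.exp(-((x - 50.0) / 20.0) ** 2)          # reference dose
    d_eval = 1.01 * np.exp(-((x - 50.5) / 20.0) ** 2)  # shifted, scaled evaluation

    g = gamma_index(x, d_ref, d_eval)
    print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
    ```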

  7. Comparison of Monte Carlo and analytical dose computations for intensity modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Yepes, Pablo; Adair, Antony; Grosshans, David; Mirkovic, Dragan; Poenisch, Falk; Titt, Uwe; Wang, Qianxia; Mohan, Radhe

    2018-02-01

    To evaluate the effect of approximations in clinical analytical calculations performed by a treatment planning system (TPS) on dosimetric indices in intensity modulated proton therapy. TPS-calculated dose distributions were compared with dose distributions estimated by Monte Carlo (MC) simulations, calculated with the fast dose calculator (FDC), a system previously benchmarked against full MC. This study analyzed a total of 525 patients across four treatment sites (brain, head-and-neck, thorax, and prostate). Dosimetric indices (D02, D05, D20, D50, D95, D98, EUD, and mean dose) and a gamma-index analysis were used to evaluate the differences. The gamma-index passing rates for a 3%/3 mm criterion, for voxels with a dose larger than 10% of the maximum dose, had a median larger than 98% for all sites. The median difference in all dosimetric indices for target volumes was less than 2% for all cases. However, differences as large as 10% were found in target volumes for 2% of the thoracic patients. For organs at risk (OARs), the median absolute dose difference was smaller than 2 Gy for all indices and cohorts. However, absolute dose differences as large as 10 Gy were found for some small-volume organs in brain and head-and-neck patients. This analysis concludes that, for a fraction of the patients studied, the TPS may overestimate the dose in the target by as much as 10%, while for some OARs the dose could be underestimated by as much as 10 Gy. Monte Carlo dose calculations may be needed to ensure more accurate dose computations to improve target coverage and sparing of OARs in proton therapy.
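
    For readers unfamiliar with the D-index notation, each index is a simple function of the dose values inside a structure. The sketch below computes Dxx, mean dose, and a generalized EUD from a synthetic dose grid; the arrays and the volume-effect parameter are illustrative assumptions, not data from the study.

      import numpy as np

      def dose_index(dose, mask, pct):
          """D_pct: minimum dose received by the hottest pct% of the structure
          (e.g. D95 is the dose covering 95% of the masked volume)."""
          d = np.sort(dose[mask])[::-1]                 # descending doses
          k = max(int(np.ceil(pct / 100.0 * d.size)) - 1, 0)
          return d[k]

      def geud(dose, mask, a):
          """Generalized EUD (Niemierko): (mean of d^a)^(1/a); 'a' is an
          organ-specific volume-effect parameter (an assumption here)."""
          return np.mean(dose[mask].astype(float) ** a) ** (1.0 / a)

      # Hypothetical dose grid and target mask, for illustration only.
      rng = np.random.default_rng(0)
      dose = rng.normal(60.0, 1.5, size=(40, 40, 40))   # Gy
      ptv = np.zeros(dose.shape, dtype=bool)
      ptv[10:30, 10:30, 10:30] = True
      print(dose_index(dose, ptv, 95), dose[ptv].mean(), geud(dose, ptv, a=-10))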

  8. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGES

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  9. Human Health Benchmarks for Pesticides

    EPA Pesticide Factsheets

    Advanced testing methods now allow pesticides to be detected in water at very low levels. These small amounts of pesticides detected in drinking water or source water for drinking water do not necessarily indicate a health risk. The EPA has developed human health benchmarks for 363 pesticides to enable our partners to better determine whether the detection of a pesticide in drinking water or source waters for drinking water may indicate a potential health risk and to help them prioritize monitoring efforts. The table below includes benchmarks for acute (one-day) and chronic (lifetime) exposures for the most sensitive populations from exposure to pesticides that may be found in surface or ground water sources of drinking water. The table also includes benchmarks for 40 pesticides in drinking water that have the potential for cancer risk. The HHBP table includes pesticide active ingredients for which Health Advisories or enforceable National Primary Drinking Water Regulations (e.g., maximum contaminant levels) have not been developed.

  10. Principal Angle Enrichment Analysis (PAEA): Dimensionally Reduced Multivariate Gene Set Enrichment Analysis Tool

    PubMed Central

    Clark, Neil R.; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D.; Jones, Matthew R.; Ma’ayan, Avi

    2016-01-01

    Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method had not yet been assessed, nor had it been implemented as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community. PMID:26848405
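
    The principal angles underlying PAEA are a standard linear-algebra construct: orthonormalize bases of two subspaces and take arccosines of the singular values of the product of the resulting matrices. A minimal sketch with synthetic matrices (this shows the primitive only, not the PAEA tool or its gene-set libraries):

      import numpy as np
      from scipy.linalg import subspace_angles

      rng = np.random.default_rng(1)
      # Synthetic stand-ins: columns span a "differential expression" subspace
      # and a "gene set" subspace of a 100-dimensional expression space.
      A = rng.normal(size=(100, 5))
      B = rng.normal(size=(100, 8))

      theta = subspace_angles(A, B)    # principal angles (radians, descending)
      # Equivalently: QR-orthonormalize A and B; the singular values of
      # Qa.T @ Qb are the cosines of the principal angles.
      print(np.degrees(theta))
      # Small angles would indicate the gene set lies close to the
      # differential-expression directions, i.e. enrichment.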

  11. Principal Angle Enrichment Analysis (PAEA): Dimensionally Reduced Multivariate Gene Set Enrichment Analysis Tool.

    PubMed

    Clark, Neil R; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D; Jones, Matthew R; Ma'ayan, Avi

    2015-11-01

    Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method had not yet been assessed, nor had it been implemented as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community.

  12. ForceGen 3D structure and conformer generation: from small lead-like molecules to macrocyclic drugs

    NASA Astrophysics Data System (ADS)

    Cleves, Ann E.; Jain, Ajay N.

    2017-05-01

    We introduce the ForceGen method for 3D structure generation and conformer elaboration of drug-like small molecules. ForceGen is novel, avoiding use of distance geometry, molecular templates, or simulation-oriented stochastic sampling. The method is primarily driven by the molecular force field, implemented using an extension of MMFF94s and a partial charge estimator based on electronegativity-equalization. The force field is coupled to algorithms for direct sampling of realistic physical movements made by small molecules. Results are presented on a standard benchmark from the Cambridge Crystallographic Database of 480 drug-like small molecules, including full structure generation from SMILES strings. Reproduction of protein-bound crystallographic ligand poses is demonstrated on four carefully curated data sets: the ConfGen Set (667 ligands), the PINC cross-docking benchmark (1062 ligands), a large set of macrocyclic ligands (182 total with typical ring sizes of 12-23 atoms), and a commonly used benchmark for evaluating macrocycle conformer generation (30 ligands total). Results compare favorably to alternative methods, and performance on macrocyclic compounds approaches that observed on non-macrocycles while yielding a roughly 100-fold speed improvement over alternative MD-based methods with comparable performance.
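
    ForceGen itself is part of a commercial package, but the overall workflow, embedding a 3D structure from a SMILES string and refining conformers with an MMFF94-family force field, can be sketched with the open-source RDKit. Note the contrast: RDKit's default embedding uses distance geometry (ETKDG), which ForceGen expressly avoids, so this is a generic stand-in rather than the authors' method.

      from rdkit import Chem
      from rdkit.Chem import AllChem

      mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
      # Embed conformers (RDKit uses distance geometry here, unlike ForceGen),
      # then minimize each with the MMFF94s force field.
      conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=20, randomSeed=42)
      results = AllChem.MMFFOptimizeMoleculeConfs(mol, mmffVariant="MMFF94s")
      energies = [energy for status, energy in results]
      print(f"{len(conf_ids)} conformers; lowest MMFF94s energy "
            f"{min(energies):.2f} kcal/mol")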

  13. Open-source platform to benchmark fingerprints for ligand-based virtual screening

    PubMed Central

    2013-01-01

    Similarity-search methods using molecular fingerprints are an important tool for ligand-based virtual screening. A huge variety of fingerprints exist and their performance, usually assessed in retrospective benchmarking studies using data sets with known actives and known or assumed inactives, depends largely on the validation data sets and the similarity measure used. Comparing new methods to existing ones in any systematic way is rather difficult due to the lack of standard data sets and evaluation procedures. Here, we present a standard platform for the benchmarking of 2D fingerprints. The open-source platform contains all source code, structural data for the actives and inactives used (drawn from three publicly available collections of data sets), and lists of randomly selected query molecules to be used for statistically valid comparisons of methods. This allows the exact reproduction and comparison of results for future studies. The results for 12 standard fingerprints together with two simple baseline fingerprints assessed by seven evaluation methods are shown together with the correlations between methods. High correlations were found between the 12 fingerprints, and a careful statistical analysis showed that only the two baseline fingerprints were different from the others in a statistically significant way. High correlations were also found between six of the seven evaluation methods, indicating that despite their seeming differences, many of these methods are similar to each other. PMID:23721588
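
    The core operation such a platform benchmarks, ranking a library against a query fingerprint by similarity, is compact to express. A sketch with RDKit Morgan fingerprints and Tanimoto similarity (the molecules and activity labels are toy stand-ins; the platform's data sets and seven evaluation methods are not reproduced here):

      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem

      def fp(smiles):
          mol = Chem.MolFromSmiles(smiles)
          return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

      query = fp("CCO")                                         # toy query
      library = {"CCO": 1, "CCN": 0, "c1ccccc1": 0, "CCCO": 1}  # 1 = active
      ranked = sorted(((DataStructs.TanimotoSimilarity(query, fp(s)), act, s)
                       for s, act in library.items()), reverse=True)
      for sim, act, smi in ranked:
          print(f"{smi:10s} sim={sim:.2f} active={act}")
      # A benchmarking platform repeats this for many queries and fingerprints
      # and scores how strongly actives concentrate at the top of the ranking.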

  14. Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Suhwan; Kim, Min-Cheol; Sim, Eunji

    2017-05-01

    All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and reliable, and yields a consistent prediction for the Fe-Porphyrin complex.

  15. Benchmarking the Performance of Employment and Training Programs: A Pilot Effort of the Annie E. Casey Foundation's Jobs Initiative.

    ERIC Educational Resources Information Center

    Welch, Doug

    As part of its Jobs Initiative (JI) program in six metropolitan areas (Denver, Milwaukee, New Orleans, Philadelphia, St. Louis, and Seattle), the Annie E. Casey Foundation sought to develop and test a method for establishing benchmarks for workforce development agencies. Data collected from 10 projects in the JI from April through March, 2000,…

  16. Restaurant Energy Use Benchmarking Guideline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
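
    One minimal form of such a self-benchmarking metric is an energy use intensity normalized by a business driver, computed from each store's own utility data. The sketch below is illustrative only; the store records, column choices, and normalizations are invented rather than taken from the report.

      # Annual energy use intensity per store, normalized by floor area and by
      # operating hours -- two plausible metrics, not the report's prescription.
      stores = [
          {"name": "A", "kwh": 410_000, "sqft": 2_800, "hours": 6_200},
          {"name": "B", "kwh": 520_000, "sqft": 3_100, "hours": 8_760},
      ]
      for s in stores:
          eui = s["kwh"] / s["sqft"]               # kWh per sq ft per year
          per_hour = s["kwh"] / s["hours"]         # kWh per operating hour
          print(f'{s["name"]}: EUI={eui:.0f} kWh/sqft-yr, {per_hour:.1f} kWh/op-hr')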

  17. Two-dimensional free-surface flow under gravity: A new benchmark case for SPH method

    NASA Astrophysics Data System (ADS)

    Wu, J. Z.; Fang, L.

    2018-02-01

    Currently there are few free-surface benchmark cases with analytical results for Smoothed Particle Hydrodynamics (SPH) simulations. In the present contribution we introduce a two-dimensional free-surface flow under gravity, and obtain an analytical expression for the surface height difference and a theoretical estimate of the surface fractal dimension. These results are preliminarily validated and supported by SPH calculations.

  18. Extension of PENELOPE to protons: simulation of nuclear reactions and benchmark with Geant4.

    PubMed

    Sterpin, E; Sorriaux, J; Vynckier, S

    2013-11-01

    This work describes the implementation of nuclear reactions in the extension of the Monte Carlo (MC) code PENELOPE to protons (PENH) and benchmarks it against Geant4. PENH is based on mixed-simulation mechanics for both elastic and inelastic electromagnetic (EM) collisions. The adopted differential cross sections for EM elastic collisions are calculated using the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. Cross sections for EM inelastic collisions are computed within the relativistic Born approximation, using the Sternheimer-Liljequist model of the generalized oscillator strength. Nuclear elastic and inelastic collisions were simulated explicitly using the Scattering Analysis Interactive Dial-In (SAID) database for ¹H and ICRU 63 data for ¹²C, ¹⁴N, ¹⁶O, ³¹P, and ⁴⁰Ca. Secondary protons, alphas, and deuterons were all simulated as protons, with the energy adapted to ensure consistent range. Prompt gamma emission can also be simulated upon user request. Simulations were performed in a water phantom with nuclear interactions switched off or on, and integral depth-dose distributions were compared. Binary-cascade and precompound models were used for Geant4. Initial energies of 100 and 250 MeV were considered. For cases with no nuclear interactions simulated, additional simulations in a water phantom with tight resolution (1 mm in all directions) were performed with FLUKA. Finally, integral depth-dose distributions for a 250 MeV beam were computed with Geant4 and PENH in a homogeneous phantom with, first, ICRU striated muscle and, second, ICRU compact bone. For simulations with EM collisions only, integral depth-dose distributions were within 1%/1 mm for doses higher than 10% of the Bragg-peak dose. For central-axis depth-dose and lateral profiles in a phantom with tight resolution, there are significant deviations between Geant4 and PENH (up to 60%/1 cm for depth-dose distributions). The agreement is much better with FLUKA, with deviations within 3%/3 mm. When nuclear interactions were turned on, agreement (within 6% before the Bragg peak) between PENH and Geant4 was consistent with uncertainties on nuclear models and cross sections, regardless of the material simulated (water, muscle, or bone). A detailed and flexible description of nuclear reactions has been implemented in the PENH extension of PENELOPE to protons, which utilizes a mixed-simulation scheme for both elastic and inelastic EM collisions, analogous to the well-established algorithm for electrons/positrons. PENH is compatible with all current main programs that use PENELOPE as the MC engine. The nuclear model of PENH is realistic enough to give dose distributions in fair agreement with those computed by Geant4.

  19. Performance of Landslide-HySEA tsunami model for NTHMP benchmarking validation process

    NASA Astrophysics Data System (ADS)

    Macias, Jorge

    2017-04-01

    In its FY2009 Strategic Plan, the NTHMP required that all numerical tsunami inundation models be verified as accurate and consistent through a model benchmarking process. This was completed in 2011, but only for seismic tsunami sources and in a limited manner for idealized solid underwater landslides. Recent work by various NTHMP states, however, has shown that landslide tsunami hazard may be dominant along significant parts of the US coastline, as compared to hazards from other tsunamigenic sources. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Landslide-HySEA model participated in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017. The aim of this presentation is to show some of the numerical results obtained with Landslide-HySEA in the framework of this benchmarking validation/verification effort. Acknowledgements. This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government research project SIMURISK (MTM2015-70490-C02-01-R) and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  20. The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures

    PubMed Central

    Eltzner, Benjamin; Wollnik, Carina; Gottschlich, Carsten; Huckemann, Stephan; Rehfeldt, Florian

    2015-01-01

    A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. PMID:25996921

  1. Molecular diffusion of stable water isotopes in polar firn as a proxy for past temperatures

    NASA Astrophysics Data System (ADS)

    Holme, Christian; Gkinis, Vasileios; Vinther, Bo M.

    2018-03-01

    Polar precipitation archived in ice caps contains information on past temperature conditions. Such information can be retrieved by measuring the water isotopic signals of δ18O and δD in ice cores. These signals have been attenuated during densification due to molecular diffusion in the firn column, where the magnitude of the diffusion is isotopologue specific and temperature dependent. By utilizing the differential diffusion signal, dual isotope measurements of δ18O and δD enable multiple temperature reconstruction techniques. This study assesses how well six different methods can be used to reconstruct past surface temperatures from the diffusion-based temperature proxies. Two of the methods are based on the single diffusion lengths of δ18O and δD, three of the methods employ the differential diffusion signal, while the last uses the ratio between the single diffusion lengths. All techniques are tested on synthetic data in order to evaluate their accuracy and precision. We apply a benchmark test to thirteen high-resolution Holocene data sets from Greenland and Antarctica, which represent a broad range of mean annual surface temperatures and accumulation rates. Based on the benchmark test, we comment on the accuracy and precision of the methods. Both the benchmark test and the synthetic data test demonstrate that the most precise reconstructions are obtained when using the single isotope diffusion lengths, with precisions of approximately 1.0 °C. In the benchmark test, the single isotope diffusion lengths are also found to reconstruct consistent temperatures, with a root-mean-square deviation of 0.7 °C. The techniques employing the differential diffusion signals are more uncertain, where the most precise method has a precision of 1.9 °C. The diffusion length ratio method is the least precise, with a precision of 13.7 °C. The absolute temperature estimates from this method are also shown to be highly sensitive to the choice of fractionation factor parameterization.
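
    The attenuation these methods exploit can be modeled as convolution of the initial isotope signal with a Gaussian whose standard deviation is the diffusion length σ. A toy forward model (synthetic annual layering and invented diffusion lengths) shows why the two isotopologues carry a differential signal:

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      dz = 0.01                                  # m per sample
      z = np.arange(0.0, 20.0, dz)               # 20 m of synthetic firn core
      signal = np.sin(2 * np.pi * z / 0.25)      # annual layers, ~0.25 m thick

      sigma_18O, sigma_D = 0.08, 0.07            # m; invented diffusion lengths
      d18O = gaussian_filter1d(signal, sigma_18O / dz)
      dD = gaussian_filter1d(signal, sigma_D / dz)
      # The differential signal scales with sigma_18O**2 - sigma_D**2, which is
      # the quantity dual-isotope methods invert for past temperature.
      print(d18O.std(), dD.std())    # deltaD keeps more amplitude (smaller sigma)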

  2. Update on the Code Intercomparison and Benchmark for Muon Fluence and Absorbed Dose Induced by an 18 GeV Electron Beam After Massive Iron Shielding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasso, A.; Ferrari, A.; Ferrari, A.

    In 1974, Nelson, Kase and Svensson published an experimental investigation on muon shielding around SLAC high-energy electron accelerators [1]. They measured muon fluence and absorbed dose induced by 14 and 18 GeV electron beams hitting a copper/water beamdump and attenuated in a thick steel shielding. In their paper, they compared the results with the theoretical models available at that time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results are then compared between the codes, and with the SLAC data.

  3. Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes

    PubMed Central

    Hermans, Michel P.; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-01-01

    OBJECTIVE To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. RESULTS Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. After 12 months, the HbA1c target was achieved by 58.9% of patients in the benchmarking group vs. 62.1% in the control group (P = 0.398); 40.0 vs. 30.1% of patients met the SBP target (P < 0.001); and 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). The percentage of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). CONCLUSIONS In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile. PMID:23846810

  4. Purely in silico BCS classification: science based quality standards for the world's drugs.

    PubMed

    Dahan, Arik; Wolk, Omri; Kim, Young Hoon; Ramachandran, Chandrasekharan; Crippen, Gordon M; Takagi, Toshihide; Bermejo, Marival; Amidon, Gordon L

    2013-11-04

    BCS classification is a vital tool in the development of both generic and innovative drug products. The purpose of this work was to provisionally classify the world's top-selling oral drugs according to the BCS, using in silico methods. Three different in silico methods were examined: the well-established group contribution (CLogP) and atom contribution (ALogP) methods, and a new method based solely on the molecular formula and element contribution (KLogP). Metoprolol was used as the benchmark for the low/high permeability class boundary. Solubility was estimated in silico using a thermodynamic equation that relies on the partition coefficient and melting point. The validity of each method was affirmed by comparison to reference data and literature. We then used each method to provisionally classify the orally administered, immediate-release (IR) drug products found in the WHO Model List of Essential Medicines and the top-selling oral drug products in the United States (US), Great Britain (GB), Spain (ES), Israel (IL), Japan (JP), and South Korea (KR). A combined list of 363 drugs was compiled from the various lists, and 257 drugs were classified using the different in silico permeability methods and literature solubility data, as well as BDDCS classification. Lastly, we calculated solubility values for 185 drugs from the combined set using the in silico approach. Permeability classification with the different in silico methods was correct for 69-72.4% of the 29 reference drugs with known human jejunal permeability, and for 84.6-92.9% of the 14 FDA reference drugs in the set. The correlations (r²) between experimental log P values of 154 drugs and their CLogP, ALogP and KLogP values were 0.97, 0.82 and 0.71, respectively. The different in silico permeability methods produced comparable results: 30-34% of the US, GB, ES and IL top-selling drugs were class 1, 27-36.4% were class 2, 22-25.5% were class 3, and 5.46-14% were class 4 drugs, while ∼8% could not be classified. The WHO list included significantly fewer class 1 and more class 3 drugs in comparison to the countries' lists, probably due to differences in commonly used drugs in developing vs. industrial countries. BDDCS classified more drugs as class 1 compared to in silico BCS, likely due to the more lax benchmark for metabolism (70%), in comparison to the strict permeability benchmark (metoprolol). For 185 of the 363 drugs, in silico solubility values were calculated and successfully matched the literature solubility data. In conclusion, relatively simple in silico methods can be used to estimate both permeability and solubility. While CLogP produced the best correlation to experimental values, even KLogP, the most simplified in silico method, based on the molecular formula with no knowledge of molecular structure, produced BCS classifications comparable to the sophisticated methods. This KLogP, when combined with a mean melting point and estimated dose, can be used to provisionally classify potential drugs from just the molecular formula, even before synthesis. 49-59% of the world's top-selling drugs are highly soluble (class 1 and class 3), and are therefore candidates for waivers of in vivo bioequivalence studies. For these drugs, the replacement of expensive human studies with affordable in vitro dissolution tests would ensure their bioequivalence, and encourage the development and availability of generic drug products in both industrial and developing countries.
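
    The melting-point/partition-coefficient solubility estimate mentioned above is commonly written as Yalkowsky's general solubility equation, log S = 0.5 - 0.01(Tm - 25) - log P, with S in mol/L and Tm in °C. Assuming this form (the paper's exact equation is not quoted here), the arithmetic is trivial to sketch:

      def gse_log_solubility(logp, melting_point_c):
          """Yalkowsky's general solubility equation: log10 S (mol/L).
          Assumed form; illustrative only."""
          return 0.5 - 0.01 * (melting_point_c - 25.0) - logp

      # Hypothetical compound: logP = 2.2, melting point = 160 degrees C.
      log_s = gse_log_solubility(2.2, 160.0)
      print(f"log S = {log_s:.2f} -> S = {10 ** log_s:.2e} mol/L")
      # Combined with the highest dose strength and a permeability estimate
      # benchmarked against metoprolol, this supports a provisional BCS class.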

  5. Computation of Cosmic Ray Ionization and Dose at Mars: a Comparison of HZETRN and Planetocosmics for Proton and Alpha Particles

    NASA Technical Reports Server (NTRS)

    Gronoff, Guillaume; Norman, Ryan B.; Mertens, Christopher J.

    2014-01-01

    The ability to evaluate the cosmic ray environment at Mars is of interest for future manned exploration. To support exploration, tools must be developed to accurately assess the radiation environment in both free space and on planetary surfaces. The primary tool NASA uses to quantify radiation exposure behind shielding materials is the space radiation transport code HZETRN. In order to build confidence in HZETRN, code benchmarking against Monte Carlo radiation transport codes is often used. This work compares dose calculations at Mars by HZETRN and the Geant4 application Planetocosmics. The dose at ground level and the energy deposited in the atmosphere by galactic cosmic ray protons and alpha particles have been calculated for the Curiosity landing conditions. In addition, this work considered Solar Energetic Particle events, allowing for the comparison of varying input radiation environments. The results for protons and alpha particles show very good agreement between HZETRN and Planetocosmics.

  6. Heart rate changes during electroconvulsive therapy

    PubMed Central

    2013-01-01

    Background This observational study documented heart rate over the entire course of electrically induced seizures and aimed to evaluate the effects of stimulus electrode placement, patients' age, stimulus dose, and additional predictors. Method In 119 consecutive patients with 64 right unilateral (RUL) and 55 bifrontal (BF) electroconvulsive treatments, heart rate graphs based on beat-to-beat measurements were plotted up to durations of 130 s. Results In RUL stimulation, the initial drop in heart rate lasted for 12.5 ± 2.6 s (mean ± standard deviation). This depended on stimulus train duration, age, and baseline heart rate. In seizures induced with BF electrode placement, a sympathetic response was observed within the first few seconds of the stimulation phase (median 3.5 s). This was also the case with subconvulsive stimulations. The mean peak heart rate in all 119 treatments amounted to 135 ± 20 bpm and depended on baseline heart rate and seizure duration; electrode placement, charge dose, and age were insignificant in regression analysis. A marked decline in heart rate in connection with seizure cessation occurred in 71% of treatments. Conclusions A significant independent effect of stimulus electrode positioning on cardiac action was evident only in the initial phase of the seizures. Electrical stimulation rather than the seizure causes the initial heart rate increase in BF treatments. The data reveal no rationale for setting the stimulus doses as a function of intraictal peak heart rates (‘benchmark method’). The marked decline in heart rate at the end of most seizures is probably mediated by a baroreceptor reflex. PMID:23764036

  7. Patterns of care study of brachytherapy in New South Wales: cervical cancer treatment quality depends on caseload

    PubMed Central

    Delaney, Geoff P.; Gabriel, Gabriel S.; Barton, Michael B.

    2014-01-01

    Purpose We previously conducted modelling and a patterns of care study (POCS) that showed gynaecological brachytherapy (BT) was underutilized in New South Wales (NSW), the USA and Western Europe. The aim of the current study was to assess the quality of cervical BT in NSW, and to determine if caseload affects quality of treatment delivery. Material and methods All nine NSW radiation oncology departments that treated patients with cervical BT in 2003 were visited. Patient, tumour and treatment related data were collected. Quality of BT was assessed using published quality benchmarks. Higher and lower caseload departments were compared. Results The four higher cervical BT caseload departments treated 11-15 NSW residents in 2003, compared to 1-8 patients for the lower caseload departments. Cervix cancer patients treated at the higher caseload departments were more likely to be treated to a point A dose ≥ 80 Gy (58% vs. 14%, p = 0.001), and to have treatment completed within 8 weeks (66% vs. 35%, p = 0.02). Despite higher point A doses, there was no significant difference in proportions achieving lower than recommended rectal or bladder doses, implying better BT insertions in higher caseload departments. Conclusions Cervical BT in NSW was dispersed amongst a large number of departments and was frequently of sub-optimal quality. Higher quality BT was achieved in departments treating at least 10 patients per year. It is likely that improved outcomes will be achievable if at least 10 patients are treated per department per year. PMID:24790619

  8. Pathway Based Toxicology and Fit-for-Purpose Assays.

    PubMed

    Clewell, Rebecca A; McMullen, Patrick D; Adeleye, Yeyejide; Carmichael, Paul L; Andersen, Melvin E

    The field of toxicity testing for non-pharmaceutical chemicals is in flux with multiple initiatives in North America and the EU to move away from animal testing to mode-of-action based in vitro assays. In this arena, there are still obstacles to overcome, such as developing appropriate cellular assays, creating pathway-based dose-response models and refining in vitro-in vivo extrapolation (IVIVE) tools. Overall, it is necessary to provide assurances that these new approaches are adequately protective of human and ecological health. Another major challenge for individual scientists and regulatory agencies is developing a cultural willingness to shed old biases developed around animal tests and become more comfortable with mode-of-action based assays in human cells. At present, most initiatives focus on developing in vitro alternatives and assessing how well these alternative methods reproduce past results related to predicting organism level toxicity in intact animals. The path forward requires looking beyond benchmarking against high dose animal studies. We need to develop targeted cellular assays, new cell biology-based extrapolation models for assessing regions of safety for chemical exposures in human populations, and mode-of-action-based approaches which are constructed on an understanding of human biology. Furthermore, it is essential that assay developers have the flexibility to 'validate' against the most appropriate mode-of-action data rather than against apical endpoints in high dose animal studies. This chapter demonstrates the principles of fit-for-purpose assay development using pathway-targeted case studies. The projects include p53-mdm2-mediated DNA-repair, estrogen receptor-mediated cell proliferation and PPARα receptor-mediated liver responses.

  9. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    PubMed

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate the registration of reference barcode sequences to apply high-throughput DNA barcoding to genus or species level identification in biodiversity research.
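
    The 1-NN baseline evaluated here is simple to state: assign the query the taxon of its most similar reference sequence. A pure-Python sketch with a naive per-site identity score (real pipelines use BLAST or proper alignment; the sequences and taxa are invented):

      def identity(a, b):
          """Crude per-site identity over the shorter sequence (no alignment)."""
          n = min(len(a), len(b))
          return sum(x == y for x, y in zip(a[:n], b[:n])) / n

      reference = {   # sequence -> taxon; toy stand-ins for a barcode database
          "ACGTACGTACGT": "Genus_A species_1",
          "ACGTTCGTACGA": "Genus_A species_2",
          "TTGCATGCATGC": "Genus_B species_3",
      }

      def one_nn(query):
          return max(reference.items(), key=lambda kv: identity(query, kv[0]))[1]

      print(one_nn("ACGTACGAACGT"))   # -> Genus_A species_1
      # The paper's caution: if the true species is missing from the reference
      # database, 1-NN still returns its nearest neighbor, a misidentification.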

  10. Two New Computational Methods for Universal DNA Barcoding: A Benchmark Using Barcode Sequences of Bacteria, Archaea, Animals, Fungi, and Land Plants

    PubMed Central

    Tanabe, Akifumi S.; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used “1-nearest-neighbor” (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate the registration of reference barcode sequences to apply high-throughput DNA barcoding to genus or species level identification in biodiversity research. PMID:24204702

  11. Piping benchmark problems. Volume 1. Dynamic analysis uniform support motion response spectrum method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezler, P.; Hartzman, M.; Reich, M.

    1980-08-01

    A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components, and internal force and moment components. Solutions to associated anchor point motion static problems are not included.

  12. SU-F-BRF-09: A Non-Rigid Point Matching Method for Accurate Bladder Dose Summation in Cervical Cancer HDR Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H; Zhen, X; Zhou, L

    2014-06-15

    Purpose: To propose and validate a deformable point matching scheme for surface deformation to facilitate accurate bladder dose summation for fractionated HDR cervical cancer treatment. Method: A deformable point matching scheme based on the thin plate spline robust point matching (TPS-RPM) algorithm is proposed for bladder surface registration. The surface of bladders segmented from fractional CT images is extracted and discretized with a triangular surface mesh. Deformation between the two bladder surfaces is obtained by matching the two meshes' vertices via the TPS-RPM algorithm, and the deformation vector field (DVF) characteristic of this deformation is estimated by B-spline approximation. Numerically, the algorithm is quantitatively compared with the Demons algorithm using five clinical cervical cancer cases by several metrics: vertex-to-vertex distance (VVD), Hausdorff distance (HD), percent error (PE), and conformity index (CI). Experimentally, the algorithm is validated on a balloon phantom with 12 surface fiducial markers. The balloon is inflated with different amounts of water, and the displacement of the fiducial markers is benchmarked as ground truth to study the accuracy of the TPS-RPM-calculated DVFs. Results: In the numerical evaluation, the mean VVD is 3.7 (±2.0) mm after Demons, and 1.3 (±0.9) mm after TPS-RPM. The mean HD is 14.4 mm after Demons, and 5.3 mm after TPS-RPM. The mean PE is 101.7% after Demons and decreases to 18.7% after TPS-RPM. The mean CI is 0.63 after Demons, and increases to 0.90 after TPS-RPM. In the phantom study, the mean Euclidean distance of the fiducials is 7.4 ± 3.0 mm and 4.2 ± 1.8 mm after Demons and TPS-RPM, respectively. Conclusions: The bladder wall deformation is more accurate using the feature-based TPS-RPM algorithm than the intensity-based Demons algorithm, indicating that TPS-RPM has the potential for accurate bladder dose deformation and dose summation for multi-fractional cervical HDR brachytherapy. This work is supported in part by the National Natural Science Foundation of China (no. 30970866 and no. 81301940).
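
    The thin-plate-spline deformation model at the heart of TPS-RPM can be sketched with SciPy's RBF interpolator. This fits only the spline mapping between already-corresponded vertices; the robust point matching part, soft correspondence with deterministic annealing, is not shown, and all coordinates are synthetic:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(0)
      src = rng.uniform(-1.0, 1.0, size=(50, 3))    # vertices on surface 1
      dst = src + 0.1 * np.sin(3.0 * src)           # synthetic deformed vertices

      # Fit a 3D thin-plate-spline mapping src -> dst (all three output
      # coordinates fitted by one model).
      tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")
      probe = rng.uniform(-1.0, 1.0, size=(5, 3))
      dvf = tps(probe) - probe                      # displacement vectors
      print(dvf)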

  13. Helical Tomotherapy for Whole-Brain Irradiation With Integrated Boost to Multiple Brain Metastases: Evaluation of Dose Distribution Characteristics and Comparison With Alternative Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levegrün, Sabine, E-mail: sabine.levegruen@uni-due.de; Pöttgen, Christoph; Wittig, Andrea

    2013-07-15

    Purpose: To quantitatively evaluate dose distribution characteristics achieved with helical tomotherapy (HT) for whole-brain irradiation (WBRT) with integrated boost (IB) to multiple brain metastases in comparison with alternative techniques. Methods and Materials: Dose distributions for 23 patients with 81 metastases treated with WBRT (30 Gy/10 fractions) and IB (50 Gy) were analyzed. The median number of metastases per patient (N_mets) was 3 (range, 2-8). Mean values of the composite planning target volume of all metastases per patient (PTV_mets) and of the individual metastasis planning target volume (PTV_ind met) were 8.7 ± 8.9 cm³ (range, 1.3-35.5 cm³) and 2.5 ± 4.5 cm³ (range, 0.19-24.7 cm³), respectively. Dose distributions in PTV_mets and PTV_ind met were evaluated with respect to dose conformity (conformation number [CN], RTOG conformity index [PITV]), target coverage (TC), and homogeneity (homogeneity index [HI], ratio of maximum dose to prescription dose [MDPD]). The dependence of dose conformity on target size and N_mets was investigated. The dose distribution characteristics were benchmarked against alternative irradiation techniques identified in a systematic literature review. Results: Mean ± standard deviation of dose distribution characteristics derived for PTV_mets amounted to CN = 0.790 ± 0.101, PITV = 1.161 ± 0.154, TC = 0.95 ± 0.01, HI = 0.142 ± 0.022, and MDPD = 1.147 ± 0.029, respectively, demonstrating high dose conformity with acceptable homogeneity. Corresponding numbers for PTV_ind met were CN = 0.708 ± 0.128, PITV = 1.174 ± 0.237, TC = 0.90 ± 0.10, HI = 0.140 ± 0.027, and MDPD = 1.129 ± 0.030, respectively. The target size had a statistically significant influence on dose conformity to PTV_mets (CN = 0.737 for PTV_mets ≤ 4.32 cm³ vs CN = 0.848 for PTV_mets > 4.32 cm³, P=.006), in contrast to N_mets. The achieved dose conformity to PTV_mets, assessed by both CN and PITV, was in all investigated volume strata well within the best quartile of the values reported for alternative irradiation techniques. Conclusions: HT is a well-suited technique to deliver WBRT with IB to multiple brain metastases, yielding high-quality dose distributions. A multi-institutional prospective randomized phase 2 clinical trial to exploit efficacy and safety of the treatment concept is currently under way.
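
    The conformity statistics quoted here have standard definitions computable from binary masks: with TV the target volume, PIV the prescription isodose volume, and TV_PIV their overlap, PITV = PIV/TV (RTOG), TC = TV_PIV/TV, and CN = (TV_PIV/TV) * (TV_PIV/PIV) (van't Riet). A sketch on synthetic spherical masks:

      import numpy as np

      def conformity(target, dose, rx):
          tv_mask = target.astype(bool)
          piv_mask = dose >= rx                    # prescription isodose volume
          tv, piv = tv_mask.sum(), piv_mask.sum()
          tv_piv = (tv_mask & piv_mask).sum()      # covered target volume
          tc = tv_piv / tv                         # target coverage
          pitv = piv / tv                          # RTOG conformity index
          cn = tc * (tv_piv / piv)                 # van't Riet conformation number
          return tc, pitv, cn

      # Synthetic spherical target inside a slightly larger irradiated sphere.
      g = np.indices((60, 60, 60)) - 30
      r = np.sqrt((g ** 2).sum(axis=0))
      target = r < 10
      dose = np.where(r < 11, 50.0, 10.0)
      print(conformity(target, dose, rx=50.0))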

  14. Neutron Activation and Thermoluminescent Detector Responses to a Bare Pulse of the CEA Valduc SILENE Critical Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Celik, Cihangir; McMahan, Kimberly L.

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 11, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  15. Benchmarking Discount Rate in Natural Resource Damage Assessment with Risk Aversion.

    PubMed

    Wu, Desheng; Chen, Shuzhen

    2017-08-01

    Benchmarking a credible discount rate is of crucial importance in natural resource damage assessment (NRDA) and restoration evaluation. This article integrates a holistic framework of NRDA with prevailing low discount rate theory, and proposes a discount rate benchmarking decision support system based on service-specific risk aversion. The proposed approach has the flexibility of choosing appropriate discount rates for gauging long-term services, as opposed to decisions based simply on duration. It improves injury identification in NRDA since potential damages and side-effects to ecosystem services are revealed within the service-specific framework. A real embankment case study demonstrates valid implementation of the method. © 2017 Society for Risk Analysis.
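
    The stakes of the benchmarking choice are easy to see numerically: the present value of a long-lived ecosystem service stream is extremely sensitive to the discount rate. A toy calculation with an invented benefit stream:

      def present_value(annual_benefit, rate, years):
          """PV of a constant annual benefit at a constant discount rate."""
          return sum(annual_benefit / (1.0 + rate) ** t
                     for t in range(1, years + 1))

      benefit = 1_000_000      # hypothetical annual value of a restored service
      for rate in (0.07, 0.03, 0.01):
          pv = present_value(benefit, rate, 100)
          print(f"r = {rate:.0%}: PV over 100 yr = {pv:,.0f}")
      # A 7% rate all but erases benefits beyond ~30 years, which is why low,
      # service-specific rates matter for long-horizon restoration decisions.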

  16. Subgroup Benchmark Calculations for the Intra-Pellet Nonuniform Temperature Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Jung, Yeon Sang; Liu, Yuxuan

    A benchmark suite has been developed by Seoul National University (SNU) for intrapellet nonuniform temperature distribution cases based on the practical temperature profiles according to the thermal power levels. Though a new subgroup capability for nonuniform temperature distribution was implemented in MPACT, no validation calculation has been performed for the new capability. This study focuses on benchmarking the new capability through a code-to-code comparison. Two continuous-energy Monte Carlo codes, McCARD and CE-KENO, are engaged in obtaining reference solutions, and the MPACT results are compared to the SNU nTRACER using a similar cross section library and subgroup method to obtain self-shielded cross sections.

  17. Using relative survival measures for cross-sectional and longitudinal benchmarks of countries, states, and districts: the BenchRelSurv- and BenchRelSurvPlot-macros

    PubMed Central

    2013-01-01

    Background The objective of screening programs is to discover life-threatening diseases in as many patients as early as possible and to increase the chance of survival. To be able to compare aspects of health care quality, methods are needed for benchmarking that allow comparisons on various health care levels (regional, national, and international). Objectives Applications and extensions of algorithms can be used to link the information on disease phases with relative survival rates and to consolidate them in composite measures. The application of the developed SAS macros will give results for benchmarking of health care quality. Data examples for breast cancer care are given. Methods A reference scale (expected, E) must be defined at a time point at which all benchmark objects (observed, O) are measured. All indices are defined as O/E, whereby the extended standardized screening index (eSSI), the standardized case-mix index (SCI), the work-up index (SWI), and the treatment index (STI) address different health care aspects. The composite measures, called overall performance evaluation (OPE) and relative overall performance indices (ROPI), link the individual indices differently for cross-sectional or longitudinal analyses. Results The algorithms allow both time-point and time-interval comparisons of the benchmark objects via the indices eSSI, SCI, SWI, STI, OPE, and ROPI. Comparisons between countries, states, and districts are possible. As an example, comparisons between two countries are made. The success of early detection and screening programs as well as clinical health care quality for breast cancer can be demonstrated while taking the population's background mortality into account. Conclusions If external quality assurance programs and benchmark objects are based on population-based and corresponding demographic data, information on disease phase and relative survival rates can be combined into indices which offer approaches for comparative analyses between benchmark objects. Conclusions on screening programs and health care quality are possible. The macros can be transferred to other diseases if a disease-specific phase scale of prognostic value (e.g. stage) exists. PMID:23316692
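
    All of the indices named above share the same observed-over-expected construction, so the computational core is small. A sketch with invented counts (the macros' exact aggregation into OPE and ROPI is not reproduced):

      def oe_index(observed, expected):
          """Generic O/E benchmark index; 1.0 means on par with the reference."""
          return observed / expected

      # Hypothetical benchmark object measured against an expected reference:
      indices = {
          "eSSI": oe_index(312, 290.0),   # screening cases, observed vs expected
          "SWI": oe_index(143, 160.0),    # work-up events
          "STI": oe_index(201, 195.0),    # treatment events
      }
      for name, value in indices.items():
          print(f"{name} = {value:.2f}")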

  18. Benchmark Results of Active Tracer Particles in the Open Source Code ASPECT for Modelling Convection in the Earth's Mantle

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Kaloti, A. P.; Levinson, H. R.; Nguyen, N.; Puckett, E. G.; Lokavarapu, H. V.

    2016-12-01

    We present the results of three standard benchmarks for the new active tracer particle algorithm in ASPECT. The three benchmarks are SolKz, SolCx, and SolVI (also known as the 'inclusion benchmark'), first proposed by Duretz, May, Gerya, and Tackley (G Cubed, 2011) and in subsequent work by Thielmann, May, and Kaus (Pure and Applied Geophysics, 2014). Each of the three benchmarks compares the accuracy of the numerical solution to a steady (time-independent) solution of the incompressible Stokes equations with a known exact solution. These benchmarks are specifically designed to test the accuracy and effectiveness of the numerical method when the viscosity varies up to six orders of magnitude. ASPECT has been shown to converge to the exact solution of each of these benchmarks at the correct design rate when all of the flow variables, including the density and viscosity, are discretized on the underlying finite element grid (Kronbichler, Heister, and Bangerth, GJI, 2012). In our work we discretize the density and viscosity by initially placing the true values of the density and viscosity at the initial particle positions. At each time step, including the initialization step, the density and viscosity are interpolated from the particles onto the finite element grid. The resulting Stokes system is solved for the velocity and pressure, and the particle positions are advanced in time according to this new, numerical, velocity field. Note that this procedure effectively changes a steady solution of the Stokes equation (i.e., one that is independent of time) to a solution of the Stokes equations that is time dependent. Furthermore, the accuracy of the active tracer particle algorithm now also depends on the accuracy of the interpolation algorithm and of the numerical method one uses to advance the particle positions in time. Finally, we will present new interpolation algorithms designed to increase the overall accuracy of the active tracer algorithms in ASPECT and interpolation algorithms designed to conserve properties, such as mass density, that are being carried by the particles.
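
    The particle-grid cycle described here, interpolate particle-carried fields to the grid, solve for velocity, then advect the particles, can be sketched compactly. The snippet below substitutes a fixed analytic cellular flow for the Stokes solve and uses linear scattered-data interpolation with a midpoint (RK2) particle update; everything is illustrative:

      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(0)
      pts = rng.uniform(0.0, 1.0, size=(500, 2))    # tracer particle positions
      rho = np.where(pts[:, 0] < 0.5, 1.0, 2.0)     # density carried by particles

      # Step 1: interpolate particle-carried density onto a uniform grid
      # (a stand-in for the finite element nodes).
      gx, gy = np.meshgrid(np.linspace(0, 1, 33), np.linspace(0, 1, 33))
      rho_grid = griddata(pts, rho, (gx, gy), method="linear", fill_value=1.5)

      # Step 2: "solve" for velocity -- here a fixed, divergence-free cellular
      # flow replaces the Stokes solve.
      def velocity(p):
          x, y = p[:, 0], p[:, 1]
          return np.column_stack([-np.sin(np.pi * x) * np.cos(np.pi * y),
                                  np.cos(np.pi * x) * np.sin(np.pi * y)])

      # Step 3: advance particles with a midpoint (RK2) step.
      dt = 0.01
      mid = pts + 0.5 * dt * velocity(pts)
      pts = pts + dt * velocity(mid)
      print(rho_grid.shape, float(pts.min()), float(pts.max()))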

  19. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  20. Benchmark datasets for phylogenomic pipeline validation, applications for foodborne pathogen surveillance

    PubMed Central

    Rand, Hugh; Shumway, Martin; Trees, Eija K.; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E.; Defibaugh-Chavez, Stephanie; Carleton, Heather A.; Klimke, William A.; Katz, Lee S.

    2017-01-01

    Background As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. Methods We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and “known” phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Results Our “outbreak” benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the “known tree” can be accurately called the “true tree”. The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. Discussion These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools—we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these on our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines. PMID:29372115

  1. Radiological assessment for bauxite mining and alumina refining.

    PubMed

    O'Connor, Brian H; Donoghue, A Michael; Manning, Timothy J H; Chesson, Barry J

    2013-01-01

    Two international benchmarks assess whether the mining and processing of ores containing Naturally Occurring Radioactive Material (NORM) require management under radiological regulations set by local jurisdictions. First, the 1 Bq/g benchmark for radionuclide head of chain activity concentration determines whether materials may be excluded from radiological regulation. Second, processes may be exempted from radiological regulation where occupational above-background exposures for members of the workforce do not exceed 1 mSv/year. This is also the upper limit of exposure prescribed for members of the public. Alcoa of Australia Limited (Alcoa) has undertaken radiological evaluations of the mining and processing of bauxite from the Darling Range of Western Australia since the 1980s. Short-term monitoring projects have demonstrated that above-background exposures for workers do not exceed 1 mSv/year. A whole-of-year evaluation of above-background, occupational radiological doses for bauxite mining, alumina refining and residue operations was conducted during 2008/2009 as part of the Alcoa NORM Quality Assurance System (NQAS). The NQAS has been guided by publications from the International Commission on Radiological Protection (ICRP), the International Atomic Energy Agency (IAEA) and the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA). The NQAS has been developed specifically in response to implementation of the Australian National Directory on Radiation Protection (NDRP). Positional monitoring was undertaken to increase the accuracy of natural background levels required for correction of occupational exposures. This is important in view of the small increments in exposure that occur in bauxite mining, alumina refining and residue operations relative to natural background. Positional monitoring was also undertaken to assess the potential for exposure in operating locations. Personal monitoring was undertaken to characterise exposures in Similar Exposure Groups (SEGs). The monitoring was undertaken over 12 months, to provide annual average assessments of above-background doses, thereby reducing temporal variations, especially for radon exposures. The monitoring program concentrated on gamma and radon exposures, rather than gross alpha exposures, as past studies have shown that gross alpha exposures from inhalable dust for most of the workforce are small in comparison to combined gamma and radon exposures. The natural background determinations were consistent with data in the literature for localities near Alcoa's mining, refining and residue operations in Western Australia, and also with UNSCEAR global data. Within the mining operations, there was further consistency between the above-background dose estimates and the local geochemistry, with slight elevation of dose levels in mining pits. Conservative estimates of above-background levels for the workforce have been made using an assumption of 100% occupancy (1920 hours per year) for the SEGs considered. Total incremental composite doses for individuals were clearly less than 1.0 mSv/year when gamma, radon progeny and gross alpha exposures were considered. This is despite the activity concentration of some materials being slightly higher than the benchmark of 1 Bq/g. The results are consistent with previous monitoring and demonstrate compliance with the 1 mSv/year exemption level within mining, refining and residue operations. These results will be of value to bauxite mines and alumina refineries elsewhere in the world.
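
    The conservative composite-dose estimate described above reduces to simple occupancy arithmetic. A minimal sketch follows; the hourly dose-rate increments are illustrative placeholders, not Alcoa data.

        # Conservative annual above-background dose for one Similar Exposure Group,
        # assuming 100% occupancy (1920 h/year) as in the assessment described above.
        # The hourly increments below are illustrative placeholders, not Alcoa data.
        OCCUPANCY_H = 1920.0
        gamma_uSv_per_h = 0.05   # above-background gamma dose rate
        radon_uSv_per_h = 0.03   # radon progeny dose rate
        alpha_uSv_per_h = 0.01   # gross alpha (inhalable dust) dose rate

        total_mSv = (gamma_uSv_per_h + radon_uSv_per_h + alpha_uSv_per_h) * OCCUPANCY_H / 1000.0
        print(f"composite dose: {total_mSv:.2f} mSv/year (exemption level: 1 mSv/year)")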

  2. PPI4DOCK: large scale assessment of the use of homology models in free docking over more than 1000 realistic targets.

    PubMed

    Yu, Jinchao; Guerois, Raphaël

    2016-12-15

    Protein-protein docking methods are of great importance for understanding interactomes at the structural level. It has become increasingly appealing to use not only experimental structures but also homology models of unbound subunits as input for docking simulations. So far we have been missing a large-scale assessment of the success of rigid-body free docking methods on homology models. We explored how we could benefit from comparative modelling of unbound subunits to expand docking benchmark datasets. Starting from a collection of 3157 non-redundant, high X-ray resolution heterodimers, we developed the PPI4DOCK benchmark containing 1417 docking targets based on unbound homology models. Rigid-body docking by Zdock showed that for 1208 cases (85.2%), at least one correct decoy was generated, emphasizing the efficiency of rigid-body docking in generating correct assemblies. Overall, the PPI4DOCK benchmark contains a large set of realistic cases and provides new ground for assessing docking and scoring methodologies. Benchmark sets can be downloaded from http://biodev.cea.fr/interevol/ppi4dock/. Contact: guerois@cea.fr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Metric Evaluation Pipeline for 3D Modeling of Urban Scenes

    NASA Astrophysics Data System (ADS)

    Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.

    2017-05-01

    Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
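
    Two of the pipeline's metric categories, volumetric completeness and correctness, have a simple generic form for boolean voxel grids. The sketch below uses the textbook definitions; threshold conventions and registration steps in the released software may differ.

        import numpy as np

        def completeness_correctness(model_vox, truth_vox):
            """Volumetric completeness and correctness for boolean voxel grids,
            in their generic textbook form (the released pipeline's exact
            conventions may differ).

            completeness: fraction of ground-truth voxels recovered by the model
            correctness:  fraction of model voxels confirmed by ground truth
            """
            model_vox = model_vox.astype(bool)
            truth_vox = truth_vox.astype(bool)
            overlap = (model_vox & truth_vox).sum()
            return overlap / truth_vox.sum(), overlap / model_vox.sum()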

  4. Quality assurance of the SCOPE 1 trial in oesophageal radiotherapy.

    PubMed

    Wills, Lucy; Maggs, Rhydian; Lewis, Geraint; Jones, Gareth; Nixon, Lisette; Staffurth, John; Crosby, Tom

    2017-11-15

    SCOPE 1 was the first UK-based multi-centre trial involving radiotherapy of the oesophagus. A comprehensive radiotherapy trials quality assurance programme was launched with two main aims: 1. To assist centres, where needed, to adapt their radiotherapy techniques in order to achieve protocol compliance and thereby enable their participation in the trial. 2. To support the trial's clinical outcomes by ensuring the consistent planning and delivery of radiotherapy across all participating centres. A detailed information package was provided and centres were required to complete a benchmark case, in which the delineated target volumes and organs at risk, dose distribution and completion of a plan assessment form were assessed, prior to recruiting patients into the trial. Upon recruiting, the quality assurance (QA) programme continued to monitor the outlining and planning of radiotherapy treatments. Completion of a questionnaire was requested in order to gather information about each centre's equipment and techniques relating to their trial participation and to assess the impact of the trial nationally on standard practice for radiotherapy of the oesophagus. During the trial, advice was available for individual planning issues and was circulated amongst the SCOPE 1 community via bulletins in response to common areas of concern. 36 centres were supported through QA processes to enable their participation in SCOPE 1. We discuss the issues which arose throughout this process and present details of the benchmark case solutions, centre questionnaires and on-trial protocol compliance. The range of submitted benchmark case GTV volumes was 29.8–67.8 cm³, and PTV volumes 221.9–513.3 cm³. For the dose distributions associated with these volumes, the percentage volume of the lungs receiving 20 Gy (V20Gy) ranged from 20.4 to 33.5%. Similarly, heart V40Gy ranged from 16.1 to 33.0%. The incidence of incorrect outlining of OAR volumes increased from 50% of centres at the benchmark case to 64% on trial. Sixty-five percent of centres who returned the trial questionnaire stated that their standard practice had changed as a result of their participation in the SCOPE 1 trial. The SCOPE 1 QA programme outcomes lend support to the trial's clinical conclusions. The range of planning outcomes for the benchmark case indicated, at the outset of the trial, the significant degree of variation present in UK oesophageal radiotherapy planning, despite the presence of a protocol. This supports the case for increasingly detailed definition of practice by means of consensus protocols, training and peer review. The incidence of minor inconsistencies of technique highlights the potential for improved QA systems and the need for sufficient resource for this to be addressed within future trials. As indicated in questionnaire responses, the QA exercise as a whole has contributed to greater consistency of oesophageal radiotherapy in the UK via the adoption into standard practice of elements of the protocol. The SCOPE 1 trial is an International Standard Randomized Controlled Trial, ISRCTN47718479.

  5. SU-F-T-28: Evaluation of BEBIG HDR Co-60 After-Loading System for Skin Cancer Treatment Using Conical Surface Applicator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safigholi, H; Soliman, A; Song, W Y

    Purpose: To evaluate the possibility of utilizing the BEBIG HDR 60Co remote after-loading system for malignant skin surface treatment using the Monte Carlo (MC) simulation technique. Methods: First, the TG-43 parameters of the BEBIG Co-60 and Nucletron Ir-192 mHDR-V2 brachytherapy sources were simulated using the MCNP6 code to benchmark the sources against the literature. Second, a conical tungsten-alloy applicator with a 3-cm-diameter Planning Target Volume (PTV) at the surface, for use with a single stepping HDR source, was designed. The HDR source is modeled parallel to the treatment plane at the center of the conical applicator, with a source surface distance (SSD) of 1.5 cm and a removable plastic end-cap with a 1-mm thickness. Third, MC-calculated dose distributions from the HDR Co-60 source in the conical surface applicator were compared with simulated data for the HDR Ir-192 source. The initial calculations were made with the same conical surface applicator (standard applicator) dimensions as the ones used with the Ir-192 system. Fourth, the applicator wall thickness for the Co-60 system was increased (doubled) to diminish leakage dose to the levels received when using the Ir-192 system. With this geometry, percentage depth dose (PDD) and relative 2D dose profiles in the transverse/coronal planes, normalized at the 3-mm prescription depth, were evaluated along the central axis. Results: PDDs for Ir-192 and Co-60 were similar with both the standard and the thick-walled applicator. The 2D relative dose distribution of Co-60 inside the standard conical applicator generated a larger penumbra (7.6%). With the thick-walled applicator, it produced a smaller penumbra (<4%) compared to the Ir-192 source in the standard conical applicator. Dose leakage outside the thick-walled applicator with the Co-60 source was approximately equal (≤3%) to that of the standard applicator with the Ir-192 source. Conclusion: Skin cancer treatment of equal quality can be performed with a Co-60 source and thick-walled conical applicators instead of Ir-192 with standard applicators. These conical surface applicators must be used with a protective plastic end-cap to eliminate electron contamination and over-dosage of the skin.

  6. SU-G-BRC-10: Feasibility of a Web-Based Monte Carlo Simulation Tool for Dynamic Electron Arc Radiotherapy (DEAR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigues, A; Wu, Q; Sawkey, D

    Purpose: DEAR is a radiation therapy technique utilizing synchronized motion of gantry and couch during delivery to optimize dose distribution homogeneity and penumbra for treatment of superficial disease. Dose calculation for DEAR is not yet supported by commercial TPSs. The purpose of this study is to demonstrate the feasibility of using a web-based Monte Carlo (MC) simulation tool (VirtuaLinac) to calculate dose distributions for a DEAR delivery. Methods: MC simulations were run through VirtuaLinac, which is based on the GEANT4 platform. VirtuaLinac utilizes detailed linac head geometry and material models, validated phase space files, and a voxelized phantom. The input was expanded to include an XML file for simulation of varying mechanical axes as a function of MU. A DEAR XML plan was generated, used in the MC simulation, and delivered on a TrueBeam in Developer Mode. Radiographic film wrapped on a cylindrical phantom (12.5 cm radius) measured dose at a depth of 1.5 cm for comparison to the simulation results. Results: A DEAR plan was simulated using an energy of 6 MeV and a 3×10 cm² cut-out in a 15×15 cm² applicator for a delivery of a 90° arc. The resulting data provide qualitative and quantitative evidence that the simulation platform could be used as the basis for DEAR dose calculations. The resulting unwrapped 2D dose distributions agreed well in the cross-plane direction along the arc, with field sizes of 18.4 and 18.2 cm and penumbrae of 1.9 and 2.0 cm for measurements and simulations, respectively. Conclusion: Preliminary feasibility of a DEAR delivery using a web-based MC simulation platform has been demonstrated. This tool will benefit treatment planning for DEAR as a benchmark for developing other model-based algorithms, allowing efficient optimization of trajectories and quality assurance of plans without the need for extensive measurements.

  7. SU-E-J-85: Leave-One-Out Perturbation (LOOP) Fitting Algorithm for Absolute Dose Film Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, A; Ahmad, M; Chen, Z

    2014-06-01

    Purpose: To introduce an outlier-recognizing fitting routine for film dosimetry. It is not only flexible enough to work with any linear or non-linear regression, but can also provide information on the minimal number of sampling points, critical sampling distributions, and suitable analytical functions for absolute film-dose calibration. Methods: The technique of leave-one-out (LOO) cross validation is often used for statistical analyses of model performance. We used LOO analyses with perturbed bootstrap fitting, called leave-one-out perturbation (LOOP), for film-dose calibration. Given a threshold, the LOO process detects unfit points ("outliers") compared to the other cohorts, and a bootstrap fitting process follows to seek any possibilities of using perturbations for further improvement. After outliers were reconfirmed by traditional t-test statistics and eliminated, another LOOP pass produced the final fit. An over-sampled film-dose calibration dataset was collected as a reference (dose range: 0–800 cGy), and various simulated conditions for outliers and sampling distributions were derived from the reference. Comparisons over the various conditions were made, and the performance of polynomial and rational fitting functions was evaluated. Results: (1) LOOP demonstrates sensitive outlier recognition through the statistical correlation of a markedly better goodness-of-fit when outliers are left out. (2) With sufficient statistical information, LOOP can correct outliers under some low-sampling conditions that other "robust fits", e.g., Least Absolute Residuals, cannot. (3) Complete cross-validated analyses of LOOP indicate that the rational function demonstrates much superior performance compared to the polynomial. Even with 5 data points including one outlier, LOOP with a rational function can restore more than 95% of the reference values, while the polynomial fitting completely failed under the same conditions. Conclusion: LOOP can cooperate with any fitting routine, functioning as a "robust fit". In addition, it can be set as a benchmark for film-dose calibration fitting performance.
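
    The abstract does not give the algorithm's internals, so the sketch below only illustrates the leave-one-out screening step in Python: refit with each point omitted and flag points whose omission sharply improves the fit. The rational calibration form and the improvement threshold are assumptions, and the bootstrap perturbation stage is omitted.

        import numpy as np
        from scipy.optimize import curve_fit

        def rational(od, a, b, c):
            # A rational calibration form, dose = (a + b*OD) / (1 + c*OD); the
            # abstract does not state its exact function, so this common choice
            # for radiochromic film stands in.
            return (a + b * od) / (1.0 + c * od)

        def loo_outliers(od, dose, ratio=2.0):
            """Flag points whose omission sharply improves the fit (LOO screening)."""
            od, dose = np.asarray(od, float), np.asarray(dose, float)
            p_all, _ = curve_fit(rational, od, dose, p0=(0.0, 100.0, 0.1), maxfev=10000)
            rms_all = np.sqrt(np.mean((dose - rational(od, *p_all)) ** 2))
            flagged = []
            for i in range(len(od)):
                keep = np.arange(len(od)) != i
                p, _ = curve_fit(rational, od[keep], dose[keep], p0=tuple(p_all), maxfev=10000)
                rms = np.sqrt(np.mean((dose[keep] - rational(od[keep], *p)) ** 2))
                if rms_all > ratio * rms:   # fit improves sharply without point i
                    flagged.append(i)
            return flagged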

  8. Deformable image registration with a featurelet algorithm: implementation as a 3D-slicer extension and validation

    NASA Astrophysics Data System (ADS)

    Renner, A.; Furtado, H.; Seppenwoolde, Y.; Birkfellner, W.; Georg, D.

    2016-03-01

    A radiotherapy (RT) treatment can last for several weeks. In that time, organ motion and shape changes introduce uncertainty in dose application. Monitoring and quantifying the change can yield a more precise irradiation margin definition and thereby reduce dose delivery to healthy tissue and improve tumor targeting. Deformable image registration (DIR) has the potential to fulfill this task by calculating a deformation field (DF) between a planning CT and a repeated CT of the altered anatomy. Application of the DF to the original contours yields new contours that can be used for an adapted treatment plan. DIR is a challenging method and therefore needs careful user interaction. Without a proper graphical user interface (GUI), a misregistration cannot be easily detected by visual inspection and the results cannot be fine-tuned by changing registration parameters. To provide a DIR algorithm with such a GUI, available to everyone, we created the extension Featurelet-Registration for the open source software platform 3D Slicer. The registration logic is an upgrade of an in-house-developed DIR method, which is a featurelet-based piecewise rigid registration. The so-called "featurelets" are equally sized rectangular subvolumes of the moving image which are rigidly registered to rectangular search regions in the fixed image. The output is a deformed image and a deformation field. Both can be visualized directly in 3D Slicer, facilitating the interpretation and quantification of the results. For validation of the registration accuracy, two deformable phantoms were used. The performance was benchmarked against a demons algorithm with comparable results.
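
    The featurelet idea can be sketched compactly. The Python below is a simplified, translation-only block-matching stand-in on a 2D slice; the actual method registers 3D subvolumes rigidly (rotation plus translation) and interpolates a dense deformation field from the per-featurelet results.

        import numpy as np

        def featurelet_shifts(moving, fixed, block=16, search=4):
            """Per-block translation field: a simplified, translation-only
            stand-in for featurelet piecewise rigid registration (the real
            method is 3D and rigid; 2D pure shifts keep the sketch short)."""
            rows, cols = moving.shape
            field = {}
            for r0 in range(0, rows - block + 1, block):
                for c0 in range(0, cols - block + 1, block):
                    feat = moving[r0:r0 + block, c0:c0 + block]
                    best, best_ssd = (0, 0), np.inf
                    for dr in range(-search, search + 1):
                        for dc in range(-search, search + 1):
                            rs, cs = r0 + dr, c0 + dc
                            if rs < 0 or cs < 0 or rs + block > rows or cs + block > cols:
                                continue
                            ref = fixed[rs:rs + block, cs:cs + block]
                            ssd = float(np.sum((feat - ref) ** 2))  # similarity metric
                            if ssd < best_ssd:
                                best, best_ssd = (dr, dc), ssd
                    field[(r0, c0)] = best
            return field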

  9. Data Mining in the U.S. National Toxicology Program (NTP) Database Reveals a Potential Bias Regarding Liver Tumors in Rodents Irrespective of the Test Agent

    PubMed Central

    Ring, Matthias; Eskofier, Bjoern M.

    2015-01-01

    Long-term studies in rodents are the benchmark method to assess carcinogenicity of single substances, mixtures, and multi-compounds. In such a study, mice and rats are exposed to a test agent at different dose levels for a period of two years and the incidence of neoplastic lesions is observed. However, this two-year study is also expensive, time-consuming, and burdensome to the experimental animals. Consequently, various alternatives have been proposed in the literature to assess carcinogenicity on the basis of short-term studies. In this paper, we investigated whether effects on the rodents’ liver weight in short-term studies can be exploited to predict the incidence of liver tumors in long-term studies. A set of 138 paired short- and long-term studies was compiled from the database of the U.S. National Toxicology Program (NTP), more precisely, from (long-term) two-year carcinogenicity studies and their preceding (short-term) dose finding studies. In this set, data mining methods revealed patterns that can predict the incidence of liver tumors with accuracies of over 80%. However, the results simultaneously indicated a potential bias regarding liver tumors in two-year NTP studies. The incidence of liver tumors does not depend only on the test agent but also on other confounding factors in the study design, e.g., species, sex, and type of substance. We recommend considering this bias if the hazard or risk of a test agent is assessed on the basis of an NTP carcinogenicity study. PMID:25658102
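
    The prediction task itself is a simple supervised-learning setup. The Python below is a toy stand-in on synthetic data (the paper's actual features, models, and the NTP data are not reproduced here): a single short-term feature, relative liver-weight change, used to classify the long-term tumor outcome.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Synthetic stand-in for the 138 paired studies: relative liver-weight
        # change (%) from the short-term study, and a binary tumor outcome that
        # depends on it plus noise. Purely illustrative, not NTP data.
        x = rng.normal(0.0, 8.0, size=(138, 1))
        y = (x[:, 0] + rng.normal(0.0, 6.0, 138) > 5.0).astype(int)

        model = LogisticRegression().fit(x, y)
        print(f"training accuracy: {model.score(x, y):.2f}")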

  10. ILSI Europe's Food Allergy Task Force: From Defining the Hazard to Assessing the Risk from Food Allergens.

    PubMed

    Crevel, René R W; Ronsmans, Stefan; Marsaux, Cyril F M; Bánáti, Diána

    2018-01-01

    The International Life Sciences Institute (ILSI) Europe Food Allergy Task Force was founded in response to early public concerns about the growing impact of food allergies, almost coincidentally with the publication of the 1995 Food and Agriculture Organization-World Health Organization Technical Consultation on Food Allergies. In line with ILSI principles aimed at fostering collaboration between stakeholders to promote consensus on science-based approaches to food safety and nutrition, the task force has played a central role since then in the development of risk assessment for food allergens. This ranged from consideration of the criteria to be applied in identifying allergens of public health concern, through methodologies to determine the relationship between dose and the proportion of allergic individuals reacting, to the nature of the observed responses. The task force also promoted the application of novel, probabilistic risk assessment methods to better delineate the impact of benchmarks, such as reference doses, and actively participated in major European food allergy projects, such as EUROPREVALL, the European Union (EU)-funded project "The prevalence, cost and basis of food allergy across Europe," and iFAAM, "Integrated approaches to food allergen and allergy risk management," also an EU-funded project. Over the years, the task force's work has evolved as answers to initial questions raised further issues: its current work program includes a review of analytical methods and how different ones can best be deployed given their strengths and limitations. Another activity, which has just commenced, aims to develop a framework for stakeholders to achieve consensus on acceptable risk.

  11. On the development of a comprehensive MC simulation model for the Gamma Knife Perfexion radiosurgery unit

    NASA Astrophysics Data System (ADS)

    Pappas, E. P.; Moutsatsos, A.; Pantelis, E.; Zoros, E.; Georgiou, E.; Torrens, M.; Karaiskos, P.

    2016-02-01

    This work presents a comprehensive Monte Carlo (MC) simulation model for the Gamma Knife Perfexion (PFX) radiosurgery unit. Model-based dosimetry calculations were benchmarked in terms of relative dose profiles (RDPs) and output factors (OFs) against corresponding EBT2 measurements. To reduce the rather prolonged computational time associated with the comprehensive PFX model MC simulations, two approximations were explored and evaluated on the grounds of dosimetric accuracy. The first consists of directional biasing of the 60Co photon emission, while the second refers to the implementation of simplified source geometric models. The effect of the dose scoring volume dimensions on OF calculation accuracy was also explored. RDP calculations for the comprehensive PFX model were found to be in agreement with corresponding EBT2 measurements. Output factors of 0.819 ± 0.004 and 0.8941 ± 0.0013 were calculated for the 4 mm and 8 mm collimator, respectively, which agree, within uncertainties, with corresponding EBT2 measurements and published experimental data. Volume averaging was found to affect OF results by more than 0.3% for scoring volume radii greater than 0.5 mm and 1.4 mm for the 4 mm and 8 mm collimators, respectively. Directional biasing of photon emission resulted in a time efficiency gain factor of up to 210 with respect to the isotropic photon emission. Although no considerable effect on relative dose profiles was detected, directional biasing led to OF overestimations which were more pronounced for the 4 mm collimator and increased with decreasing emission cone half-angle, reaching up to 6% for a 5° angle. Implementation of simplified source models revealed that omitting the sources’ stainless steel capsule significantly affects both OF results and relative dose profiles, while the aluminum-based bushing did not exhibit a considerable dosimetric effect. In conclusion, the results of this work suggest that any PFX simulation model should be benchmarked in terms of both RDP and OF results.

  12. Evaluation of radiation doses and associated risk from the Fukushima nuclear accident to marine biota and human consumers of seafood

    PubMed Central

    Fisher, Nicholas S.; Beaugelin-Seiller, Karine; Hinton, Thomas G.; Baumann, Zofia; Madigan, Daniel J.; Garnier-Laplace, Jacqueline

    2013-01-01

    Radioactive isotopes originating from the damaged Fukushima nuclear reactor in Japan following the earthquake and tsunami in March 2011 were found in resident marine animals and in migratory Pacific bluefin tuna (PBFT). Publication of this information resulted in a worldwide response that caused public anxiety and concern, although PBFT captured off California in August 2011 contained activity concentrations below those from naturally occurring radionuclides. To link the radioactivity to possible health impairments, we calculated doses, attributable to the Fukushima-derived and the naturally occurring radionuclides, to both the marine biota and human fish consumers. We showed that doses in all cases were dominated by the naturally occurring alpha-emitter 210Po and that Fukushima-derived doses were three to four orders of magnitude below 210Po-derived doses. Doses to marine biota were about two orders of magnitude below the lowest benchmark protection level proposed for ecosystems (10 µGy·h⁻¹). The additional dose from Fukushima radionuclides to humans consuming tainted PBFT in the United States was calculated to be 0.9 and 4.7 µSv for average consumers and subsistence fishermen, respectively. Such doses are comparable to, or less than, the dose all humans routinely obtain from naturally occurring radionuclides in many food items, medical treatments, air travel, or other background sources. Although uncertainties remain regarding the assessment of cancer risk at low doses of ionizing radiation to humans, the dose received from PBFT consumption by subsistence fishermen can be estimated to result in two additional fatal cancer cases per 10,000,000 similarly exposed people. PMID:23733934
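
    The fatal-cancer figure follows from linear no-threshold (LNT) arithmetic: dose times risk coefficient times population. The Python below reproduces the order of magnitude; the nominal risk coefficient is an assumed ICRP-style value, not a number taken from the paper.

        # Order-of-magnitude LNT risk arithmetic for the subsistence-fisherman dose.
        # The fatal-cancer risk coefficient is an assumption (an ICRP-style linear
        # no-threshold value of roughly 4e-2 per Sv); the abstract's own figure is
        # ~2 extra fatal cancers per 10,000,000 similarly exposed people.
        dose_Sv = 4.7e-6        # 4.7 uSv from PBFT consumption
        risk_per_Sv = 4.3e-2    # assumed LNT fatal-cancer coefficient
        population = 1e7
        print(f"expected extra fatal cancers: {dose_Sv * risk_per_Sv * population:.1f}")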

  13. Collision-kerma conversion between dose-to-tissue and dose-to-water by photon energy-fluence corrections in low-energy brachytherapy

    NASA Astrophysics Data System (ADS)

    Giménez-Alventosa, Vicent; Antunes, Paula C. G.; Vijande, Javier; Ballester, Facundo; Pérez-Calatayud, José; Andreo, Pedro

    2017-01-01

    The AAPM TG-43 brachytherapy dosimetry formalism, introduced in 1995, has become a standard for brachytherapy dosimetry worldwide; it implicitly assumes that charged-particle equilibrium (CPE) exists for the determination of absorbed dose to water at different locations, except in the vicinity of the source capsule. Subsequent dosimetry developments, based on Monte Carlo calculations or analytical solutions of transport equations, do not rely on the CPE assumption and determine directly the dose to different tissues. When relating dose to tissue and dose to water, or vice versa, it is usually assumed that the photon fluences in water and in tissues are practically identical, so that the absorbed doses in the two media can be related by their ratio of mass energy-absorption coefficients. In this work, an efficient way to correlate absorbed dose to water and absorbed dose to tissue in brachytherapy calculations at clinically relevant distances for low-energy photon-emitting seeds is proposed. A correction is introduced that is based on the ratio of the water-to-tissue photon energy-fluences. State-of-the-art Monte Carlo calculations are used to score photon fluence differential in energy in water and in various human tissues (muscle, adipose and bone), which in all cases include a realistic modelling of low-energy brachytherapy sources in order to benchmark the formalism proposed. The energy-fluence based corrections given in this work are able to correlate absorbed dose to tissue and absorbed dose to water with an accuracy better than 0.5% in the most critical cases (e.g. bone tissue).
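
    Under CPE, absorbed dose equals collision kerma, K = ∫ Φ_E E (µ_en/ρ) dE, so the tissue-to-water dose ratio can be evaluated by quadrature over the scored spectra. The Python below is a minimal sketch of that bookkeeping; the spectra and µ_en/ρ tables are user-supplied inputs, not data from the paper.

        import numpy as np

        def collision_kerma(energy_MeV, fluence_E, mu_en_over_rho):
            """Collision kerma via K = integral of Phi_E * E * (mu_en/rho) dE.

            energy_MeV     : energy grid (MeV)
            fluence_E      : photon fluence differential in energy on that grid
            mu_en_over_rho : mass energy-absorption coefficient on that grid (cm^2/g)
            """
            psi = fluence_E * energy_MeV   # energy-fluence spectrum
            return np.trapz(psi * mu_en_over_rho, energy_MeV)

        def dose_tissue_over_water(energy, phi_water, phi_tissue, muen_tissue, muen_water):
            # Ratio including the water-to-tissue energy-fluence difference the
            # paper corrects for, rather than assuming identical fluence spectra
            # in both media.
            return (collision_kerma(energy, phi_tissue, muen_tissue)
                    / collision_kerma(energy, phi_water, muen_water))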

  14. Collision-kerma conversion between dose-to-tissue and dose-to-water by photon energy-fluence corrections in low-energy brachytherapy.

    PubMed

    Giménez-Alventosa, Vicent; Antunes, Paula C G; Vijande, Javier; Ballester, Facundo; Pérez-Calatayud, José; Andreo, Pedro

    2017-01-07

    The AAPM TG-43 brachytherapy dosimetry formalism, introduced in 1995, has become a standard for brachytherapy dosimetry worldwide; it implicitly assumes that charged-particle equilibrium (CPE) exists for the determination of absorbed dose to water at different locations, except in the vicinity of the source capsule. Subsequent dosimetry developments, based on Monte Carlo calculations or analytical solutions of transport equations, do not rely on the CPE assumption and determine directly the dose to different tissues. When relating dose to tissue and dose to water, or vice versa, it is usually assumed that the photon fluences in water and in tissues are practically identical, so that the absorbed doses in the two media can be related by their ratio of mass energy-absorption coefficients. In this work, an efficient way to correlate absorbed dose to water and absorbed dose to tissue in brachytherapy calculations at clinically relevant distances for low-energy photon-emitting seeds is proposed. A correction is introduced that is based on the ratio of the water-to-tissue photon energy-fluences. State-of-the-art Monte Carlo calculations are used to score photon fluence differential in energy in water and in various human tissues (muscle, adipose and bone), which in all cases include a realistic modelling of low-energy brachytherapy sources in order to benchmark the formalism proposed. The energy-fluence based corrections given in this work are able to correlate absorbed dose to tissue and absorbed dose to water with an accuracy better than 0.5% in the most critical cases (e.g. bone tissue).

  15. Using a knowledge-based planning solution to select patients for proton therapy.

    PubMed

    Delaney, Alexander R; Dahele, Max; Tol, Jim P; Kuijper, Ingrid T; Slotman, Ben J; Verbakel, Wilko F A R

    2017-08-01

    Patient selection for proton therapy by comparing proton/photon treatment plans is time-consuming and prone to bias. RapidPlan™, a knowledge-based planning solution, uses plan libraries to model and predict organ-at-risk (OAR) dose-volume histograms (DVHs). We investigated whether RapidPlan, utilizing an algorithm based only on photon beam characteristics, could generate proton DVH predictions and whether these could correctly identify patients for proton therapy. Model-PROT and Model-PHOT comprised 30 head-and-neck cancer proton and photon plans, respectively. Proton and photon knowledge-based plans (KBPs) were made for ten evaluation patients. DVH-prediction accuracy was analyzed by comparing predicted versus achieved mean OAR doses. KBPs and manual plans were compared using salivary gland and swallowing muscle mean doses. For illustration, patients were selected for protons if the predicted Model-PHOT mean dose minus the predicted Model-PROT mean dose (ΔPrediction) for combined OARs was ≥6 Gy, and this selection was benchmarked using achieved KBP doses. Achieved and predicted Model-PROT/Model-PHOT mean dose R² was 0.95/0.98. Generally, achieved mean dose for Model-PHOT/Model-PROT KBPs was respectively lower/higher than predicted. Comparing Model-PROT/Model-PHOT KBPs with manual plans, salivary and swallowing mean doses increased/decreased by <2 Gy, on average. ΔPrediction ≥6 Gy correctly selected 4 of 5 patients for protons. Knowledge-based DVH predictions can provide efficient, patient-specific selection for protons. A proton-specific RapidPlan solution could improve results. Copyright © 2017 Elsevier B.V. All rights reserved.
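
    The selection rule itself is a one-liner. The Python below sketches it; the dict-based inputs are hypothetical, and since the abstract does not say whether "combined OARs" means a sum or an average of per-OAR differences, an average is assumed here.

        def select_for_protons(pred_photon, pred_proton, threshold_Gy=6.0):
            """Triage rule from the abstract: refer a patient for proton therapy
            when the predicted photon-minus-proton mean dose over the combined
            OARs is at least `threshold_Gy`.

            pred_photon / pred_proton: dicts mapping OAR name -> predicted mean
            dose (Gy). Averaging over OARs is an assumption of this sketch.
            """
            oars = pred_photon.keys() & pred_proton.keys()
            delta = sum(pred_photon[o] - pred_proton[o] for o in oars) / len(oars)
            return delta >= threshold_Gy, delta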

  16. Discontinuous finite element space-angle treatment of the first order linear Boltzmann transport equation with magnetic fields: Application to MRI-guided radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St Aubin, J., E-mail: joel.st.aubin@albertahealthservices.ca; Keyvanloo, A.; Fallone, B. G.

    Purpose: The advent of magnetic resonance imaging (MRI) guided radiotherapy systems demands the incorporation of the magnetic field into dose calculation algorithms of treatment planning systems. This is due to the fact that the Lorentz force of the magnetic field perturbs the path of the relativistic electrons, hence altering the dose deposited by them. Building on previous work, the authors have developed a discontinuous finite element space-angle treatment of the linear Boltzmann transport equation to accurately account for the effects of magnetic fields on radiotherapy doses. Methods: The authors present a detailed description of their new formalism and compare its accuracy to GEANT4 Monte Carlo calculations for magnetic fields parallel and perpendicular to the radiation beam at field strengths of 0.5 and 3 T for an inhomogeneous 3D slab geometry phantom comprising water, bone, and air or lung. The accuracy of the authors' new formalism was determined using a gamma analysis with a 2%/2 mm criterion. Results: Greater than 98.9% of all points analyzed passed the 2%/2 mm gamma criterion for the field strengths and orientations tested. The authors have benchmarked their new formalism against Monte Carlo in a challenging radiation transport problem with a high density material (bone) directly adjacent to a very low density material (dry air at STP), where the effects of the magnetic field dominate over collisions. Conclusions: A discontinuous finite element space-angle approach has been proven to be an accurate method for solving the linear Boltzmann transport equation with magnetic fields for cases relevant to MRI-guided radiotherapy. The authors have validated the accuracy of this novel technique against GEANT4, even in cases of strong magnetic field strengths and low density air.
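
    The 2%/2 mm gamma criterion used for the benchmarking can be stated compactly. Below is a minimal brute-force 1D global gamma implementation in Python (the paper's analysis is 3D and production codes use optimized searches; this sketch, which expects NumPy arrays, only illustrates the criterion itself).

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=2.0, dd_frac=0.02):
            """Brute-force 1D global gamma analysis (2%/2 mm by default).

            Returns the pass rate: the fraction of reference points for which
            some evaluated point satisfies gamma <= 1. Dose difference is
            normalized to the reference maximum (global gamma).
            """
            d_max = d_ref.max()
            passed = 0
            for xr, dr in zip(x_ref, d_ref):
                gamma_sq = (((x_eval - xr) / dta_mm) ** 2
                            + ((d_eval - dr) / (dd_frac * d_max)) ** 2)
                if gamma_sq.min() <= 1.0:
                    passed += 1
            return passed / len(x_ref)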

  17. SU-C-BRC-06: OpenCL-Based Cross-Platform Monte Carlo Simulation Package for Carbon Ion Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, N; Tian, Z; Pompos, A

    2016-06-15

    Purpose: Monte Carlo (MC) simulation is considered to be the most accurate method for calculation of absorbed dose and fundamental physical quantities related to biological effects in carbon ion therapy. Its long computation time impedes clinical and research applications. We have developed an MC package, goCMC, on parallel processing platforms, aiming at achieving accurate and efficient simulations for carbon therapy. Methods: goCMC was developed under the OpenCL framework. It supports transport simulation in voxelized geometry with kinetic energy up to 450 MeV/u. A Class II condensed history algorithm was employed for charged particle transport, with stopping power computed via the Bethe-Bloch equation. Secondary electrons were not transported; their energy was deposited locally. Energy straggling and multiple scattering were modeled. Production of secondary charged particles from nuclear interactions was implemented based on cross section and yield data from Geant4. They were transported via the condensed history scheme. goCMC supports scoring various quantities of interest, e.g., physical dose, particle fluence, spectrum, linear energy transfer, and positron-emitting nuclei. Results: goCMC has been benchmarked against Geant4 with different phantoms and beam energies. For 100 MeV/u, 250 MeV/u and 400 MeV/u beams impinging on a water phantom, the range difference was 0.03 mm, 0.20 mm and 0.53 mm, and the mean dose difference was 0.47%, 0.72% and 0.79%, respectively. goCMC can run on various computing devices. Depending on the beam energy and voxel size, it took 20–100 seconds to simulate 10⁷ carbon ions on an AMD Radeon GPU card. The corresponding CPU time for Geant4 with the same setup was 60–100 hours. Conclusion: We have developed an OpenCL-based cross-platform carbon MC simulation package, goCMC. Its accuracy, efficiency and portability make goCMC attractive for research and clinical applications in carbon therapy.
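
    The Bethe-Bloch stopping power underlying the condensed-history transport can be written out directly. The Python below is a sketch of that physics, not goCMC's implementation: it omits shell and density-effect corrections and assumes a mean excitation energy of about 78 eV for water.

        import math

        def bethe_bloch(E_MeV_per_u, z=6, M_u=12.0, Z_over_A=0.555, I_eV=78.0):
            """Mean electronic stopping power (MeV cm^2/g) from the Bethe-Bloch
            formula, without shell or density-effect corrections. Defaults are a
            carbon ion (z=6, A=12) in water; I ~ 78 eV is an assumed value."""
            K = 0.307075            # MeV cm^2 / mol (4*pi*N_A*r_e^2*m_e*c^2)
            me_c2 = 0.510999        # electron rest energy, MeV
            M = M_u * 931.494       # ion rest energy, MeV
            E = E_MeV_per_u * M_u   # total kinetic energy, MeV
            gamma = 1.0 + E / M
            beta2 = 1.0 - 1.0 / gamma**2
            # Maximum energy transfer to a free electron in one collision
            t_max = 2 * me_c2 * beta2 * gamma**2 / (
                1 + 2 * gamma * me_c2 / M + (me_c2 / M) ** 2)
            log_term = 0.5 * math.log(
                2 * me_c2 * beta2 * gamma**2 * t_max / (I_eV * 1e-6) ** 2)
            return K * z**2 * Z_over_A / beta2 * (log_term - beta2)

        # ~108 MeV cm^2/g for a 400 MeV/u carbon ion in water, i.e. ~11 MeV/mm
        print(f"{bethe_bloch(400.0):.1f} MeV cm^2/g")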

  18. Principles for Developing Benchmark Criteria for Staff Training in Responsible Gambling.

    PubMed

    Oehler, Stefan; Banzer, Raphaela; Gruenerbl, Agnes; Malischnig, Doris; Griffiths, Mark D; Haring, Christian

    2017-03-01

    One approach to minimizing the negative consequences of excessive gambling is staff training to reduce the rate of development of new cases of harm or disorder among customers. The primary goal of the present study was to assess suitable benchmark criteria for the training of gambling employees at casinos and lottery retailers. The study utilised the Delphi Method, a survey with one qualitative and two quantitative phases. A total of 21 invited international experts in the responsible gambling field participated in all three phases. A total of 75 performance indicators were outlined and assigned to six categories: (1) criteria of content, (2) modelling, (3) qualification of trainer, (4) framework conditions, (5) sustainability and (6) statistical indicators. Nine of the 75 indicators were rated as very important by 90% or more of the experts. Unanimous support for importance was given to indicators such as (1) comprehensibility and (2) concrete action guidance for dealing with problem gamblers. Additionally, the study examined the implementation of benchmarking, when it should be conducted, and who should be responsible. Results indicated that benchmarking should be conducted regularly, every 1–2 years, and that one institution should be clearly defined and primarily responsible for benchmarking. The results of the present study provide the basis for developing benchmark criteria for staff training in responsible gambling.

  19. A Simplified Approach for the Rapid Generation of Transient Heat-Shield Environments

    NASA Technical Reports Server (NTRS)

    Wurster, Kathryn E.; Zoby, E. Vincent; Mills, Janelle C.; Kamhawi, Hilmi

    2007-01-01

    A simplified approach has been developed whereby transient entry heating environments are reliably predicted based upon a limited set of benchmark radiative and convective solutions. Heating, pressure and shear-stress levels, non-dimensionalized by an appropriate parameter at each benchmark condition, are applied throughout the entry profile. This approach was shown to be valid based on the observation that the fully catalytic, laminar distributions examined were relatively insensitive to altitude as well as velocity throughout the regime of significant heating. In order to establish a best prediction by which to judge the results that can be obtained using a very limited benchmark set, predictions based on a series of benchmark cases along a trajectory are used. Solutions which rely only on the limited benchmark set, ideally in the neighborhood of peak heating, are compared against the resultant transient heating rates and total heat loads from the best prediction. Predictions using two or fewer benchmark cases at or near the trajectory peak heating condition yielded results within 5-10 percent of the best predictions. Thus, the method provides transient heating environments over the heat-shield face with sufficient resolution and accuracy for thermal protection system design, and also offers a significant capability to perform rapid trade studies such as the effect of different trajectories, atmospheres, or trim angle of attack on convective and radiative heating rates and loads, pressure, and shear-stress levels.
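
    The core mechanic, scaling a benchmark solution along the trajectory by the local flight state, can be sketched briefly. The report's own non-dimensionalization is not given in the abstract, so the Python below substitutes a classical Sutton-Graves-style convective correlation, q proportional to sqrt(rho) * V^3, purely for illustration.

        import math

        def convective_heating_history(traj, q_bench, rho_bench, v_bench):
            """Scale a benchmark convective heating rate along a trajectory with
            a Sutton-Graves-style correlation, q ~ sqrt(rho) * V^3 (a classical
            stand-in, not the report's own non-dimensionalization).

            traj: iterable of (time_s, rho_kg_m3, velocity_m_s) points.
            Returns a list of (time_s, heating_rate) in the units of q_bench.
            """
            scale = q_bench / (math.sqrt(rho_bench) * v_bench ** 3)
            return [(t, scale * math.sqrt(rho) * v ** 3) for t, rho, v in traj]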

  20. Benchmarking health IT among OECD countries: better data for better policy

    PubMed Central

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    Objective To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. Materials and methods The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. Results The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Discussion Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. Conclusions As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this. PMID:23721983

  1. Assessment of the effect of population and diary sampling methods on estimation of school-age children exposure to fine particles.

    PubMed

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2014-12-01

    Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC, for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposure exceeding a benchmark of concern of 25 μg/m³, due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In the case studies evaluated here, the MCC method led to a 10% higher estimate of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding the implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
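
    The statistic the study finds sensitive to diary sampling, the count of individuals with repeated benchmark exceedances, is easy to compute once a matrix of simulated daily exposures exists. The Python below uses purely synthetic exposures (not the Wake County inputs) to illustrate the bookkeeping.

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic daily exposures (ug/m^3) for 1000 simulated children over
        # 31 days; purely illustrative stand-in for the model's output.
        exposures = rng.lognormal(mean=2.4, sigma=0.35, size=(1000, 31))

        BENCHMARK = 25.0  # ug/m^3 benchmark of concern

        # Individuals with multiple days exceeding the benchmark -- the statistic
        # the abstract reports as sensitive to the diary sampling method.
        multi_day = np.sum((exposures > BENCHMARK).sum(axis=1) >= 2)
        print(f"individuals with repeated exceedances: {multi_day} of {exposures.shape[0]}")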

  2. Use of benchmarking and public reporting for infection control in four high-income countries.

    PubMed

    Haustein, Thomas; Gastmeier, Petra; Holmes, Alison; Lucet, Jean-Christophe; Shannon, Richard P; Pittet, Didier; Harbarth, Stephan

    2011-06-01

    Benchmarking of surveillance data for health-care-associated infection (HCAI) has been used for more than three decades to inform prevention strategies and improve patients' safety. In recent years, public reporting of HCAI indicators has been mandated in several countries because of an increasing demand for transparency, although many methodological issues surrounding benchmarking remain unresolved and are highly debated. In this Review, we describe developments in benchmarking and public reporting of HCAI indicators in England, France, Germany, and the USA. Although benchmarking networks in these countries are derived from a common model and use similar methods, approaches to public reporting have been more diverse. The USA and England have predominantly focused on reporting of infection rates, whereas France has put emphasis on process and structure indicators. In Germany, HCAI indicators of individual institutions are treated confidentially and are not disseminated publicly. Although evidence for a direct effect of public reporting of indicators alone on incidence of HCAIs is weak at present, it has been associated with substantial organisational change. An opportunity now exists to learn from the different strategies that have been adopted. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. An approach to radiation safety department benchmarking in academic and medical facilities.

    PubMed

    Harvey, Richard P

    2015-02-01

    Based on anecdotal evidence and networking with colleagues at other facilities, it has become evident that some radiation safety departments are not adequately staffed and radiation safety professionals need to increase their staffing levels. Discussions with management regarding radiation safety department staffing often lead to similar conclusions. Management acknowledges the Radiation Safety Officer (RSO) or Director of Radiation Safety's concern but asks the RSO to provide benchmarking and justification for additional full-time equivalents (FTEs). The RSO must determine a method to benchmark and justify additional staffing needs while struggling to maintain a safe and compliant radiation safety program. Benchmarking and justification are extremely important tools that are commonly used to demonstrate the need for increased staffing in other disciplines and are tools that can be used by radiation safety professionals. Parameters that most RSOs would expect to be positive predictors of radiation safety staff size generally are and can be emphasized in benchmarking and justification report summaries. Facilities with large radiation safety departments tend to have large numbers of authorized users, be broad-scope programs, be subject to increased controls regulations, have large clinical operations, have significant numbers of academic radiation-producing machines, and have laser safety responsibilities.

  4. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, T.; Laville, C.; Dyrda, J.

    2012-07-01

    The sensitivities of the k-eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of existing and newly developed sensitivity analysis methods. (authors)
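
    The quantity being benchmarked is the relative sensitivity coefficient S = (sigma/k) * dk/dsigma. The Python below sketches a brute-force central-difference estimate; the participating codes use far more efficient adjoint-based methods, and the k-eff solver here is any user-supplied callable standing in for a transport solve.

        def keff_sensitivity(keff_func, sigma, rel_step=0.01):
            """Relative sensitivity S = (sigma/k) * dk/dsigma by central difference.

            keff_func: callable returning k-eff for a given cross-section value.
            In practice this is a transport solve (TSUNAMI, MONK, ...); any
            user-supplied function stands in for it here.
            """
            h = rel_step * sigma
            k0 = keff_func(sigma)
            dk_dsigma = (keff_func(sigma + h) - keff_func(sigma - h)) / (2.0 * h)
            return sigma / k0 * dk_dsigma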

  5. High-Level Ab Initio Calculations of Intermolecular Interactions: Heavy Main-Group Element π-Interactions.

    PubMed

    Krasowska, Małgorzata; Schneider, Wolfgang B; Mehring, Michael; Auer, Alexander A

    2018-05-02

    This work reports high-level ab initio calculations and a detailed analysis of the nature of intermolecular interactions of heavy main-group element compounds and π systems. For this purpose we have chosen a set of benchmark molecules of the form MR₃, in which M = As, Sb, or Bi, and R = CH₃, OCH₃, or Cl. Several methods for the description of weak intermolecular interactions are benchmarked, including DFT-D, DFT-SAPT, MP2, and high-level coupled cluster methods in the DLPNO-CCSD(T) approximation. Using local energy decomposition (LED) and an analysis of the electron density, details of the nature of this interaction are unraveled. The results yield insight into the nature of dispersion and donor-acceptor interactions in this type of system, including systematic trends in the periodic table, and also provide a benchmark for dispersion interactions in heavy main-group element compounds. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana

    2017-02-01

    In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, and constitute significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets form the basis for recording, development, and validation of our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost to repeat many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, and are under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation, but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized by nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next-generation reactor design, safety analysis requirements, and all other front- and back-end activities contributing to the overall nuclear fuel cycle where quality neutronics calculations are paramount.

  7. Paediatric International Nursing Study: using person-centred key performance indicators to benchmark children's services.

    PubMed

    McCance, Tanya; Wilson, Val; Kornman, Kelly

    2016-07-01

    The aim of the Paediatric International Nursing Study was to explore the utility of key performance indicators in developing person-centred practice across a range of services provided to sick children. The objective addressed in this paper was evaluating the use of these indicators to benchmark services internationally. This study builds on primary research, which produced indicators that were considered novel both in terms of their positive orientation and use in generating data that privileges the patient voice. This study extends this research through wider testing on an international platform within paediatrics. The overall methodological approach was a realistic evaluation of the implementation of the key performance indicators, combined with an integrated development and evaluation methodology. The study involved children's wards/hospitals in Australia (six sites across three states) and Europe (seven sites across four countries). Qualitative and quantitative methods were used during the implementation process; however, this paper reports the quantitative data only, which used survey, observations and documentary review. The findings demonstrate the quality of care being delivered to children and their families across different international sites. The benchmarking does, however, highlight some differences between paediatric and general hospitals, and between the different key performance indicators across all the sites. The findings support the use of the key performance indicators as a novel method to benchmark services internationally. Whilst the data collected across 20 paediatric sites suggest services are more similar than different, benchmarking illuminates variations that encourage a critical dialogue about what works and why. The transferability of the key performance indicators and measurement framework across different settings has significant implications for practice. The findings offer an approach to benchmarking and celebrating the successes within practice, while learning from partners across the globe in further developing person-centred cultures. © 2016 John Wiley & Sons Ltd.

  8. Dose audit for patients undergoing two common radiography examinations with digital radiology systems.

    PubMed

    İnal, Tolga; Ataç, Gökçe

    2014-01-01

    We aimed to determine the radiation doses delivered to patients undergoing general examinations using computed or digital radiography systems in Turkey. Radiographs of 20 patients undergoing posteroanterior chest X-ray and of 20 patients undergoing anteroposterior kidney-ureter-bladder radiography were evaluated in five X-ray rooms at four local hospitals in the Ankara region. Currently, almost all radiology departments in Turkey have switched from conventional radiography systems to computed radiography or digital radiography systems. Patient dose was measured for both systems. The results were compared with published diagnostic reference levels (DRLs) from the European Union and International Atomic Energy Agency. The average entrance surface doses (ESDs) for chest examinations exceeded established international DRLs at two of the X-ray rooms in a hospital with computed radiography. All of the other ESD measurements were approximately equal to or below the DRLs for both examinations in all of the remaining hospitals. Improper adjustment of the exposure parameters, uncalibrated automatic exposure control systems, and failure of the technologists to choose exposure parameters properly were problems we noticed during the study. This study is an initial attempt at establishing local DRL values for digital radiography systems, and will provide a benchmark so that the authorities can establish reference dose levels for diagnostic radiology in Turkey.

  9. Design of Unstructured Adaptive (UA) NAS Parallel Benchmark Featuring Irregular, Dynamic Memory Accesses

    NASA Technical Reports Server (NTRS)

    Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe the design of a new method for measuring the performance of modern computer systems when solving scientific problems that feature irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall-clock time spent on inter-processor communication, which eases the load-balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load-balance issues of a reference implementation will be described in detail in future reports.

  10. Electron-helium S-wave model benchmark calculations. II. Double ionization, single ionization with excitation, and double excitation

    NASA Astrophysics Data System (ADS)

    Bartlett, Philip L.; Stelbovics, Andris T.

    2010-02-01

    The propagating exterior complex scaling (PECS) method is extended to all four-body processes in electron impact on helium in an S-wave model. Total and energy-differential cross sections are presented with benchmark accuracy for double ionization, single ionization with excitation, and double excitation (to autoionizing states) for incident-electron energies from threshold to 500 eV. While the PECS three-body cross sections for this model given in the preceding article [Phys. Rev. A 81, 022715 (2010)] are in good agreement with other methods, there are considerable discrepancies for these four-body processes. With this model we demonstrate the suitability of the PECS method for the complete solution of the electron-helium system.

  11. Analyzing the BBOB results by means of benchmarking concepts.

    PubMed

    Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C

    2015-01-01

    We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the "best" one? The second is: which algorithm should I use for my real-world problem? Both questions are connected, and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments, a first step toward answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework, and we derive insights regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background, and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by previously proposed test problem characteristics, finding that this is not always the case.
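
    As a sketch of the rank-aggregation step discussed above, the following applies a simple Borda count; the abstract does not name the consensus rule the authors analyze, and the per-problem rankings here are hypothetical:

    ```python
    # Sketch: aggregate per-problem algorithm rankings into a Borda-count
    # consensus. Rankings are hypothetical, not actual BBOB results.
    from collections import defaultdict

    # rank position 0 = best; one ranking per benchmark problem
    per_problem_rankings = [
        ["CMA-ES", "BFGS", "NelderMead"],
        ["CMA-ES", "NelderMead", "BFGS"],
        ["BFGS", "CMA-ES", "NelderMead"],
    ]

    scores = defaultdict(int)
    for ranking in per_problem_rankings:
        n = len(ranking)
        for position, algorithm in enumerate(ranking):
            scores[algorithm] += n - position  # higher total = better consensus rank

    consensus = sorted(scores, key=scores.get, reverse=True)
    print("Consensus ranking:", consensus)
    ```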

  12. Human Fecal Source Identification: Real-Time Quantitative PCR Method Standardization

    EPA Science Inventory

    Method standardization or the formal development of a protocol that establishes uniform performance benchmarks and practices is necessary for widespread adoption of a fecal source identification approach. Standardization of a human-associated fecal identification method has been...

  13. Adsorption structures and energetics of molecules on metal surfaces: Bridging experiment and theory

    NASA Astrophysics Data System (ADS)

    Maurer, Reinhard J.; Ruiz, Victor G.; Camarillo-Cisneros, Javier; Liu, Wei; Ferri, Nicola; Reuter, Karsten; Tkatchenko, Alexandre

    2016-05-01

    Adsorption geometry and stability of organic molecules on surfaces are key parameters that determine the observable properties and functions of hybrid inorganic/organic systems (HIOSs). Despite many recent advances in precise experimental characterization and improvements in first-principles electronic structure methods, reliable databases of structures and energetics for large adsorbed molecules are largely absent. In this review, we present such a database for a range of molecules adsorbed on metal single-crystal surfaces. The systems we analyze include noble-gas atoms, conjugated aromatic molecules, carbon nanostructures, and heteroaromatic compounds adsorbed on five different metal surfaces. The overall objective is to establish a diverse benchmark dataset that enables an assessment of current and future electronic structure methods, and motivates further experimental studies that provide ever more reliable data. Specifically, the benchmark structures and energetics from experiment are compared with the recently developed van der Waals (vdW) inclusive density-functional theory (DFT) method, DFT + vdWsurf. In comparison to 23 adsorption heights and 17 adsorption energies from experiment, we find mean absolute deviations of 0.06 Å and 0.16 eV, respectively. This confirms the DFT + vdWsurf method as an accurate and efficient approach for treating HIOSs. A detailed discussion identifies remaining challenges to be addressed in future development of electronic structure methods, for which the benchmark database presented here may serve as an important reference.
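
    The headline experiment-theory comparison reduces to averaging absolute differences; a minimal sketch with hypothetical adsorption heights (not values from the review):

    ```python
    # Sketch: mean absolute deviation between calculated and experimental
    # adsorption heights. Values are hypothetical, not from the review.
    calculated_heights_A = [2.95, 3.25, 2.35, 3.30]
    experimental_heights_A = [2.91, 3.20, 2.25, 3.38]

    deviations = [abs(calc - expt)
                  for calc, expt in zip(calculated_heights_A, experimental_heights_A)]
    mad = sum(deviations) / len(deviations)
    print(f"Mean absolute deviation: {mad:.2f} Å")
    ```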

  14. A round-robin gamma stereotactic radiosurgery dosimetry interinstitution comparison of calibration protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drzymala, R. E., E-mail: drzymala@wustl.edu; Alvarez, P. E.; Bednarz, G.

    2015-11-15

    Purpose: Absorbed dose calibration for gamma stereotactic radiosurgery is challenging due to the unique geometric conditions, dosimetry characteristics, and nonstandard field size of these devices. Members of the American Association of Physicists in Medicine (AAPM) Task Group 178 on Gamma Stereotactic Radiosurgery Dosimetry and Quality Assurance have participated in a round-robin exchange of calibrated measurement instrumentation and phantoms exploring two approved and two proposed calibration protocols or formalisms on ten gamma radiosurgery units. The objectives of this study were to benchmark and compare new formalisms to existing calibration methods, while maintaining traceability to U.S. primary dosimetry calibration laboratory standards. Methods: Nine institutions made measurements using ten gamma stereotactic radiosurgery units in three different 160 mm diameter spherical phantoms [acrylonitrile butadiene styrene (ABS) plastic, Solid Water, and liquid water] and in air using a positioning jig. Two calibrated miniature ionization chambers and one calibrated electrometer were circulated for all measurements. Reference dose-rates at the phantom center were determined using the well-established AAPM TG-21 or TG-51 dose calibration protocols and using two proposed dose calibration protocols/formalisms: an in-air protocol and a formalism proposed by the International Atomic Energy Agency (IAEA) working group for small and nonstandard radiation fields. Each institution's results were normalized to the dose-rate determined at that institution using the TG-21 protocol in the ABS phantom. Results: Percentages of dose-rates within 1.5% of the reference dose-rate (TG-21 + ABS phantom) for the eight chamber-protocol-phantom combinations were the following: 88% for TG-21, 70% for TG-51, 93% for the new IAEA nonstandard-field formalism, and 65% for the new in-air protocol. Averages and standard deviations for dose-rates over all measurements relative to the TG-21 + ABS dose-rate were 0.999 ± 0.009 (TG-21), 0.991 ± 0.013 (TG-51), 1.000 ± 0.009 (IAEA), and 1.009 ± 0.012 (in-air). There were no statistically significant differences (i.e., p > 0.05) between the two ionization chambers for the TG-21 protocol applied to all dosimetry phantoms. The mean results using the TG-51 protocol were notably lower than those for the other dosimetry protocols, with a standard deviation 2–3 times larger. The in-air protocol was not statistically different from TG-21 for the A16 chamber in the liquid water or ABS phantoms (p = 0.300 and p = 0.135) but was statistically different from TG-21 for the PTW chamber in all phantoms (p = 0.006 for Solid Water, 0.014 for liquid water, and 0.020 for ABS). Results of the IAEA formalism were statistically different from TG-21 results only for the combination of the A16 chamber with the liquid water phantom (p = 0.017). In the latter case, dose-rates measured with the two protocols differed by only 0.4%. For other phantom-ionization-chamber combinations, the new IAEA formalism was not statistically different from TG-21. Conclusions: Although further investigation is needed to validate the new protocols for other ionization chambers, these results can serve as a reference to quantitatively compare different calibration protocols and ionization chambers if a particular method is chosen by a professional society to serve as a standardized calibration protocol.
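
    A minimal sketch of the per-institution normalization described in the Methods, assuming made-up dose-rates: each reading is divided by that institution's TG-21 + ABS reference before protocols are compared across institutions:

    ```python
    # Sketch: normalize each institution's dose-rates to its own TG-21 + ABS
    # reference, then summarize per protocol. Numbers are made up.
    import statistics

    measurements = {  # institution -> protocol -> measured dose-rate (Gy/min)
        "Inst1": {"TG-21": 2.501, "TG-51": 2.470, "IAEA": 2.498, "in-air": 2.525},
        "Inst2": {"TG-21": 1.803, "TG-51": 1.790, "IAEA": 1.805, "in-air": 1.822},
    }

    by_protocol = {}
    for institution, rates in measurements.items():
        reference = rates["TG-21"]  # that institution's TG-21 + ABS reference
        for protocol, rate in rates.items():
            by_protocol.setdefault(protocol, []).append(rate / reference)

    for protocol, ratios in by_protocol.items():
        print(f"{protocol}: mean = {statistics.mean(ratios):.3f}, "
              f"sd = {statistics.stdev(ratios):.3f}")
    ```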

  15. Evaluation of neutron skyshine from a cyclotron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huyashi, K.; Nakamura, T.

    1984-06-01

    The dose distribution and the spectrum variation of neutrons due to the skyshine effect have been measured with various detectors in the environment surrounding the cyclotron of the Institute for Nuclear Study, University of Tokyo. The source neutrons were produced by stopping a 52-MeV proton beam in a carbon beam stopper; they were extracted upward through an opening in the concrete shield surrounding the cyclotron and leaked into the atmosphere through the cyclotron building. The dose distribution and the spectrum of neutrons near the beam stopper were also measured in order to characterize the skyshine source. The measured skyshine neutron spectra and dose distribution were analyzed with two codes, MMCR2 and SKYSHINE-II; the calculated results are in good agreement with the experiment. Valuable characteristics of this experiment are the determination of the energy spectrum and dose distribution of the source neutrons, and the measurement of skyshine neutrons from an actual large-scale accelerator building to the exclusion of direct neutrons transported through the air. This experiment should be useful as a benchmark experiment on the skyshine phenomenon.

  16. GENOPT 2016: Design of a generalization-based challenge in global optimization

    NASA Astrophysics Data System (ADS)

    Battiti, Roberto; Sergeyev, Yaroslav; Brunato, Mauro; Kvasov, Dmitri

    2016-10-01

    While comparing results on benchmark functions is a widely used practice for demonstrating the competitiveness of global optimization algorithms, fixed benchmarks can lead to a negative data-mining process in which algorithms become over-tuned to the specific benchmark instances. To avoid this effect, the GENOPT contest benchmarks are based on randomized function generators, designed for scientific experiments, with fixed statistical characteristics but individual variation of the generated instances. The generators are available to participants for off-line tests and online tuning schemes, but the final competition is based on random seeds communicated in the last phase through a cooperative process. A brief presentation and discussion of the methods and results obtained in the framework of the GENOPT contest are given in this contribution.
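
    A toy illustration of the seeded-generator idea: every seed yields a distinct instance with the same broad statistical character, so off-line tuning cannot memorize the final competition instances. The function family below is invented for illustration and is not the GENOPT generator:

    ```python
    # Sketch: a randomized benchmark-function generator. Each seed yields a
    # different instance of the same family (a shifted, randomly weighted sum
    # of quadratic and oscillatory terms).
    import math
    import random

    def make_instance(seed, dim=5):
        rng = random.Random(seed)
        shift = [rng.uniform(-4, 4) for _ in range(dim)]
        weights = [rng.uniform(0.5, 2.0) for _ in range(dim)]

        def f(x):
            return sum(w * ((xi - s) ** 2 + 0.1 * math.sin(5 * (xi - s)))
                       for w, xi, s in zip(weights, x, shift))
        return f

    f_train = make_instance(seed=42)        # available for off-line tuning
    f_final = make_instance(seed=20161007)  # seed revealed only in the final phase
    print(f_train([0.0] * 5), f_final([0.0] * 5))
    ```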

  17. Unstructured Adaptive Meshes: Bad for Your Memory?

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob

    2003-01-01

    This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular, dynamic memory accesses. Such a benchmark is important and necessary because: 1) problems with localized error sources benefit from adaptive nonuniform meshes; 2) certain machines perform poorly on such problems; and 3) parallel implementation may provide further performance improvement but is difficult. Examples of problem components that involve irregular, dynamic memory access include: 1) the heat transfer problem; 2) the heat source term; 3) the spectral element method; 4) basis functions; 5) elemental discrete equations; 6) global discrete equations. The nonconforming mesh and the mortar element method are covered in greater detail in this presentation.

  18. MicroRNA array normalization: an evaluation using a randomized dataset as the benchmark.

    PubMed

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays.
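
    The evaluation logic reported here (true positive rate and false discovery rate of the non-randomized calls, scored against the randomized benchmark) fits in a few lines; the marker IDs below are hypothetical:

    ```python
    # Sketch: score markers called significant in the non-randomized dataset
    # against those from the randomized (benchmark) dataset. IDs are hypothetical.
    benchmark_hits = {"miR-21", "miR-155", "miR-34a", "miR-200c"}
    candidate_hits = {"miR-21", "miR-155", "miR-10b", "miR-7"}

    true_positives = candidate_hits & benchmark_hits
    tpr = len(true_positives) / len(benchmark_hits)   # true positive rate
    fdr = 1 - len(true_positives) / len(candidate_hits)  # false discovery rate
    print(f"TPR = {tpr:.2f}, FDR = {fdr:.2f}")
    ```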

  19. Change of urinary fluoride and bone metabolism indicators in the endemic fluorosis areas of Southern China after supplying low fluoride public water

    PubMed Central

    2013-01-01

    Background: Few studies have evaluated health impacts, especially biomarker changes, following implementation of a new environmental policy. This study examined changes in water fluoride, urinary fluoride (UF), and bone metabolism indicators in children after supplying low fluoride public water in endemic fluorosis areas of Southern China. We also assessed the relationship between UF and serum osteocalcin (BGP), calcitonin (CT), alkaline phosphatase (ALP), and bone mineral density to identify the most sensitive bone metabolism indicators related to fluoride exposure. Methods: Four fluorosis-endemic villages (intervention villages) in Guangdong, China were randomly selected to receive low-fluoride water. One non-endemic fluorosis village with similar socio-economic status, living conditions, and health care access was selected as the control group. In each village, 120 children aged 6-12 years were randomly chosen from local schools for the study. Water and urinary fluoride content, as well as serum BGP, CT, ALP and bone mineral density, were measured by standard methods and compared between the children residing in the intervention villages and the control village. The benchmark dose (BMD) and benchmark dose lower limit (BMDL) were calculated for each bone damage indicator. Results: After the water source change, fluoride concentrations in drinking water in all intervention villages (A-D) were significantly reduced to 0.11 mg/l, similar to that in the control village (E). Except for Village A, where the water change had been in place for only 6 years, urinary fluoride concentrations in children of the intervention villages were lower than or comparable to those in the control village after 10 years of the new public water supply. The values of almost all bone indicators in children living in Villages B-D, and of ALP in Village A, were either lower than or similar to those in the control village after the intervention. CT and BGP are sensitive bone metabolism indicators related to UF. In assessing the temporal trend of the different abnormal bone indicators after the intervention, bone mineral density showed the most stable and the lowest abnormal rates over time. Conclusions: Our results suggest that supplying low fluoride public water in Southern China has been successful, as measured by the reduction of fluoride in water and urine and the return of various bone indicators to normal levels. A comparison of four bone indicators showed CT and BGP to be the most sensitive. PMID:23425550
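
    The benchmark dose calculation mentioned in the Methods can be sketched under a quantal-linear dose-response model, one standard BMD model form; the fitted slope and its confidence bound below are hypothetical, not values from the study:

    ```python
    # Sketch: benchmark dose (BMD) under a quantal-linear model,
    #   P(d) = g + (1 - g) * (1 - exp(-b * d)),
    # where the extra risk equals the BMR at the BMD, giving
    #   BMD = -ln(1 - BMR) / b.
    # The fitted slope and its confidence bound are hypothetical.
    import math

    BMR = 0.10          # 10% extra risk, a common benchmark response
    b_hat = 0.025       # hypothetical fitted slope (per mg/kg-day)
    b_upper95 = 0.041   # hypothetical 95% upper confidence bound on the slope

    bmd = -math.log(1 - BMR) / b_hat
    bmdl = -math.log(1 - BMR) / b_upper95  # larger slope -> lower dose bound
    print(f"BMD  = {bmd:.1f} mg/kg-day")
    print(f"BMDL = {bmdl:.1f} mg/kg-day")
    ```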
